paper_id stringlengths 19 21 | paper_title stringlengths 8 170 | paper_abstract stringlengths 8 5.01k | paper_acceptance stringclasses 18 values | meta_review stringlengths 29 10k | label stringclasses 3 values | review_ids list | review_writers list | review_contents list | review_ratings list | review_confidences list | review_reply_tos list |
|---|---|---|---|---|---|---|---|---|---|---|---|
iclr_2022_LgjKqSjDzr | SALT : Sharing Attention between Linear layer and Transformer for tabular dataset | Handling tabular data with deep learning models is a challenging problem despite their remarkable success in vision and language processing applications. Therefore, many practitioners still rely on classical models such as gradient boosting decision trees (GBDTs) rather than deep networks due to their superior performance with tabular data. In this paper, we propose a novel hybrid deep network architecture for tabular data, dubbed SALT (Sharing Attention between Linear layer and Transformer). The proposed SALT consists of two blocks: Transformers and linear layers blocks that take advantage of shared attention matrices. The shared attention matrices enable transformers and linear layers to closely cooperate with each other, and it leads to improved performance and robustness. Our algorithm outperforms tree-based ensemble models and previous deep learning methods in multiple benchmark datasets. We further demonstrate the robustness of the proposed SALT with semi-supervised learning and pre-training with small dataset scenarios. | Reject | This paper proposes a method to use Transformers with tabular data by sharing attention. Reviewers raise significant concerns about the motivation, writing, and experimental results. The authors did not submit a response. Hence I recommend rejection. | train | [
"e0CrrbNMzTH",
"jJP87vyPFd7",
"ODX7CeUiutA",
"VSrZ9Qk09jc"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a new tabular deep learning architecture based on sharing attention matrices enable transformers and linear layers. Comparisons with other tabular learning models on various benchmarks are demonstrated. I have serious concerns about the paper, as listed below: \n\n- The paper writing is very po... | [
3,
3,
5,
5
] | [
4,
3,
4,
4
] | [
"iclr_2022_LgjKqSjDzr",
"iclr_2022_LgjKqSjDzr",
"iclr_2022_LgjKqSjDzr",
"iclr_2022_LgjKqSjDzr"
] |
iclr_2022_C5u6Z9voQ1 | Evaluating the Robustness of Time Series Anomaly and Intrusion Detection Methods against Adversarial Attacks | Time series anomaly and intrusion detection are extensively studied in statistics, economics, and computer science. Over the years, numerous methods have been proposed for time series anomaly and intrusion detection using deep learning-based methods. Many of these methods demonstrate state-of-the-art performance on benchmark datasets, giving the false impression that these systems are robust and deployable in practical and industrial scenarios. In this paper, we demonstrate that state-of-the-art anomaly and intrusion detection methods can be easily fooled by adding adversarial perturbations to the sensor data. We use different scoring metrics such as prediction errors, anomaly, and classification scores over several public and private datasets belong to aerospace applications, automobiles, server machines, and cyber-physical systems. We evaluate state-of-the-art deep neural networks (DNNs) and graph neural networks (GNNs) methods, which claim to be robust against anomalies and intrusions, and find their performance can drop to as low as 0\% under adversarial attacks from Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) methods. To the best of our knowledge, we are the first to demonstrate the vulnerabilities of anomaly and intrusion detection systems against adversarial attacks. Our code is available here: https://anonymous.4open.science/r/ICLR298 | Reject | The paper investigates attacks against time series analysis methods such as GNN and DNN for anomaly and intrusion detection. Standard attacks such as FGSM and PGD are extended for the time series domain and evaluated on several datasets including automotive, aerospace and resource utilization datasets. While the authors claim to be the first to investigate such attacks, some related work was not considered in the paper, which was pointed out by reviewers. 
Also, some other weaknesses of the proposed method were pointed out, e.g., its focus on feature-space perturbations. Hence, while acknowledging the importance and the novelty of this paper's contributions, the reviewers agree that the paper must be better positioned in the context of the related work in order to be accepted. | train | [
"7YB8xJX1lbB",
"VECO5B4FzWQ",
"yZlcuzt7tZn",
"OBclaDUi9ax",
"07kRCvupi5",
"KGMv7CfNHHg",
"orTa0Uzif-9",
"r6oNAsG7FQD"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We would like to express our gratitude to the reviewer for their insightful comments and suggestions, such as attack transferability and basic defense methods. We intend to include them in a future version of this work.",
" Appreciate the authors provide detailed response. \nHowever, there are some unresolved p... | [
-1,
-1,
5,
-1,
-1,
-1,
5,
5
] | [
-1,
-1,
4,
-1,
-1,
-1,
4,
5
] | [
"VECO5B4FzWQ",
"orTa0Uzif-9",
"iclr_2022_C5u6Z9voQ1",
"r6oNAsG7FQD",
"orTa0Uzif-9",
"yZlcuzt7tZn",
"iclr_2022_C5u6Z9voQ1",
"iclr_2022_C5u6Z9voQ1"
] |
iclr_2022_Ihxw4h-JnC | Stochastic Induction of Decision Trees with Application to Learning Haar Tree | Decision trees are a convenient and established approach for any supervised learning task. Decision trees are used in a broad range of applications from medical imaging to computer vision. Decision trees are trained by greedily splitting the leaf nodes into a split and two leaf nodes until a certain stopping criterion is reached. The procedure of splitting a node consists of finding the best feature and threshold that minimizes a criterion. The criterion minimization problem is solved through an exhaustive search algorithm. However, this exhaustive search algorithm is very expensive, especially, if the number of samples and features are high. In this paper, we propose a novel stochastic approach for the criterion minimization. Asymptotically, the proposed algorithm is faster than conventional exhaustive search by several orders of magnitude. It is further shown that the proposed approach minimizes an upper bound for the criterion. Experimentally, the algorithm is compared with several other related state-of-the-art decision tree learning methods, including the baseline non-stochastic approach. The proposed algorithm outperforms every other decision tree learning (including online and fast) approaches and performs as well as the baseline algorithm in terms of accuracy and computational cost, despite being non-deterministic. For empirical evaluation, we apply the proposed algorithm to learn a Haar tree over MNIST dataset that consists of over $200,000$ features and $60,000$ samples. This tree achieved a test accuracy of $94\%$ over MNIST which is $4\%$ higher than any other known axis-aligned tree. This result is comparable to the performance of oblique trees, while providing a significant speed-up at both inference and training times. | Reject | All reviewers are very consistent with their evaluation of the paper. 
The discussion phase did not change their initial evaluation. Therefore, I also recommend rejecting the paper. | train | [
"VSrv3PFYqVy",
"NAW3hs_6BcQ",
"a0JUvGOvu6",
"lBKOO9XfSXT",
"0TTdQ5e_CgC",
"xwH1kp_miq7",
"S3gFo-KOjKm",
"PQqx6xILIZk",
"K_9dsw9U-c7",
"wAMDfb1i5Q5",
"W9mDeYlWIDI",
"lClHLVbY67U"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Sorry, I had missed Figures 8 and 9. However, they don't seem to be discussed anywhere, and it seems to remain unclear how to choose values for the hyperparameters.",
" We highly appreciate your comments and that you have read the paper throughly. We will consider comparing the algorithm with the mentioned mode... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
4
] | [
"W9mDeYlWIDI",
"lClHLVbY67U",
"W9mDeYlWIDI",
"wAMDfb1i5Q5",
"wAMDfb1i5Q5",
"wAMDfb1i5Q5",
"K_9dsw9U-c7",
"iclr_2022_Ihxw4h-JnC",
"iclr_2022_Ihxw4h-JnC",
"iclr_2022_Ihxw4h-JnC",
"iclr_2022_Ihxw4h-JnC",
"iclr_2022_Ihxw4h-JnC"
] |
iclr_2022_-0LuSWi6j4 | Mind Your Bits and Errors: Prioritizing the Bits that Matter in Variational Autoencoders | Good likelihoods do not imply great sample quality. However, the precise manner in which models trained to achieve good likelihoods fail at sample quality remains poorly understood. In this work, we consider the task of image generative modeling with variational autoencoders and posit that the nature of high-dimensional image data distributions poses an intrinsic challenge. In particular, much of the entropy in these natural image distributions is attributable to visually imperceptible information. This signal dominates the training objective, giving models an easy way to achieve competitive likelihoods without successful modeling of the visually perceptible bits. Based on this hypothesis, we decompose the task of generative modeling explicitly into two steps: we first prioritize the modeling of visually perceptible information to achieve good sample quality, and then subsequently model the imperceptible information---the bulk of the likelihood signal---to achieve good likelihoods. Our work highlights the well-known adage that "not all bits are created equal" and demonstrates that this property can and should be exploited in the design of variational autoencoders. | Reject | The authors discuss the disconnect between log-likelihood and sample quality of VAEs and relate it to an undesirable focus of the model on high-frequency signals. They propose to alleviate it through a two-stage training scheme for VAEs.
As it is, the paper does not explain its contributions well, especially compared to the rate-distortion balance discussion in "Fixing a Broken ELBO" by Alemi et al. (2018) (see [reviews sh3z](https://openreview.net/forum?id=-0LuSWi6j4&noteId=D52ninjThn1), [7Pio](https://openreview.net/forum?id=-0LuSWi6j4&noteId=9qMQNUGk6bx), and [LBJj](https://openreview.net/forum?id=-0LuSWi6j4&noteId=gyG86hghxsU)), and lacks the experiments to back up its claim (see [LBJj](https://openreview.net/forum?id=-0LuSWi6j4&noteId=gyG86hghxsU), and [KKon](https://openreview.net/forum?id=-0LuSWi6j4&noteId=zeFApaHliSv)). While the authors have made a more precise statement about their contributions in their rebuttal, the writing remains unclear.
I recommend this submission for rejection. | train | [
"5IjgXGUXI91",
"cAsfylqFYFp",
"9qMQNUGk6bx",
"BbT_zdGZBFK",
"rxJFd0PunG",
"yUWw21p6wXU",
"rkvqVr91kLX",
"5f8TIHdbEP_",
"DD3PTJzKy_A",
"K-n92oy6L09",
"wjwSkEdhvn",
"D52ninjThn1",
"gyG86hghxsU",
"zeFApaHliSv"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the response! And we look forward to continuing this discussion beyond the conference review. I think it is possible that we may be proposing related hypotheses of the same underlying phenomenon at different levels of abstraction (akin to macroscopic vs microscopic theories). I look forward to having a... | [
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3,
3,
5
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"cAsfylqFYFp",
"yUWw21p6wXU",
"iclr_2022_-0LuSWi6j4",
"5f8TIHdbEP_",
"rkvqVr91kLX",
"zeFApaHliSv",
"D52ninjThn1",
"gyG86hghxsU",
"9qMQNUGk6bx",
"wjwSkEdhvn",
"iclr_2022_-0LuSWi6j4",
"iclr_2022_-0LuSWi6j4",
"iclr_2022_-0LuSWi6j4",
"iclr_2022_-0LuSWi6j4"
] |
iclr_2022_oTQNAU_g_AZ | DAIR: Disentangled Attention Intrinsic Regularization for Safe and Efficient Bimanual Manipulation | We address the problem of safely solving complex bimanual robot manipulation tasks with sparse rewards. Such challenging tasks can be decomposed into sub-tasks that are accomplishable by different robots concurrently or sequentially for better efficiency. While previous reinforcement learning approaches primarily focus on modeling the compositionality of sub-tasks, two fundamental issues are largely ignored particularly when learning cooperative strategies for two robots: (i) domination, i.e., one robot may try to solve a task by itself and leaves the other idle; (ii) conflict, i.e., one robot can interrupt another's workspace when executing different sub-tasks simultaneously, which leads to unsafe collisions. To tackle these two issues, we propose a novel technique called disentangled attention, which provides an intrinsic regularization for two robots to focus on separate sub-tasks and objects. We evaluate our method on five bimanual manipulation tasks. Experimental results show that our proposed intrinsic regularization successfully avoids domination and reduces conflicts for the policies, which leads to significantly more efficient and safer cooperative strategies than all the baselines. Our project page with videos is at https://bimanual-attention.github.io/. | Reject | The paper presents a method for collaborative task solving via an attention mechanism. The method is evaluated on manipulation task in simulation.
The reviewers agree that the paper is well written and the idea is novel and intuitive. They also share concerns about the limited applicability (too many assumptions and only two robots) of the method and that it contains unjustified claims, and therefore does not meet the ICLR bar.
Constructive feedback for the next version of the manuscript:
- The authors should decide whether this is a robot learning paper (a learning-based method that advances robotics, specifically robot manipulation) or a machine learning paper (a method that advances cooperative multi-agent learning). The decision should drive the publication venue and the baselines. If the target is robot learning, the paper should consider adding on-robot experiments. If the target is an ML method, then more baselines and benchmarks from the MARL community should be added to the evaluation section.
- The authors should be careful not to confuse safety guarantees, which have theoretical and analytical implications, with empirical evaluation without collisions.
- Evaluate the learning on more than 3 seeds. | train | [
"A9Ie9PBdE-h",
"Jbwv5bvzdrc",
"A965PRYuhnX",
"KoxlWyzaHap",
"vJZiWPWZyE6"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Given the missing author response and negative reviews across the reviewers, I would maintain my current rating, weak rejection.",
"The paper presents an attention mechanism that also serves as intrinsic regularisation for a two-agent reinforcement learning framework of collaborative tasks. The method is evalua... | [
-1,
3,
5,
5,
6
] | [
-1,
5,
4,
3,
3
] | [
"A965PRYuhnX",
"iclr_2022_oTQNAU_g_AZ",
"iclr_2022_oTQNAU_g_AZ",
"iclr_2022_oTQNAU_g_AZ",
"iclr_2022_oTQNAU_g_AZ"
] |
iclr_2022_NK5hHymegzo | On the One-sided Convergence of Adam-type Algorithms in Non-convex Non-concave Min-max Optimization | Adam-type methods, the extension of adaptive gradient methods, have shown great performance in the training of both supervised and unsupervised machine learning models. In particular, Adam-type optimizers have been widely used empirically as the default tool for training generative adversarial networks (GANs). On the theory side, however, despite the existence of theoretical results showing the efficiency of Adam-type methods in minimization problems, the reason of their wonderful performance still remains absent in GAN's training. In existing works, the fast convergence has long been considered as one of the most important reasons and multiple works have been proposed to give a theoretical guarantee of the convergence to a critical point of min-max optimization algorithms under certain assumptions. In this paper, we firstly argue empirically that in GAN's training, Adam does not converge to a critical point even upon successful training: Only the generator is converging while the discriminator's gradient norm remains high throughout the training. We name this one-sided convergence. Then we bridge the gap between experiments and theory by showing that Adam-type algorithms provably converge to a one-sided first order stationary points in min-max optimization problems under the one-sided MVI condition. We also empirically verify that such one-sided MVI condition is satisfied for standard GANs after trained over standard data sets. To the best of our knowledge, this is the very first result which provides an empirical observation and a strict theoretical guarantee on the one-sided convergence of Adam-type algorithms in min-max optimization. | Reject | This paper studies the convergence of Adam-type algorithms (two variants of AMSGrad in particular) in min-max problems that satisfy a one-sided "Minty variational inequality" condition.
The reviewers identified several weaknesses in the paper and the authors did not provide a rebuttal to these concerns so there was consensus to reject the paper. | test | [
"IeoBXVF102s",
"z8sI2BiuX5M",
"U7v5H1OPOVC",
"sQ7jjmSDNQ4"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper observes the one-sided convergence phenomenon of GAN's training, and proposes the one-sided MVI condition suitable for this problem. Then the convergence analysis is provided for the proposed AMSGRAD-EG and AMSGrad-EG-DRD algorithms. Pros:\n- The paper is well written and easy to follow.\n- The convergen... | [
5,
6,
3,
5
] | [
4,
3,
5,
4
] | [
"iclr_2022_NK5hHymegzo",
"iclr_2022_NK5hHymegzo",
"iclr_2022_NK5hHymegzo",
"iclr_2022_NK5hHymegzo"
] |
iclr_2022_per0G3dnkYh | Marginal Tail-Adaptive Normalizing Flows | Learning the tail behavior of a distribution is a notoriously difficult problem. The number of samples from the tail is small, and deep generative models, such as normalizing flows, tend to concentrate on learning the body of the distribution. In this paper, we focus on improving the ability of normalizing flows to correctly capture the tail behavior and, thus, form more accurate models. We prove that the marginal tailedness of a triangular flow can be controlled via the tailedness of the marginals of the base distribution of the normalizing flow. This theoretical insight leads us to a novel type of triangular flows based on learnable base distributions and data-driven permutations. Since the proposed flows preserve marginal tailedness, we call them marginal tail-adaptive flows (mTAFs). An empirical analysis on synthetic data shows that mTAF improves on the robustness and efficiency of vanilla flows and—motivated by our theory—allows to successfully generate tail samples from the distributions. More generally, our experiments affirm that a careful choice of the base distribution is an effective way to introducing inductive biases to normalizing flows. | Reject | This paper addresses the performance of normalizing flows in the tail of the distribution. It does this by controlling tail properties in the marginals of the high-dimensional distribution. The paper is well-motivated, and the key theoretical insight has merit. However, the general perspective and methodology appears to be incremental relative to past results. Furthermore, some concerns over correctness remain after discussion with authors. Also, clear baselines and more realistic settings are lacking in the experimental results. Thus, while the paper generally has promising ideas on a pertinent topic, it appears to be not developed enough to merit dissemination. | train | [
"8efn0VlHgZr",
"ypeXDertvKt",
"JrRa7MvepAb",
"taExrvvYnhs",
"KhXW6mkAHt",
"fBKF_mFZRN",
"AEam6n-fgEp",
"-gObd5Huwo6",
"AjVmIGXaECS"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" dear authors,\n\nI do appreciate your general response. However, I still think that this paper is not up to standard for a top conference at the moment. I will therefore not change my score. ",
" Thank you for the response that you provided in the rebuttal -- I'm confirming that I've read its explanation as wel... | [
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"AjVmIGXaECS",
"taExrvvYnhs",
"AEam6n-fgEp",
"-gObd5Huwo6",
"iclr_2022_per0G3dnkYh",
"iclr_2022_per0G3dnkYh",
"iclr_2022_per0G3dnkYh",
"iclr_2022_per0G3dnkYh",
"iclr_2022_per0G3dnkYh"
] |
iclr_2022_7x_47XJULn | Federated Learning with Heterogeneous Architectures using Graph HyperNetworks | Standard Federated Learning (FL) techniques are limited to clients with identical network architectures. As a result, inter-organizational collaboration is severely restricted when both data privacy and architectural proprietary are required. In this work, we propose a new FL framework that removes this limitation by adopting a graph hypernetwork as a shared knowledge aggregator. A property of the graph hyper network is that it can adapt to various computational graphs, thereby allowing meaningful parameter sharing across models. Unlike existing solutions, our framework makes no use of external data and does not require clients to disclose their model architecture. Compared with distillation-based and non-graph hypernetwork baselines, our method performs notably better on standard benchmarks. We additionally show encouraging generalization performance to unseen architectures. | Reject | Meta Review of Federated Learning with Heterogeneous Architectures using Graph HyperNetworks
This work investigates a method for federated learning in a neural architecture-agnostic setting. They do this by using a graph hypernetwork to predict the weights of given neural network architectures (which are not exactly known at the outset). The authors conduct federated learning experiments to demonstrate good performance on several real datasets, and also show that the trained GHN model can generalize (somewhat) to unseen architectures (which are mainly in the ResNet family). Personally, as AC, I find the results very promising, and the experiments show that GHNs are highly applicable to real-world applications. But the reviewers outline several weaknesses in the discussion that make it difficult to recommend acceptance of this paper for ICLR 2022.
The main weaknesses of the work are that application is mainly focused on a narrow family of ResNet architectures (can it be shown to go beyond this? If not, can the writing be improved to show that this is useful enough for many applications?) Reviewer U48w suggested improvements to the generalization experiments, and other details that can be addressed in the writing. Reviewer Tk9o mentioned that this work can be seen as a straightforward application of GHNs (limited novelty), while other reviewers do acknowledge the novelty of the work. I recommend improving the writing to clearly address this and defend why this is not a straightforward application of previous work. With these improvements, I'm confident that this work will be accepted at a future ML conference or journal.
Even though I cannot recommend acceptance, both myself and other reviewers are looking forward to seeing improved versions of this work for publication in the future. As jPp2 also noted, “Previous works on federated learning either focus on the mechanism of parameter aggregation or the aspect of privacy. This paper opens a new direction in FL where clients may not be willing to share their unique model designs. From this perspective, I think this paper has promising impact on the research field of FL.” Good luck! | train | [
"yhUsLR5bujy",
"baDfH4-l5Pe",
"6qURSSgr8SB",
"FnOt1NKimTS",
"z1g38QODIL2",
"WkFRKyaqB8K",
"l1x04ADRkQG",
"Hc7UHYtFtmv"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the authors' efforts in addressing the raised concerns. I have read the author response and other reviews and keep my score due to the following reasons. W4 and W5 have not been addressed. For W3, i.e., Graph importance (“without graph”), as shown in Table 1 in the submission, the accuracy of the propo... | [
-1,
-1,
-1,
-1,
3,
6,
6,
3
] | [
-1,
-1,
-1,
-1,
4,
3,
3,
4
] | [
"z1g38QODIL2",
"l1x04ADRkQG",
"WkFRKyaqB8K",
"iclr_2022_7x_47XJULn",
"iclr_2022_7x_47XJULn",
"iclr_2022_7x_47XJULn",
"iclr_2022_7x_47XJULn",
"iclr_2022_7x_47XJULn"
] |
iclr_2022_KTF1h2XWKZA | Multi-batch Reinforcement Learning via Sample Transfer and Imitation Learning | Reinforcement learning (RL), especially deep reinforcement learning, has achieved impressive performance on different control tasks. Unfortunately, most online reinforcement learning algorithms require a large number of interactions with the environment to learn a reliable control policy. This assumption of the availability of repeated interactions with the environment does not hold for many real-world applications due to safety concerns, the cost/inconvenience related to interactions, or the lack of an accurate simulator to enable effective sim2real training. As a consequence, there has been a surge in research addressing this issue, including batch reinforcement learning. Batch RL aims to learn a good control policy from a previously collected dataset. Most existing batch RL algorithms are designed for a single batch setting and assume that we have a large number of interaction samples in fixed data sets. These assumptions limit the use of batch RL algorithms in the real world. We use transfer learning to address this data efficiency challenge. This approach is evaluated on multiple continuous control tasks against several robust baselines. Compared with other batch RL algorithms, the methods described here can be used to deal with more general real-world scenarios. | Reject | The main identified issues were the limited contribution and use cases, poor writing and missing baseline comparisons and more needed experiments. These issues were not addressed satisfactorily by the rebuttal and hence, I believe the paper should be revised by the authors and undergo another review process at another conference. I therefore recommend rejection. | train | [
"NzXe0VpRYY",
"2Jc07peg6W2",
"KQPDSv4U32Q",
"RL4vav7NGn2",
"8gK-3RHWBVv",
"RtaytuoZmEE",
"cEeu_KTSKeA",
"fV1vakJGXZv",
"q6Y_yWUx-I5",
"6r6EM1497Z",
"IERLEiqWj2",
"LS-55yT5Iv"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear authors,\n\nThank you for the rebuttal and the comments. \n\n> Our work is mainly focused on BAIL, including extending it to BAIL+ and MBAIL. Thus, how to use the proposed solution for other batch RL algorithms is not well discussed. Thanks for your suggestion. We agree that this work can be more impactful i... | [
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
3,
5
] | [
"RtaytuoZmEE",
"fV1vakJGXZv",
"cEeu_KTSKeA",
"iclr_2022_KTF1h2XWKZA",
"q6Y_yWUx-I5",
"IERLEiqWj2",
"RL4vav7NGn2",
"6r6EM1497Z",
"LS-55yT5Iv",
"iclr_2022_KTF1h2XWKZA",
"iclr_2022_KTF1h2XWKZA",
"iclr_2022_KTF1h2XWKZA"
] |
iclr_2022_5SgoJKayTvs | Intervention Adversarial Auto-Encoder | In this paper we propose a new method to stabilize the training process of the latent variables of adversarial auto-encoders, which we name Intervention Adversarial auto-encoder (IVAAE). The main idea is to introduce a sequence of distributions that bridge the distribution of the learned latent variable and its prior distribution. We theoretically and heuristically demonstrate that such bridge-like distributions, realized by a multi-output discriminator, have an effect on guiding the initial latent distribution towards the target one and hence stabilizing the training process. Several different types of the bridge distributions are proposed. We also apply a novel use of Stein variational gradient descent (SVGD), by which point assemble develops in a smooth and gradual fashion. We conduct experiments on multiple real-world datasets. It shows that IVAAE enjoys a more stable training process and achieves a better generating performance compared to the vanilla Adversarial auto-encoder (AAE) | Reject | This paper aims at improving AAEs with an intervention loss. While the topic is important, the reviewers agree that
- The paper has poor clarity,
- The related work is not adequately put into perspective,
- There are concerns with technical correctness,
- Experimental evidence is lacking.
As the authors have not addressed any of these concerns, the paper cannot be accepted in its current form. | val | [
"4InGgIcyZXP",
"VunKqP972qE",
"ljL_Ex7GRy"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper aims to improve AAEs with the intervention loss (Liang et al., 2020) and SVGD (by constructing \"bridge distributions\") and demonstrates some empirical results. Pros\n+ The topic is interesting.\n\nCons\n- The technical correctness is very concerning. There are many claims left unsupported (via referenc... | [
3,
3,
3
] | [
3,
5,
4
] | [
"iclr_2022_5SgoJKayTvs",
"iclr_2022_5SgoJKayTvs",
"iclr_2022_5SgoJKayTvs"
] |
iclr_2022_tFQyjbOz34 | Detecting Modularity in Deep Neural Networks | A neural network is modular to the extent that parts of its computational graph (i.e. structure) can be represented as performing some comprehensible subtask relevant to the overall task (i.e. functionality). Are modern deep neural networks modular? How can this be quantified? In this paper, we consider the problem of assessing the modularity exhibited by a partitioning of a network's neurons. We propose two proxies for this: importance, which reflects how crucial sets of neurons are to network performance; and coherence, which reflects how consistently their neurons associate with features of the inputs. To measure these proxies, we develop a set of statistical methods based on techniques conventionally used to interpret individual neurons. We apply the proxies to partitionings generated by spectrally clustering a graph representation of the network's neurons with edges determined either by network weights or correlations of activations. We show that these partitionings, even ones based only on weights (i.e. strictly from non-runtime analysis), reveal groups of neurons that are important and coherent. These results suggest that graph-based partitioning can reveal modularity and help us understand how deep neural networks function. | Reject | Paper studies if DNNs are modular and proposes statistical methods to quantify modularity.
The proposed approach clusters the neurons of the network using spectral clustering applied to a graph that is weighted by similarity between the neurons.
While the reviewers find the question of modularity relevant, they raise the issue that the results are inconclusive regarding the main stated contribution of the paper (i.e., whether modularity is appropriately measured). After discussion, some concerns were answered. However, the main problem of inconclusive results stands. Therefore, this borderline paper is rejected. | train | [
"fudxhoiMZgV",
"Yuqskq_O_dF",
"iMT7AGQF8l",
"PSewbmYLzWA",
"dizPl7k9qXs",
"OblE-WSrK20",
"InXHxiMkJsX",
"v2jc5JQM-B",
"4YZ3eD4NbW"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the constructive feedback. \n\nRe: “However, while they claim these results \"measure modularity\", it is unclear to me whether this is the case, or are they merely measuring redundancy in the network… It is not clear how to go from this evidence of redundancy to saying the network is modular, which... | [
-1,
-1,
-1,
-1,
-1,
5,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
2
] | [
"InXHxiMkJsX",
"4YZ3eD4NbW",
"v2jc5JQM-B",
"OblE-WSrK20",
"iclr_2022_tFQyjbOz34",
"iclr_2022_tFQyjbOz34",
"iclr_2022_tFQyjbOz34",
"iclr_2022_tFQyjbOz34",
"iclr_2022_tFQyjbOz34"
] |
iclr_2022_SN2bkl9f69 | Multi-Tailed, Multi-Headed, Spatial Dynamic Memory refined Text-to-Image Synthesis | Synthesizing high-quality, realistic images from text-descriptions is a challenging task, and current methods synthesize images from text in a multi-stage manner, typically by first generating a rough initial image and then refining image details at subsequent stages. However, existing methods that follow this paradigm suffer from three important limitations. Firstly, they synthesize initial images without attempting to separate image attributes at a word-level. As a result, object attributes of initial images (that provide a basis for subsequent refinement) are inherently entangled and ambiguous in nature. Secondly, by using common text-representations for all regions, current methods prevent us from interpreting text in fundamentally different ways at different parts of an image. Different image regions are therefore only allowed to assimilate the same type of information from text at each refinement stage. Finally, current methods generate refinement features only once at each refinement stage and attempt to address all image aspects in a single shot. This single-shot refinement limits the precision with which each refinement stage can learn to improve the prior image. Our proposed method introduces three novel components to address these shortcomings: (1) An initial generation stage that explicitly generates separate sets of image features for each word n-gram. (2) A spatial dynamic memory module for refinement of images. (3) An iterative multi-headed mechanism to make it easier to improve upon multiple image aspects. Experimental results demonstrate that our Multi-Headed Spatial Dynamic Memory image refinement with our Multi-Tailed Word-level Initial Generation (MSMT-GAN) performs favourably against the previous state of the art on the CUB and COCO datasets. | Reject | The reviewers' evaluations of this paper are borderline/negative.
The AC considered the reviews, rebuttal, and the paper itself, and concurs with the reviewers. The AC found that the paper is an extension of previous work DM-GAN (DM-GAN: Dynamic Memory Generative Adversarial Networks for Text-to-Image Synthesis, CVPR 2019, https://arxiv.org/pdf/1904.01310.pdf). This work uses the word features in addition to sentence features at the first stage of generation, while DM-GAN and other previous work don’t use word features in the first stage, but use them in the later stages when the feature resolution is higher. The authors improve the dynamic memory in DM-GAN into spatial dynamic memory, and also change the image refinement process in DM-GAN into an iterative refinement. The proposed multi-tailed word-level initial generation, spatial dynamic memory, and iterative refinement are incremental changes to DM-GAN. Moreover, the proposed structure almost doubles the parameter size of DM-GAN (shown in Table 2), yet the evaluation results on COCO are similar to DM-GAN with only minor improvements. It is not clear whether the performance improvement comes from the increased number of parameters or the architecture design. Especially on the CUB dataset with a limited number of images, the model can easily overfit with a larger number of parameters. The proposed method shares a similar network structure and dynamic memory blocks with DM-GAN, except for a few changes. Overall, the AC finds this paper not suitable for acceptance at ICLR in its present form. | train | [
"O_BUMoyS5Gh",
"Z-Wn8WlEgS2",
"c-oqV4Ae1iU"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposed a new text-to-image generation method that would like to tackle the entangled textual inputs by multi-tailed word-level initial generation, create the region-contextualized text representations for region-aware image refinement, and propose an iterative multi-headed mechanism to allow multiple ... | [
5,
5,
6
] | [
4,
4,
4
] | [
"iclr_2022_SN2bkl9f69",
"iclr_2022_SN2bkl9f69",
"iclr_2022_SN2bkl9f69"
] |
iclr_2022_W6lWkLqOss | Class-Weighted Evaluation Metrics for Imbalanced Data Classification | Class distribution skews in imbalanced datasets may lead to models with prediction bias towards majority classes, making fair assessment of classifiers a challenging task. Metrics such as Balanced Accuracy are commonly used to evaluate a classifier’s prediction performance under such scenarios. However, these metrics fall short when classes vary in importance. In this paper, we propose a simple and general-purpose evaluation framework for imbalanced data classification that is sensitive to arbitrary skews in class cardinalities and importances. Experiments with several state-of-the-art classifiers tested on real-world datasets from three different domains show the effectiveness of our framework – not only in evaluating and ranking classifiers, but also training them. | Reject | Although two reviewers have given score 6, the other reviews have clearly indicated that the paper is not good enough for ICLR. The paper does not give any new insight into the considered problem, many existing papers are ignored, and the only interesting part, about evaluation of label importance, is rather shallow, borrowing ideas from other fields without any deep discussion or analysis. | train | [
"88jTQIuPsjd",
"CyQ2aMS0S_G",
"D570omRhDvw",
"2OM2Lx3lSDR",
"lalIMcWkYW5",
"Hax8x-vDKq6",
"LVA6nE0aqXl",
"iImzP317-3i",
"-K6Su6OmD0r",
"Kqnh87Q4FU9",
"0H-wQ5-BjIo",
"H-xdzfKameH",
"HmKinajGUst"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" While it is true that a linear combination of class-conditioned scores for multi-class evaluation is simple and general, it just isn't a research contribution. In my estimation, the reason you haven't found a reference for this idea is that it is obvious and widely used -- I literally see this basic idea in busin... | [
-1,
1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
6
] | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
4
] | [
"-K6Su6OmD0r",
"iclr_2022_W6lWkLqOss",
"Hax8x-vDKq6",
"HmKinajGUst",
"H-xdzfKameH",
"0H-wQ5-BjIo",
"-K6Su6OmD0r",
"Kqnh87Q4FU9",
"CyQ2aMS0S_G",
"iclr_2022_W6lWkLqOss",
"iclr_2022_W6lWkLqOss",
"iclr_2022_W6lWkLqOss",
"iclr_2022_W6lWkLqOss"
] |
iclr_2022_fTYeefgXReA | Equivariant Heterogeneous Graph Networks | Many real-world datasets include multiple distinct types of entities and relations, and so they are naturally best represented by heterogeneous graphs. However, the most common forms of neural networks operating on graphs either assume that their input graphs are homogeneous, or they convert heterogeneous graphs into homogeneous ones, losing valuable information in the process. Any neural network that acts on graph data should be equivariant or invariant to permutations of nodes, but this is complicated when there are multiple distinct node and edge types. With this as motivation, we design graph neural networks that are composed of linear layers that are maximally expressive while being equivariant only to permutations of nodes within each type. We demonstrate their effectiveness on heterogeneous graph node classification and link prediction benchmarks. | Reject | This paper proposes a design of GNNs that are amenable to operating on heterogeneous graphs. The proposed model introduces numerous operations over the adjacency matrix and combines them as a final aggregation result. Experiments are conducted to show that the proposed method outperforms some baseline methods.
The submission suffers from incremental novelty, missing important references, and unconvincing experiments.
All reviewers leaned toward rejecting this submission both before and after the rebuttal.
"ApA6WKb96YP",
"lmkLKObReil",
"b8EKd8CA72",
"D85X-DV_LYl",
"-2fv4Ml6x7N"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We greatly thank the reviewers for their well-considered and practical feedback. In particular, all reviews noted a lack of sufficient explanations of why our method performs better on some datasets and tasks while underperforming in others. We are investigating this, and will address these concerns in a future r... | [
-1,
5,
5,
5,
5
] | [
-1,
4,
4,
2,
4
] | [
"iclr_2022_fTYeefgXReA",
"iclr_2022_fTYeefgXReA",
"iclr_2022_fTYeefgXReA",
"iclr_2022_fTYeefgXReA",
"iclr_2022_fTYeefgXReA"
] |
iclr_2022_5MbRzxoCAql | Fight fire with fire: countering bad shortcuts in imitation learning with good shortcuts | When operating in partially observed settings, it is important for a control policy to fuse information from a history of observations. However, a naive implementation of this approach has been observed repeatedly to fail for imitation-learned policies, often in surprising ways, and to the point of sometimes performing worse than when using instantaneous observations alone. We observe that behavioral cloning policies acting on single observations and observation histories each have their strengths and drawbacks, and combining them optimally could achieve the best of both worlds. Motivated by this, we propose a simple model combination approach inspired by human decision making: we first compute a coarse action based on the instantaneous observation, and then refine it into a final action using historical information. Our experiments show that this outperforms all baselines on CARLA autonomous driving from images and various MuJoCo continuous control tasks.
 | Reject | The reviewers unanimously do not accept the paper, because it is (a) not well-written and (b) experimentally not convincing, but it addresses a nice problem. I suggest that the authors address the issues in a subsequent paper, and resubmit to one of the main conferences. | train | [
"lggBHdU-_e_",
"frjxy-O7dv",
"NdAaaQbMcbB",
"WNlpYhscuql",
"o26guNWMCx",
"mzShqtZVCcu",
"dKrq2Yj1ZHl",
"bjJZQnevVHZ"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We really appreciate your interest and kind suggestions.\n\n1. “the BC-SO policy could be trained independently”\n\nWe performed this ablation experiment where we train BC-SO first. Please refer to “w/o end-to-end” in Table3 In our ablation. Training the BC-SO policy first does indeed yield similar performance to... | [
-1,
-1,
-1,
-1,
3,
3,
5,
3
] | [
-1,
-1,
-1,
-1,
2,
4,
4,
4
] | [
"mzShqtZVCcu",
"bjJZQnevVHZ",
"dKrq2Yj1ZHl",
"o26guNWMCx",
"iclr_2022_5MbRzxoCAql",
"iclr_2022_5MbRzxoCAql",
"iclr_2022_5MbRzxoCAql",
"iclr_2022_5MbRzxoCAql"
] |
iclr_2022_0d1mLPC2q2 | Understanding the Success of Knowledge Distillation -- A Data Augmentation Perspective | Knowledge distillation (KD) is a general neural network training approach that uses a teacher model to guide a student model. Many works have explored the rationale for its success. However, its interplay with data augmentation (DA) has not been well understood so far. In this paper, we are motivated by an interesting observation in classification: KD loss can take more advantage of a DA method than cross-entropy loss \emph{simply by training for more iterations}. We present a generic framework to explain this interplay between KD and DA. Inspired by it, we enhance KD via stronger data augmentation schemes named TLmixup and TLCutMix. Furthermore, an even stronger and efficient DA approach is developed specifically for KD based on the idea of active learning. The findings and merits of our method are validated with extensive experiments on CIFAR-100, Tiny ImageNet, and ImageNet datasets. We achieve new state-of-the-art accuracy by using the original KD loss armed with stronger augmentation schemes, compared to existing state-of-the-art methods that employ more advanced distillation losses. We also show that, by combining our approaches with the advanced distillation losses, we can advance the state-of-the-art even further. In addition to very promising performance, this paper importantly sheds light on explaining the success of knowledge distillation. The interaction of KD and DA methods we have discovered can inspire more powerful KD algorithms. | Reject | All of the reviewers recommended rejecting this paper.
There were concerns that the underlying research questions being probed were not expressed clearly enough.
Reviewers were concerned that the experimental work was not sufficient to warrant acceptance.
Other concerns included the technical depth of the paper and the degree to which related work was discussed, placed in context, and compared against empirically.
The AC recommends rejecting this paper. | train | [
"K2tHdVPU2C_",
"KqvUQxIcJX2",
"ftlVyBIpgHE",
"2GcAjR1YXQe",
"yxos1jEH7N",
"Z3X1dpW5tAC",
"H4d4CB9ArqG"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" As acknowledged by the author response, this paper could be much improved by adding rigorous explanations of how interplay of KD losses and CE losses is related to the success of KD, providing insightful observations why the proposed method has a better top-1 accuracy with a worse top-5 accuracy, etc.",
"- This... | [
-1,
3,
-1,
-1,
-1,
3,
5
] | [
-1,
5,
-1,
-1,
-1,
4,
4
] | [
"ftlVyBIpgHE",
"iclr_2022_0d1mLPC2q2",
"H4d4CB9ArqG",
"Z3X1dpW5tAC",
"KqvUQxIcJX2",
"iclr_2022_0d1mLPC2q2",
"iclr_2022_0d1mLPC2q2"
] |
iclr_2022_pQ02Y-onvZA | $\sbf{\delta^2}$-exploration for Reinforcement Learning | Effectively tackling the \emph{exploration-exploitation dilemma} is still a major challenge in reinforcement learning.
Uncertainty-based exploration strategies developed in the bandit setting could theoretically offer a principled way to trade off exploration and exploitation, but applying them to the general reinforcement learning setting is impractical due to their requirement to represent posterior distributions over models, which is computationally intractable in generic sequential decision tasks.
Recently, \emph{Sample Average Uncertainty (SAU)} was developed as an alternative method to tackle exploration in bandit problems in a scalable way.
What makes SAU particularly efficient is that it only depends on the value predictions, meaning that it does not need to rely on maintaining model posterior distributions.
In this work we propose \emph{$\delta^2$-exploration}, an exploration strategy that extends SAU from bandits to the general sequential Reinforcement Learning scenario.
We empirically study $\delta^2$-exploration in the tabular as well as in the Deep Q-learning case, proving its strong practical advantage and wide adaptability to complex reward models such as those deployed in modern Reinforcement Learning. | Reject | Although the reviewers found the idea of the work interesting, they all think it is not ready for publication. The experiments do not properly support the claims. Discussion on the connection to some related work is missing. And also the proposed method is not well motivated. I suggest the authors to take the reviewers' comments into account, revise their work and prepare it for future venues. | train | [
"csHzdZlKi5",
"C-X3t3J-Wg_",
"pLeSDF2LUD",
"K2UqbmtMk25",
"iJcpIwooiiS",
"rcueii66NHZ",
"PbwbbUrwzj1",
"zZZG36vvkJf",
"LuhEpQHahIb",
"oB55QndWcV",
"_tTnPFXtKcx"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies exploration bonus in practical deep RL based on Sample Average Uncertainty (SAU) and upper confidence bound (UCB). SAU is a recently studied novel uncertainty quantification that works for rather arbitrary estimators. Previous paper has studied how to use SAU to derive UCB-type bonus in multi-ar... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
3
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"iclr_2022_pQ02Y-onvZA",
"zZZG36vvkJf",
"K2UqbmtMk25",
"iJcpIwooiiS",
"_tTnPFXtKcx",
"oB55QndWcV",
"LuhEpQHahIb",
"csHzdZlKi5",
"iclr_2022_pQ02Y-onvZA",
"iclr_2022_pQ02Y-onvZA",
"iclr_2022_pQ02Y-onvZA"
] |
iclr_2022_KDAEc2nai83 | Human-Level Control without Server-Grade Hardware | Deep Q-Network (DQN) marked a major milestone for reinforcement learning, demonstrating for the first time that human-level control policies could be learned directly from raw visual inputs via reward maximization. Even years after its introduction, DQN remains highly relevant to the research community since many of its innovations have been adopted by successor methods. Nevertheless, despite significant hardware advances in the interim, DQN's original Atari 2600 experiments remain extremely costly to replicate in full. This poses an immense barrier to researchers who cannot afford state-of-the-art hardware or lack access to large-scale cloud computing resources. To facilitate improved access to deep reinforcement learning research, we introduce a DQN implementation that leverages a novel concurrent and synchronized execution framework designed to maximally utilize a heterogeneous CPU-GPU desktop system. With just one NVIDIA GeForce GTX 1080 GPU, our implementation reduces the training time of a 200-million-frame Atari experiment from 25 hours to just 9 hours. The ideas introduced in our paper should be generalizable to a large number of off-policy deep reinforcement learning methods. | Reject | This paper introduces a variant of DQN optimized for desktop environments to make large scale experiments more feasible for anyone.
This paper was close. The reviewers appreciated the effort and motivation, but in the end the reviewers all seemed to think that the paper was not ready. The main contribution is framed as making DQN training more feasible, but the reviewers expected the paper to show examples of what the workflow for another architecture would look like and ideally present results for domains beyond Atari. In addition, several reviewers thought the paper could be more precise about (1) ruling out speed differences due to hardware and low-level software, and (2) contextualizing the speedups reported---does 3x matter, what should we expect, etc.
This is certainly an interesting direction. The AC personally thinks that if the authors take some steps to address the points above this will be a great and potentially high impact paper. | test | [
"1l9s-6ZKdMj",
"U_ieyboTw3f",
"zF0_kvUjKlv",
"nzo0pHPvtFG",
"OCo1dFcYys",
"twuLA9u0u2B",
"p54xc9rXmLR",
"YbSiCuKQLN4",
"xOLlMsyxxUN",
"zMCnON2puC",
"gAZa2dj2H9u",
"yxDTIGeIkQn",
"-1p-AhmVAJ0",
"yZZT-k-GD9C",
"YuHdO5CNWQj",
"OEBjtErVgLk",
"ebtKF7B2elf",
"JibvM5-glwh",
"_OCsP1wfId_... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_rev... | [
" Thanks for your response and clarifications!\n\n*deep RL methods do commonly use the type of parallelism we described previously*\n\nI am glad you were able to get to the bottom of this question. I would recommend clarifying this point in the next update of the paper.\n\n*Our hardware also uses an Intel i7 proces... | [
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
3,
5
] | [
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
4
] | [
"U_ieyboTw3f",
"nzo0pHPvtFG",
"OCo1dFcYys",
"xOLlMsyxxUN",
"zMCnON2puC",
"iclr_2022_KDAEc2nai83",
"YuHdO5CNWQj",
"OEBjtErVgLk",
"gAZa2dj2H9u",
"twuLA9u0u2B",
"-1p-AhmVAJ0",
"JibvM5-glwh",
"yZZT-k-GD9C",
"_OCsP1wfId_",
"j6ypjBkLkG",
"ebtKF7B2elf",
"iclr_2022_KDAEc2nai83",
"iclr_2022... |
iclr_2022_c9IvZqZ8SNI | Learning Structure from the Ground up---Hierarchical Representation Learning by Chunking | From learning to play the piano to speaking a new language, reusing and recombining previously acquired representations enables us to master complex skills and easily adapt to new environments. Inspired by the Gestalt principle of grouping by proximity and theories of chunking in cognitive science, we propose a hierarchical chunking model (HCM). HCM learns representations from non-i.i.d sequential data from the ground up by first discovering the minimal atomic sequential units as chunks. As learning progresses, a hierarchy of chunk representation is acquired by chunking previously learned representations into more complex representations guided by sequential dependence. We provide learning guarantees on an idealized version of HCM, and demonstrate that HCM learns meaningful and interpretable representations in visual, temporal, visual-temporal domains and language data. Furthermore, the interpretability of the learned chunks enables flexible transfer between environments that share partial representational structure. Taken together, our results show how cognitive science in general and theories of chunking in particular could inform novel and more interpretable approaches to representation learning. | Reject | This paper develops an approach to learning hierarchical representations from sequential data. The reviewers were very positive about the overall approach, finding it well motivated and interesting with strong potential, and thought that the paper was extremely well written with clear examples throughout. There was a good back-and-forth between the reviewers and the authors, discussing several aspects of the paper and providing constructive suggestions for improvement. 
In particular, the reviewers suggested improvements in terms of independence testing, comparison to further baselines, further experiments, and other improvements as detailed in the reviews. The authors were extremely receptive of these suggestions, which is to be commended and is very much appreciated, and in a response state that they are planning to take the time needed to revise this paper before publication. | train | [
"jKYq4UJH1_O",
"Oj4klj-TrGR",
"whvSXoFVSIe",
"gFz2scg1Si7",
"r8Nf58Xe767",
"BmxIBpklRhR",
"aHAUgBUMok3",
"EfDljkb8HxO",
"laAPNX91UF",
"wTixXDmf7p",
"4o4QVslSUV",
"dii89aXe-V7",
"mFiFOXmBtBR"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you to the authors for a very thoughtful reply, the clarifications are greatly appreciated. \n\nI think reviewer zfT1 provided some great suggestions for comparisons with SOTA. \n\nWith regards to chunking low-level features, perhaps some sort vector quantization method could be of use (e.g. VQ-VAE) for lea... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
5,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
4
] | [
"r8Nf58Xe767",
"aHAUgBUMok3",
"gFz2scg1Si7",
"BmxIBpklRhR",
"wTixXDmf7p",
"mFiFOXmBtBR",
"dii89aXe-V7",
"4o4QVslSUV",
"iclr_2022_c9IvZqZ8SNI",
"iclr_2022_c9IvZqZ8SNI",
"iclr_2022_c9IvZqZ8SNI",
"iclr_2022_c9IvZqZ8SNI",
"iclr_2022_c9IvZqZ8SNI"
] |
iclr_2022_1iDVz-khM4P | Neural Networks Playing Dough: Investigating Deep Cognition With a Gradient-Based Adversarial Attack | Discovering adversarial examples has shaken our trust in the reliability of deep learning. Even though brilliant works have been devoted to understanding and fixing this vulnerability, fundamental questions (e.g. the mysterious generalization of adversarial examples across models and training sets) remain unanswered. This paper tests the hypothesis that it is not the neural networks failing in learning that causes adversarial vulnerability, but their different perception of the presented data. And therefore, adversarial examples should be semantic-sensitive signals which can provide us with an exceptional opening to understanding neural network learning. To investigate this hypothesis, I performed a gradient-based attack on fully connected feed-forward and convolutional neural networks, instructing them to minimally evolve controlled inputs into adversarial examples for all the classes of the MNIST and Fashion-MNIST datasets. Then I abstracted adversarial perturbations from these examples. The perturbations unveiled vivid and recurring visual structures, unique to each class and persistent over parameters of abstraction methods, model architectures, and training configurations. Furthermore, these patterns proved to be explainable and derivable from the corresponding dataset. This finding explains the generalizability of adversarial examples by, semantically, tying them to the datasets. In conclusion, this experiment not only resists interpretation of adversarial examples as deep learning failure but on the contrary, demystifies them in the form of supporting evidence for the authentic learning capacity of neural networks. | Reject | This work provides an empirical investigation of adversarial attacks on deep neural networks. 
While it contains some interesting ideas, the work is still at a preliminary stage, lacking substantial support for its main points. Many of the ideas discussed in the paper have been explored in the past, and hence more discussion of previous work is needed. We encourage the authors to keep improving the work for a future submission.
"PdIjyzm_BrM",
"UXW9BG7Z1Fm",
"K_NqORkECk_",
"h-D7-uXkw9",
"LwVcO8aJ8pI",
"4t-CDKwUuO",
"2RpC9IzgbuD",
"U_ScXOYJn7q",
"qMEgs5AEbsp"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the positive response, and I the Author all the best in future work on this topic.",
" I want to thank the reviewer for their time and feedback. I’ll try to address the feedback’s concerns in future works.",
" I want to use this opportunity to thank the reviewer for their time and thorough comment.... | [
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
5,
4,
2,
4
] | [
"K_NqORkECk_",
"qMEgs5AEbsp",
"4t-CDKwUuO",
"U_ScXOYJn7q",
"2RpC9IzgbuD",
"iclr_2022_1iDVz-khM4P",
"iclr_2022_1iDVz-khM4P",
"iclr_2022_1iDVz-khM4P",
"iclr_2022_1iDVz-khM4P"
] |
iclr_2022_xf0B7-7MRo6 | AIR-Net: Adaptive and Implicit Regularization Neural Network for matrix completion | Conventionally, the matrix completion (MC) model aims to recover a matrix from partially observed elements. Accurate recovery necessarily requires a regularization encoding priors of the unknown matrix/signal properly. However, encoding the priors accurately for the complex natural signal is difficult, and even then, the model might not generalize well outside the particular matrix type. This work combines adaptive and implicit low-rank regularization that captures the prior dynamically according to the current recovered matrix. Furthermore, we aim to answer the question: how does adaptive regularization affect implicit regularization? We utilize neural networks to represent Adaptive and Implicit Regularization and named the proposed model \textit{AIR-Net}. Theoretical analyses show that the adaptive part of the AIR-Net enhances implicit regularization. In addition, the adaptive regularizer vanishes at the end, thus can avoid saturation issues. Numerical experiments for various data demonstrate the effectiveness of AIR-Net, especially when the locations of missing elements are not randomly chosen. With complete flexibility to select neural networks for matrix representation, AIR-Net can be extended to solve more general inverse problems. | Reject | There is a consensus that the contribution is not strong enough to effectively
argue for an important novel lead which would justify publication at ICLR.
Authors have also not engaged with the reviewers.
For these reasons, this paper cannot be endorsed for publication at ICLR 2022.
"Rk1emqKK0lT",
"6GCzRaDB_Vg",
"UXhTTSlYCw",
"HwZE6mqbJCY"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies the problem of matrix completion using neural networks and deep matrix factorization as implicit and explicit regularization. The paper proposes a general framework and studies a specific case, namely, when the regularization is a form of Dirichlet Energy on the rows and columns of the matrix, a... | [
5,
3,
5,
5
] | [
3,
4,
3,
4
] | [
"iclr_2022_xf0B7-7MRo6",
"iclr_2022_xf0B7-7MRo6",
"iclr_2022_xf0B7-7MRo6",
"iclr_2022_xf0B7-7MRo6"
] |
iclr_2022_1QxveKM654 | Genome Sequence Reconstruction Using Gated Graph Convolutional Network | A quest to determine the human DNA sequence from telomere to telomere started three decades ago and was finally finished in 2021. This accomplishment was a result of a tremendous effort of numerous experts with an abundance of data, various tools, and often included manual inspection during genome reconstruction. Therefore, such method could hardly be used as a general approach to assembling genomes, especially when the assembly speed is important. Motivated by this achievement and aspiring to make it more accessible, we investigate a previously untaken path of applying geometric deep learning to the central part of the genome assembly---untangling a large assembly graph from which a genomic sequence needs to be reconstructed. A graph convolutional network is trained on a dataset generated from human genomic data to reconstruct the genome by finding a path through the assembly graph. We show that our model can compute scores from the lengths of the overlaps between the sequences and the graph topology which, when traversed with a greedy search algorithm, outperforms the greedy search over the overlap lengths only. Moreover, our method reconstructs the correct path through the graph in the fraction of time required for the state-of-the-art de novo assemblers. This favourable result paves the way for the development of powerful graph machine learning algorithms that can solve the de novo genome assembly problem much quicker and possibly more accurately than human handcrafted techniques. | Reject | The paper demonstrates that one phase of de novo assembly, specifically the layout phase, can be replaced with graph-neural-network based methods. The paper clarifies in the rebuttal that it focuses on building a method for assembling high-quality long reads.
All four reviewers rated the paper as below the acceptance threshold. The reviewers largely agree that the idea of using GNNs to assemble a genome from reads is novel, interesting, and has the potential to be very useful.
The reviewers raise the following concerns: The paper only considers synthetic data, and the synthetic reads used in the simulations are error-free. In practice, reads are not error-free, and thus simulations on real data or at the very least on reads with errors are needed. The authors acknowledge that, and state that they'll provide such experiments in future work. In summary, the reviewers found the experiments to be insufficient to support the claims, even though it is understood by the reviewers and me that the paper only presents a proof-of-concept idea. I agree with the reviewers that simulations on erroneous reads, ideally real data, would be needed for acceptance.
I recommend rejecting the paper, since it provides insufficient experiments to understand the merits of the proposed approach. | recommend to reject | [
"zZKBN7tWXVL",
"4o7tljaxB7",
"RDXojl2eHSr",
"hUpEKjsd8DW",
"Cn2mNZq24e",
"tRzJpH5fKjP",
"3LL_MK8IHyE",
"ZnkbPVBdr_7",
"4FTbZnNTyV",
"vwMMpYsVCFg",
"P-UMYtYSkWM",
"mouXoh3S7v"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you to the authors for their response to my comments. I still believe the authors would need to do more to demonstrate their algorithm before it would make for a compelling publication in ICLR. Therefore, I maintain my score.",
" Thank the authors for the response, and I agree that the proposed idea is pr... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
2,
4,
4
] | [
"hUpEKjsd8DW",
"3LL_MK8IHyE",
"Cn2mNZq24e",
"mouXoh3S7v",
"P-UMYtYSkWM",
"vwMMpYsVCFg",
"4FTbZnNTyV",
"iclr_2022_1QxveKM654",
"iclr_2022_1QxveKM654",
"iclr_2022_1QxveKM654",
"iclr_2022_1QxveKM654",
"iclr_2022_1QxveKM654"
] |
iclr_2022_d2XZsOT-_U_ | Match Prediction Using Learned History Embeddings | Contemporary ranking systems that are based on win/loss history, such as Elo or TrueSkill represent each player using a scalar estimate of ability (plus variance, in the latter case). While easily interpretable, this approach has a number of shortcomings: (i) latent attributes of a player cannot be represented, and (ii) it cannot seamlessly incorporate contextual information (e.g. home-field advantage). In this work, we propose a simple Transformer-based approach for pairwise competitions that recursively operates on game histories, rather than modeling players directly. By characterizing each player entirely by its history, rather than an underlying scalar skill estimate, it is able to make accurate predictions even for new players with limited history. Additionally, it is able to model both transitive and non-transitive relations and can leverage contextual information. When restricted to the same information as Elo and Glicko, our approach significantly outperforms them on predicting the outcome of real-world Chess, Baseball and Ice Hockey games.
| Reject | This paper presents a new method that uses transformers to predict the result of pairwise competitions given each players’ history of past game plays. The reviewers thought this had notable potential benefits for practice. However the reviewers’ also had some significant concerns with the current work in terms of the evaluations used, which were primarily correlation instead of prediction accuracy or calibration etc. There was also some concern about other aspects of the presentation. We hope that the reviewers’ responses are useful to the authors in revising their work for future submissions as this method has the potential to be very useful for many domains. | train | [
"xed4DUSeTdV",
"CS4bi1GbXP_",
"uw0-gQEdzPG",
"9kQLMghqOrD",
"EX-sz7rkvwa",
"iRPKym623QC"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" 1. We apologize for not including the appendix. We only tune one hyper-parameter in the baselines BBT and Glicko, and that is the initial standard deviation of player rating (referred to as init_rd in the sport package [18]). For the EMA baseline the only parameter is the half-life.\n\n| | Glicko rd | BBT rd... | [
-1,
-1,
-1,
3,
3,
6
] | [
-1,
-1,
-1,
4,
4,
4
] | [
"EX-sz7rkvwa",
"iRPKym623QC",
"9kQLMghqOrD",
"iclr_2022_d2XZsOT-_U_",
"iclr_2022_d2XZsOT-_U_",
"iclr_2022_d2XZsOT-_U_"
] |
iclr_2022_MqEcDNQwOSA | Reconstructing Word Embeddings via Scattered $k$-Sub-Embedding | The performance of modern neural language models relies heavily on the diversity of the vocabularies. Unfortunately, the language models tend to cover more vocabularies, the embedding parameters in the language models such as multilingual models used to occupy more than a half of their entire learning parameters. To solve this problem, we aim to devise a novel embedding structure to lighten the network without considerably performance degradation. To reconstruct $N$ embedding vectors, we initialize $k$ bundles of $M (\ll N)$ $k$-sub-embeddings to apply Cartesian product. Furthermore, we assign $k$-sub-embedding using the contextual relationship between tokens from pretrained language models. We adjust our $k$-sub-embedding structure to masked language models to evaluate proposed structure on downstream tasks. Our experimental results show that over 99.9$\%+$ compressed sub-embeddings for the language models performed comparably with the original embedding structure on GLUE and XNLI benchmarks. | Reject | Overall, the paper proposes an interesting idea to share parameters across words and reduce the size of the embedding which hasn't been explored in the past with promising results on XNLI task. However, all the reviewers agree that the novelty of this paper is not enough. In addition, the clarity and experiments are not sufficient enough too. | train | [
"Y41fIHS7jy2",
"sMT03pMIMMp",
"PWo7YiY8CvT",
"YawfF9hgiRu",
"dI9Olq_Hnna",
"gj_KFNyGJSQ",
"nEdX9O-tDHO",
"a0j3eSI94GM",
"ZAUPgd5wbPq"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their thoughtful response to my review.\n\nRegarding the comment: \n>\"We think that the novelty of this work can be found in adjusting in modern language models. Unlike the previous studies we test our embedding structure in BERT based models.\"\n- There is previous work that looks at emb... | [
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
5
] | [
"PWo7YiY8CvT",
"a0j3eSI94GM",
"ZAUPgd5wbPq",
"nEdX9O-tDHO",
"gj_KFNyGJSQ",
"iclr_2022_MqEcDNQwOSA",
"iclr_2022_MqEcDNQwOSA",
"iclr_2022_MqEcDNQwOSA",
"iclr_2022_MqEcDNQwOSA"
] |
iclr_2022_U-GB_gONqbo | Scalable Hierarchical Embeddings of Complex Networks | Graph representation learning has become important in order to understand and predict intrinsic structures in complex networks. A variety of embedding methods has in recent years been developed including the Latent Distance Modeling (LDM) approach. A major challenge is scaling network embedding approaches to very large networks and a drawback of LDM is the computational cost invoked evaluating the full likelihood having O(N^2) complexity, making such analysis of large networks infeasible. We propose a novel multiscale hierarchical estimate of the full likelihood of LDMs providing high details where the likelihood approximation is most important while scaling in complexity at O(NlogN). The approach relies on a clustering procedure approximating the Euclidean norm of every node pair according to the multiscale hierarchical structure imposed. We demonstrate the accuracy of our approximation and for the first time embed very large networks in the order of a million nodes using LDM and contrast the predictive performance to prominent scalable graph embedding approaches. We find that our approach significantly outperforms these existing scalable approaches in the ability to perform link prediction, node clustering, and classification utilizing a surprisingly low embedding dimensionality of two to three dimensions whereas the extracted hierarchical structure facilitates network visualization and interpretation. The developed scalable hierarchical embedding approach enables accurate low dimensional representations of very large networks providing detailed visualizations that can further our understanding of their properties and structure. | Reject | This paper proposes SH-LDM, which approximates the LDM model with a hierarchy of clusters. The authors should discuss the details about clustering and how this algorithm can benefit from sparsity in a more rigorous language.
The authors should review the rich literature on scaling up distance-based methods such as kNN and kernel methods, to which this paper belongs. The title is also misleading; the paper mainly discusses scalable link prediction rather than learning new embeddings.
The reviewers have raised several questions about the experiments. For example, the main results should be a table for comparing the speed rather than the accuracy of the algorithms. Also, the original LDM should be included in the accuracy tables. The settings in the experiments, such as embedding dimensions, are not appropriate for large graphs. | train | [
"BnjygzClVoz",
"VCyQ-EqFkoF",
"jbiydLM6FV2",
"fWRAq3_uhr",
"jg3bEZ2Jq3",
"Zb0-qypXX9",
"5qDfiQAHBCbU",
"LmaENCszjct",
"NRu1LEWGAF5",
"_umMfAim7Hl",
"xHxF30ZoO-Y",
"PhizJqa6ahi",
"bcqEGPeH3CP",
"fkwL5eHOKp7",
"ELtLZYZT8jL",
"lvCBcWaBzQ7",
"HW6NSmLYxcP",
"jo71WiAFWZk"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the response. I am not convinced by some of the responses though, e.g., Q2. The current version needs substantial improvement to meet the ICLR standard. I will keep the current score.",
" We thank MkFe for answering our rebuttal.\n\n### if there is no consistency, between the underlying structure a... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"fkwL5eHOKp7",
"jbiydLM6FV2",
"LmaENCszjct",
"5qDfiQAHBCbU",
"5qDfiQAHBCbU",
"HW6NSmLYxcP",
"bcqEGPeH3CP",
"ELtLZYZT8jL",
"ELtLZYZT8jL",
"lvCBcWaBzQ7",
"lvCBcWaBzQ7",
"HW6NSmLYxcP",
"HW6NSmLYxcP",
"jo71WiAFWZk",
"iclr_2022_U-GB_gONqbo",
"iclr_2022_U-GB_gONqbo",
"iclr_2022_U-GB_gONqbo... |
iclr_2022_1OHZX4YDqhT | FedNAS: Federated Deep Learning via Neural Architecture Search | Federated Learning (FL) is an effective learning framework used when data cannot be centralized due to privacy, communication costs, and regulatory restrictions. While there have been many algorithmic advances in FL, significantly less effort has been made on model development, and most works in FL employ predefined model architectures discovered in the centralized environment. However, these predefined architectures may not be the optimal choice for the FL setting since the user data distribution at FL users is often non-identical and independent distribution (non-IID). This well-known challenge in FL has often been studied at the optimization layer. Instead, we advocate for a different (and complementary) approach. We propose Federated Neural Architecture Search (FedNAS) for automating the model design process in FL. More specifically, FedNAS enables scattered workers to search for a better architecture in a collaborative fashion to achieve higher accuracy. Beyond automating and improving FL model design, FedNAS also provides a new paradigm for personalized FL via customizing not only the model weights but also the neural architecture of each user. As such, we also compare FedNAS with representative personalized FL methods, including perFedAvg (based on meta-learning), Ditto (bi-level optimization), and local fine-tuning. Our experiments on a non-IID dataset show that the architecture searched by FedNAS can outperform the manually predefined architecture as well as existing personalized FL methods. To facilitate further research and real-world deployment, we also build a realistic distributed training system for FedNAS, which will be publicly available and maintained regularly. | Reject | This paper proposes a personalized federated learning framework based on neural architecture search, in which the local clients perform NAS to search for a better architecture for the private local data.
Specifically, the authors extend MiLeNAS, which is an existing NAS algorithm, to be run in the federated learning setting, and use FedAvg for model aggregation. The proposed FedNAS framework is validated against personalized federated learning methods with predefined architectures, such as perFedAvg, Ditto, and local fine-tuning, and is shown to largely outperform them on non-IID settings with label skew and LDA distribution. FedNAS’s collaborative search for the optimal architecture also yields a better performing global model than FedAvg.
The paper received borderline ratings. Three out of four reviewers are leaning negative, while one is leaning positive. Below is a summary of the pros and cons of the paper mentioned by the reviewers:
Pros
- The idea of using NAS for personalized federated learning seems novel and interesting.
- The proposed FedNAS framework is shown to be effective in tackling the data heterogeneity problem, which is a fundamental problem with federated learning.
- The authors have released the code for reproducibility.
Cons
- The technical contribution of the work seems limited, since the proposed FedNAS straightforwardly combines an existing NAS method (MiLeNAS) with federated averaging, and there is no challenge mentioned for this new problem of federated NAS.
- The choice of a specific NAS method (MiLeNAS) is not well justified, and other NAS methods should be also considered.
- The motivation is unclear: It is not clear whether the authors aim to perform collaborative automotive design or solve personalized federated learning.
- There is no convergence analysis.
While some of the concerns have been addressed in the authors’ responses during the rebuttal period, the reviewers did not change their ratings, and the final consensus was to reject the paper.
I agree with the authors that combining federated learning with NAS, and applying it for personalized federated learning, is a novel idea that intuitively makes sense. However, I agree with the reviewers that the current method is a straightforward combination of an existing NAS method and an existing FL algorithm; the authors should identify the new challenges posed by the combination of the two methods and address them.
Further, performing NAS on edge devices may be possible, but not the best solution, since it could result in large computational overhead. While the authors mention that MiLeNAS is computationally suitable in such settings, there should be a proper investigation of the accuracy-efficiency tradeoff, showing how well FedNAS performs against others with the same computational budget (or training time / energy consumption).
Overall, this is a paper that proposes a novel and interesting idea that seems to work, but the paper does not sufficiently examine challenges posed by the new problem. I suggest the authors identify the new challenges and examine the efficiency issue mentioned, and further develop their method, if necessary. | train | [
"0rPJu4pQr_9",
"sZIqSusIBen",
"RASmV9ybQp7",
"REsL0gHcEr",
"AuJilW4FlVs",
"cIBlOho5tcE",
"AGF7AJ6vPXt",
"gnq1oNx95NQ",
"Qu5Mwh18YRX",
"tP5l5Cq8IYU",
"q0NpsArO-p",
"b2d5dfse5ND"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Here are the final accuracy values obtained from our exploration.\n\n\n| Method | Average Validation Accuracy |\n|----------------------|-----------------------------|\n| FedNAS | 90.64% |\n| FedAvg + DARTs model | 87.11% |\n\nHyper-parameter u... | [
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"sZIqSusIBen",
"REsL0gHcEr",
"iclr_2022_1OHZX4YDqhT",
"Qu5Mwh18YRX",
"iclr_2022_1OHZX4YDqhT",
"b2d5dfse5ND",
"q0NpsArO-p",
"tP5l5Cq8IYU",
"RASmV9ybQp7",
"iclr_2022_1OHZX4YDqhT",
"iclr_2022_1OHZX4YDqhT",
"iclr_2022_1OHZX4YDqhT"
] |
iclr_2022_vtDzHJOsmfJ | Non-convex Optimization for Learning a Fair Predictor under Equalized Loss Fairness Constraint | Supervised learning models have been increasingly used in various domains such as lending, college admission, natural language processing, face recognition, etc. These models may inherit pre-existing biases from training datasets and exhibit discrimination against protected social groups. Various fairness notions have been introduced to address fairness issues. In general, finding a fair predictor leads to a constrained optimization problem, and depending on the fairness notion, it may be non-convex. In this work, we focus on Equalized Loss ($\textsf{EL}$), a fairness notion that requires the prediction error/loss to be equalized across different demographic groups. Imposing this constraint to the learning process leads to a non-convex optimization problem even if the loss function is convex. We introduce algorithms that can leverage off-the-shelf convex programming tools and efficiently find the $\textit{global}$ optimum of this non-convex problem. In particular, we first propose the $\mathtt{ELminimizer}$ algorithm, which finds the optimal $\textsf{EL}$ fair predictor by reducing the non-convex optimization problem to a sequence of convex constrained optimizations. We then propose a simple algorithm that is computationally more efficient compared to $\mathtt{ELminimizer}$ and finds a sub-optimal $\textsf{EL}$ fair predictor using $\textit{unconstrained}$ convex programming tools. Experiments on real-world data show the effectiveness of our algorithms. | Reject | The paper considers learning classifiers under a fairness constraint which enforces the loss to be equal on certain subgroups. Reviewers found the work to be well-motivated, but raised concerns on the lack of discussion and comparison to relevant prior work. 
Notable examples in the fairness literature are Donini et al., "Empirical Risk Minimization under Fairness Constraints", Celis et al., "Classification with Fairness Constraints: A Meta-Algorithm with Provable Guarantees", while in the broader constrained optimization literature, Kumar et al. "Implicit Rate-Constrained Optimization of Non-decomposable Objectives". The authors are encouraged to incorporate reviewers' detailed comments for a revised version of this work.
"sBY2g826HMU",
"UDW3bBiXpUB",
"irhBBGuGuS9",
"qKSDeivsRZE"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" This paper studies the problem of fair supervised learning under the Equalized Loss (EL) fairness notion, which is formulated as a non-convex constrained optimization problem. The authors introduce two algorithms that find the global (sub-)optimal solution by solving a sequence of convex (constrained) optimiza... | [
5,
6,
3,
3
] | [
4,
3,
3,
5
] | [
"iclr_2022_vtDzHJOsmfJ",
"iclr_2022_vtDzHJOsmfJ",
"iclr_2022_vtDzHJOsmfJ",
"iclr_2022_vtDzHJOsmfJ"
] |
iclr_2022_K-hiHQXEQog | Autoregressive Latent Video Prediction with High-Fidelity Image Generator | Video prediction is an important yet challenging problem; burdened with the tasks of generating future frames and learning environment dynamics. Recently, autoregressive latent video models have proved to be a powerful video prediction tool, by separating the video prediction into two sub-problems: pre-training an image generator model, followed by learning an autoregressive prediction model in the latent space of the image generator. However, successfully generating high-fidelity and high-resolution videos has yet to be seen. In this work, we investigate how to train an autoregressive latent video prediction model capable of predicting high-fidelity future frames with minimal modification to existing models, and produce high-resolution (256x256) videos. Specifically, we scale up prior models by employing a high-fidelity image generator (VQ-GAN) with a causal transformer model, and introduce additional techniques of top-$k$ sampling and data augmentation to further improve video prediction quality. Despite the simplicity, the proposed method achieves competitive performance to state-of-the-art approaches on standard video prediction benchmarks with fewer parameters, and enables high-resolution video prediction on complex and large-scale datasets. Videos are available at the anonymized website https://sites.google.com/view/harp-anonymous | Reject | The submission proposes to learn a causal transformer model over a pretrained VQ-GAN representation to generate videos. 
While the paper is well written and clear, proposing a simple idea, the novelty of this method is not well explained compared to pre-existing publications (see [reviews JjT4](https://openreview.net/forum?id=K-hiHQXEQog&noteId=22EyPvpodh), [3gJ8](https://openreview.net/forum?id=K-hiHQXEQog&noteId=9ikn_nBC_Sf), [6x6m](https://openreview.net/forum?id=K-hiHQXEQog&noteId=zCX9VP8I5uL), and [pKCo](https://openreview.net/forum?id=K-hiHQXEQog&noteId=uRgHWX4C5yX)), especially since it's lacking [ablation](https://openreview.net/forum?id=K-hiHQXEQog&noteId=uRgHWX4C5yX) or comparative (see [reviews 6x6m](https://openreview.net/forum?id=K-hiHQXEQog&noteId=zCX9VP8I5uL) and [pKCo](https://openreview.net/forum?id=K-hiHQXEQog&noteId=uRgHWX4C5yX)) experiments.
The authors have expressed their consideration of reviewers requests but will not satisfy them in time for this conference. Therefore I am currently recommending this submission for rejection. | train | [
"zCX9VP8I5uL",
"iZtqulRGnFC",
"7ssN3aVcS8i",
"22EyPvpodh",
"9ikn_nBC_Sf",
"uRgHWX4C5yX"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper introduces a new video prediction model, HARP, whose training operates in two stages: pretraining an image autoencoder on the frames with discrete latent states, then learning a transformer predictor on the resulting latent space instead of the pixel space. The authors complement this procedure with dat... | [
3,
-1,
5,
3,
5,
3
] | [
4,
-1,
4,
4,
4,
4
] | [
"iclr_2022_K-hiHQXEQog",
"iclr_2022_K-hiHQXEQog",
"iclr_2022_K-hiHQXEQog",
"iclr_2022_K-hiHQXEQog",
"iclr_2022_K-hiHQXEQog",
"iclr_2022_K-hiHQXEQog"
] |
iclr_2022_EVqFdCB5PfV | Iterative Hierarchical Attention for Answering Complex Questions over Long Documents | We propose a new model, DocHopper, that iteratively attends to different parts of long, hierarchically structured documents to answer complex questions. Similar to multi-hop question-answering (QA) systems, at each step, DocHopper uses a query q to attend to information from a document, combines this “retrieved” information with q to produce the next query. However, in contrast to most previous multi-hop QA systems, DocHopper is able to “retrieve” either short passages or long sections of the document, thus emulating a multi-step process of “navigating” through a long document to answer a question. To enable this novel behavior, DocHopper does not combine document information with q by concatenating text to the text of q, but by combining a compact neural representation of q with a compact neural representation of a hierarchical part of the document -- potentially a large part. We experiment with DocHopper on four different QA tasks that require reading long and complex documents to answer multi-hop questions, and show that DocHopper outperforms all baseline models and achieves state-of-the-art results on all datasets. Additionally, DocHopper is efficient at inference time, being 3 - 10 times faster than the baselines. | Reject | Strength
* The paper is relatively clearly written.
* The proposed method appears to be sound.
Weakness
* The novelty of the work seems to be limited.
* The experiment part needs significant improvements. The comparison with existing methods may not be fair. Evaluation of efficiency should be given. There are also detailed investigations that need to be conducted, as indicated by the reviewers.
* There are technical issues that need to be addressed. | train | [
"UT3L6gN8qZi",
"lE5e9fFwilI",
"wNDx7JBDLvJ",
"ER7Gmedgpe8",
"PnROvx5ure8",
"swVK-BI-7U_",
"7BV8cWdLJuZ",
"5C-Ckh7i7GD",
"SoCwBSVmL5S"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a simple attention-based model for conversational and multi-hop QA tasks. The model use BERT-like pre-trained LM ETC separately encodes questions and paragraph (i.e., a collection of sentences). Besides the encodings on sentence-level, the final context encodings also contain extra paragraph emb... | [
5,
-1,
-1,
-1,
-1,
-1,
5,
5,
3
] | [
3,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"iclr_2022_EVqFdCB5PfV",
"ER7Gmedgpe8",
"SoCwBSVmL5S",
"UT3L6gN8qZi",
"5C-Ckh7i7GD",
"7BV8cWdLJuZ",
"iclr_2022_EVqFdCB5PfV",
"iclr_2022_EVqFdCB5PfV",
"iclr_2022_EVqFdCB5PfV"
] |
iclr_2022_SCSonHu4p0W | Knowledge Based Multilingual Language Model | Knowledge enriched language representation learning has shown promising performance across various knowledge-intensive NLP tasks. However, existing knowledge based language models are all trained with monolingual knowledge graph data, which limits their application to more languages. In this work, we present a novel framework to pretrain knowledge based multilingual language models (KMLMs). We first generate a large amount of code-switched synthetic sentences and reasoning-based multilingual training data using the Wikidata knowledge graphs. Then based on the intra- and inter-sentence structures of the generated data, we design pretraining tasks to facilitate knowledge learning, which allows the language models to not only memorize the factual knowledge but also learn useful logical patterns. Our pretrained KMLMs demonstrate significant performance improvements on a wide range of knowledge-intensive cross-lingual NLP tasks, including named entity recognition, factual knowledge retrieval, relation classification, and a new task designed by us, namely, logic reasoning. Our code and pretrained language models will be made publicly available. | Reject | This paper studies the important problem of adding structured knowledge (in this case from Wikidata) to pretrained language models. The reviewers do not see this paper as ready for ICLR and recommend a number of revisions. Unfortunately the authors did not respond during the author response period. The area chair hence agrees with the reviewers. | train | [
"TtXCQ7Q35m",
"dtJS22Yx8qp",
"uGsbbJdnAVb",
"OQv6KFJgRBl",
"8ABSp4b9sHB",
"uES1rfn2y0q",
"o7nbmFPxQuD",
"Iz0dqH9a7f1",
"dKILBtBtn87"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I have looked at the informative reviews from the other reviewers and I agree with most of the points raised. I maintain my previous recommendation that the paper can be further improved.\nThank you.",
" I have read several issues in the different reviews and I agree with all of the points raised. The paper can... | [
-1,
-1,
-1,
5,
-1,
5,
5,
3,
3
] | [
-1,
-1,
-1,
3,
-1,
4,
3,
5,
4
] | [
"OQv6KFJgRBl",
"Iz0dqH9a7f1",
"o7nbmFPxQuD",
"iclr_2022_SCSonHu4p0W",
"iclr_2022_SCSonHu4p0W",
"iclr_2022_SCSonHu4p0W",
"iclr_2022_SCSonHu4p0W",
"iclr_2022_SCSonHu4p0W",
"iclr_2022_SCSonHu4p0W"
] |
iclr_2022_kamUXjlAZuw | On Learning with Fairness Trade-Offs | Previous literature has shown that bias mitigating algorithms were sometimes prone to overfitting and had poor out-of-sample generalisation. This paper is first and foremost concerned with establishing a mathematical framework to tackle the specific issue of generalisation. Throughout this work, we consider fairness trade-offs and objectives mixing statistical loss over the whole sample and fairness penalties on categories (which could stem from different values of protected attributes), encompassing partial de-biasing. We do so by adopting two different but complementary viewpoints: first, we consider a PAC-type setup and derive probabilistic upper bounds involving sample-only information; second, we leverage an asymptotic framework to derive a closed-form limiting distribution for the difference between the empirical trade-off and the true trade-off. While these results provide guarantees for learning fairness metrics across categories, they also point out to the key (but asymmetric) role played by class imbalance. To summarise, learning fairness without having access to enough category-level samples is hard, and a simple numerical experiment shows that it can lead to spurious results. | Reject | This paper establishes the guarantee for the generalization of fairness-aware learning in binary classification under PAC-learning and a more practical asymptotic framework. The paper is nicely written, and theorems and proofs are well organized. However, novelty of the contribution seems to be insufficient. A future version of the paper may benefit from additional theoretical results or more diverse experiments. | train | [
"8DdzpcmJFjb",
"BOmW4GtHu1j",
"sbJ5qopOntw",
"PMZQ0b_6WrB"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper claims to study out-of-sample generalization w/ both (unconstrained) loss and the fairness consideration. While there are multiple theoretical results presented, the connection between them could be further elaborated, so that the takeaway messages can be clearly conveyed. The paper is hard to parse from... | [
3,
5,
3,
3
] | [
4,
2,
4,
4
] | [
"iclr_2022_kamUXjlAZuw",
"iclr_2022_kamUXjlAZuw",
"iclr_2022_kamUXjlAZuw",
"iclr_2022_kamUXjlAZuw"
] |
iclr_2022_T_uSMSAlgoy | On the Latent Holes 🧀 of VAEs for Text Generation | In this paper, we provide the first focused study on the discontinuities (aka. holes) in the latent space of Variational Auto-Encoders (VAEs), a phenomenon which has been shown to have a detrimental effect on model capacity. When investigating latent holes, existing works are exclusively centred around the encoder network and they merely explore the existence of holes. We tackle these limitations by proposing a highly efficient Tree-based Decoder-Centric (TDC) algorithm for latent hole identification, with a focal point on the text domain. In contrast to past studies, our approach pays attention to the decoder network, as a decoder has a direct impact on the model’s output quality. Furthermore, we provide, for the first time, in-depth empirical analysis of the latent hole phenomenon, investigating several important aspects such as how the holes impact VAE algorithms’ performance on text generation, and how the holes are distributed in the latent space. | Reject | This paper studies discontinuities (i.e., holes) in the latent space of text VAE. Analysis of previous hole detection methods is conducted, and a new efficient hole detection algorithm is proposed. It is an interesting work, but the paper in its current form has a few weaknesses/flaws regarding the proposed algorithm, experiment designs and the resulting conclusions. Reviewers have made various constructive suggestions, which the authors acknowledged. | train | [
"0Nggw0JVYzO",
"MxzukGoIPef",
"IEKYqLRqGwQ",
"ny3_EazThC",
"wUQQfwgNxmd",
"uJvYXXtSfTT",
"hG8cMUHeJkJ"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Regarding __R1__: I don't doubt that PPL is valid for comparing different generation methods on the same dataset. I just don't think numbers for the same method should be compared across datasets, since some corpora have an systematically higher or lower PPL than others. What does that tell us about the model?\n\... | [
-1,
-1,
-1,
-1,
6,
3,
3
] | [
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"IEKYqLRqGwQ",
"hG8cMUHeJkJ",
"uJvYXXtSfTT",
"wUQQfwgNxmd",
"iclr_2022_T_uSMSAlgoy",
"iclr_2022_T_uSMSAlgoy",
"iclr_2022_T_uSMSAlgoy"
] |
iclr_2022_U_Jog0t3fAu | Iterative Sketching and its Application to Federated Learning | Johnson-Lindenstrauss lemma is one of the most valuable tools in machine learning, since it enables the reduction to the dimension of various learning problems. In this paper, we exploit the power of Fast-JL transform or so-called sketching technique and apply it to federated learning settings. Federated learning is an emerging learning scheme which allows multiple clients to train models without data exchange. Though most federated learning frameworks only require clients and the server to send gradient information over the network, they still face the challenges of communication efficiency and data privacy. We show that by iteratively applying independent sketches combined with additive noises, one can achieve the above two goals simultaneously. In our designed framework, each client only passes a sketched gradient to the server, and de-sketches the average-gradient information received from the server to synchronize. Such framework enjoys several benefits: 1). Better privacy, since we only exchange randomly sketched gradients with low-dimensional noises, which is more robust against emerging gradient attacks; 2). Lower communication cost per round, since our framework only communicates low-dimensional sketched gradients, which is particularly valuable in a small-bandwidth channel; 3). No extra overall communication cost. We provably show that the introduced randomness does not increase the overall communication at all.
| Reject | The paper studies federated learning with various sketching techniques used for communication.
The main concerns from the reviewers are:
1) the presentation can be improved;
2) the novelty and related work section is not satisfactory since there have been papers on sketched federated learning;
3) there is no numerical study to validate the efficacy of the method.
I suggest that the authors take the reviewers' feedback into consideration when revising the paper. | train | [
"-YTWjeYM1Q0",
"VOVDjYNSGy6",
"EaG49kquHca",
"St729tl0bFk",
"M7YduW5tUCs",
"YEG8-89qs2w",
"9V5znadxxhw",
"GsLhFVLIGR2",
"0nflvUErdOp",
"T4KO7mQP9x",
"X_I4xuPUxw",
"soidK7Uos4Z"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a sketching approach to reduce the communication cost of federate learning, and prove the convergence of their proposed method for convex function. By adding gaussian noise to the gradient, it can also guarantee different privacy. Let me first summarize the main idea of this paper. The idea is c... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
3
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"iclr_2022_U_Jog0t3fAu",
"GsLhFVLIGR2",
"St729tl0bFk",
"YEG8-89qs2w",
"0nflvUErdOp",
"soidK7Uos4Z",
"-YTWjeYM1Q0",
"X_I4xuPUxw",
"T4KO7mQP9x",
"iclr_2022_U_Jog0t3fAu",
"iclr_2022_U_Jog0t3fAu",
"iclr_2022_U_Jog0t3fAu"
] |
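As an aside on the iterative sketch/de-sketch loop described in the record above: the snippet below is a minimal sketch of the idea, using a dense random-sign matrix as a simple JL-style stand-in for the paper's Fast-JL transform and omitting the additive privacy noise. All names, dimensions, and the round count are illustrative assumptions, not details from the paper.

```python
import math
import random

random.seed(0)

def make_sketch(d, k):
    # Dense random-sign sketch scaled by 1/sqrt(k): a simple JL-style
    # stand-in for the Fast-JL transform the paper uses.
    return [[random.choice((-1.0, 1.0)) / math.sqrt(k) for _ in range(d)]
            for _ in range(k)]

def sketch(S, g):
    # Client side: compress a d-dimensional gradient into k numbers.
    return [sum(row[j] * g[j] for j in range(len(g))) for row in S]

def desketch(S, s):
    # S^T s is an unbiased estimate of g over the randomness of S,
    # so averaging fresh sketches across rounds converges to g.
    d = len(S[0])
    return [sum(S[i][j] * s[i] for i in range(len(S))) for j in range(d)]

d, k, rounds = 32, 16, 800
g = [math.sin(0.2 * j) for j in range(d)]   # a fixed "gradient" to recover
acc = [0.0] * d
for _ in range(rounds):
    S = make_sketch(d, k)                   # fresh sketch every round
    acc = [a + v for a, v in zip(acc, desketch(S, sketch(S, g)))]
avg = [a / rounds for a in acc]
max_err = max(abs(a - b) for a, b in zip(avg, g))
```

Because each round draws an independent sketch, the per-round estimate is unbiased and the running average concentrates around the true gradient, which gives an intuition for how low-dimensional per-round messages need not cost extra communication overall.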
iclr_2022_32OdIHsu1_ | DL-based prediction of optimal actions of human experts | Expert systems have been developed to emulate human experts’ decision-making. Once developed properly, expert systems can assist or substitute human experts, but they require overly expensive knowledge engineering/acquisition. Notably, deep learning (DL) can train highly efficient computer vision systems only from examples instead of relying on carefully selected feature sets by human experts. Thus, we hypothesize that DL can be used to build expert systems that can learn human experts’ decision-making from examples only without relying on overly expensive knowledge engineering. To address this hypothesis, we train DL agents to predict optimal strategies (actions or action sequences) for the popular game `Angry Birds’, which requires complex problem-solving skills. In our experiments, after being trained with screenshots of different levels and pertinent 3-star guides, DL agents can predict strategies for unseen levels. This raises the possibility of building DL-based expert systems that do not require overly expensive knowledge engineering. | Reject | This paper trains a neural network to predict expert strategies (described in natural language) in the game of Angry Birds. While the reviewers agreed this was potentially interesting, there was also a consensus that the scope of the paper was too narrow, that the writing was imprecise, and that the evaluations too few and too qualitative. I agree the paper does not seem thorough enough for ICLR, and recommend rejection. | train | [
"17NwP6Z3boF",
"VsrtpGM01ko",
"aWiRNkkrv5_",
"veuzuBVyEZ7",
"VPPSvg-C7X",
"XYY9b6aXaey",
"CsL4r4dGYe",
"t5DGY9ATe9_"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would like to thank the reviewer's efforts and feedback. \n\n1. Reinforcement learning (RL) has been used to train DL models to play AB (or other games). Although RL is a model of human learning, it remains unclear if RL can fully capture humans’ learning capability. More specifically, we organized a sequence ... | [
-1,
-1,
-1,
-1,
3,
1,
3,
5
] | [
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"t5DGY9ATe9_",
"CsL4r4dGYe",
"XYY9b6aXaey",
"VPPSvg-C7X",
"iclr_2022_32OdIHsu1_",
"iclr_2022_32OdIHsu1_",
"iclr_2022_32OdIHsu1_",
"iclr_2022_32OdIHsu1_"
] |
iclr_2022_FvfV64rovnY | Explaining Scaling Laws of Neural Network Generalization | The test loss of well-trained neural networks often follows precise power-law scaling relations with either the size of the training dataset or the number of parameters in the network. We propose a theory that explains and connects these scaling laws. We identify variance-limited and resolution-limited scaling behavior for both dataset and model size, for a total of four scaling regimes. The variance-limited scaling follows simply from the existence of a well-behaved infinite data or infinite width limit, while the resolution-limited regime can be explained by positing that models are effectively resolving a smooth data manifold. In the large width limit, this can be equivalently obtained from the spectrum of certain kernels, and we present evidence that large width and large dataset resolution-limited scaling exponents are related by a duality. We exhibit all four scaling regimes in the controlled setting of large random feature and pretrained models and test the predictions empirically on a range of standard architectures and datasets. We also observe several empirical relationships between datasets and scaling exponents: super-classing image tasks does not change exponents, while changing input distribution (via changing datasets or adding noise) has a strong effect. We further explore the effect of architecture aspect ratio on scaling exponents. | Reject | This paper investigates the scaling laws of neural networks with respect to the number of training samples $D$ and parameters $P$ for some estimators in two regimes: the variance-limited regime and the resolution-limited regime. The theoretical results are supported by some numerical experiments.
Unfortunately, the paper has several critical issues, in particular, in its novelty and technical correctness.
1. The theoretical analyses lack rigor in many places. The assumptions and problem setups are not precisely introduced. Accordingly, the statement of each theorem is presented in an inaccurate way. Moreover, some theoretical consequences contain technical flaws (e.g., $1/P$ should be replaced by $1/\sqrt{P}$ without an appropriate assumption on the loss function such as strong convexity and smoothness).
2. Many of the presented results are already known in the literature. It is unfortunate that the authors did not cite the relevant existing literature and did not discuss their novelty compared with existing work.
For these reasons, the paper lacks novelty and its quality is not sufficient for acceptance.
I recommend that the authors thoroughly survey the literature on statistical learning theory, from classic nonparametric regression analyses to recent advances in overparameterization. | train | [
"Ede9T3UEVa",
"_3y7oFdxI2S",
"n5dbXmBUV1P",
"iOt0dUUkjan",
"ESbOZd6wMpz",
"WdkXfZ2Dohv",
"o7sRq70OoTP",
"DEiZtqEZXIA",
"YPNVi3BsF5G",
"SazFeBPghb0",
"AWkMACvGMPY",
"rHsGi2rCGzj"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the detailed reply. It is unfortunate that the authors chose not to revise the paper during the rebuttal period. In my opinion the current submission requires a major revision to be accepted. I'm therefore inclined to keep my score. \nDue to the limited time, I will only comment on a few points from t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
4
] | [
"AWkMACvGMPY",
"DEiZtqEZXIA",
"AWkMACvGMPY",
"iclr_2022_FvfV64rovnY",
"AWkMACvGMPY",
"AWkMACvGMPY",
"rHsGi2rCGzj",
"rHsGi2rCGzj",
"SazFeBPghb0",
"iclr_2022_FvfV64rovnY",
"iclr_2022_FvfV64rovnY",
"iclr_2022_FvfV64rovnY"
] |
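Since the record above concerns power-law scaling relations of the test loss, here is a minimal sketch of how such an exponent is typically read off empirically: fit a line in log-log space. The synthetic curve and the exponent value 0.31 are illustrative assumptions, not numbers from the paper.

```python
import math

def fit_power_law(sizes, losses):
    # Least-squares line in log-log space: log L = log c - alpha * log D,
    # so the (negated) slope is the scaling exponent alpha.
    xs = [math.log(v) for v in sizes]
    ys = [math.log(v) for v in losses]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    alpha = -slope
    c = math.exp(my + alpha * mx)
    return alpha, c

# Synthetic resolution-limited curve L(D) = c * D^(-alpha) with alpha = 0.31.
sizes = [10 ** k for k in range(2, 7)]
losses = [2.5 * d ** -0.31 for d in sizes]
alpha, c = fit_power_law(sizes, losses)
```

On real loss curves the fit is only approximate and holds within one scaling regime at a time, which is exactly the regime decomposition the paper argues about.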
iclr_2022_onwTC5W0XJ | Causally Focused Convolutional Networks Through Minimal Human Guidance | Convolutional Neural Networks (CNNs) are the state of the art in image classification mainly due to their ability to automatically extract features from the images and in turn, achieve accuracy higher than any method in history. However, the flip side is, they are correlational models which aggressively learn features that highly correlate with the labels. Such features may not be causally related to the labels as per human cognition. For example, in a subset of images, cows can be on grassland, but classifying an image as cow based on the presence of grassland is incorrect. To marginalize out the effect of all possible contextual features we need to gather a huge training dataset, which is not always possible. Moreover, this prohibits the model from justifying the decision. This issue has some serious implications in certain domains such as medicine, where the amount of data can be limited but the model is expected to justify its decisions. In order to mitigate this issue, our proposal is to focus the CNN on extracting features that are causal from a human perspective. We propose a mechanism to accept guidance from humans in the form of activation masks to modify the learning process of the CNN. The amount of additional guidance can be small and can be easily formed. Through detailed analysis, we show that this method not only improves the learning of causal features but also helps in learning efficiently with less data. We demonstrate the effectiveness of our method against multiple datasets using quantitative as well as qualitative results. | Reject | Although this paper is on an interesting topic, there is a consensus that this paper is below the bar for acceptance. My advice is to take the criticisms of the reviewers seriously, add the extra experiments, rewrite the paper and then submit it to a different conference. 
If the authors feel that the reviewers misunderstood their paper, please remember that the level to which they were able to understand it is also a function of how the paper is written. | train | [
"OUfxUvqt8it",
"3CQD_-GIEK9",
"NOmx9Bu-eXE",
"qyU8aWfvxM",
"e3fDdDzEVll",
"8PJ-S_Ywex0",
"qjD9KBeSAiq",
"AV98zSjDPWZ",
"mVMRVsyP2Je"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks to the authors for their response. However, I think the authors did not address my concerns. \n \nFirst, I do not think my review comments contain any misunderstandings. Based on the definition of the activation masks, they are just the ground truth for semantic segmentation. In addition, the L_{cf} in Equ... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"8PJ-S_Ywex0",
"qyU8aWfvxM",
"mVMRVsyP2Je",
"qjD9KBeSAiq",
"mVMRVsyP2Je",
"AV98zSjDPWZ",
"iclr_2022_onwTC5W0XJ",
"iclr_2022_onwTC5W0XJ",
"iclr_2022_onwTC5W0XJ"
] |
iclr_2022_T73sfhfzk07 | GRODIN: Improved Large-Scale Out-of-Domain detection via Back-propagation | Uncertainty estimation and out-of-domain (OOD) input detection are critical for improving the safety and robustness of machine learning. Unfortunately, most methods for detecting OOD examples have been evaluated on small tasks while typical methods are computationally expensive. In this paper we propose a new gradient-based method called GRODIN for OOD detection. The proposed method is conceptually simple, computationally cheaper than ensemble methods and can be directly applied to any existing and deployed model without re-training. We evaluate GRODIN on models trained on CIFAR-10 and ImageNet datasets, and show its strong performance on various OOD ImageNet datasets such as ImageNet-O, ImageNet-A, ImageNet-R, ImageNet-C. | Reject | The paper proposes a gradient-based method for OOD detection. While the paper has some interesting contributions, all the reviewers felt that the current version falls below the ICLR acceptance threshold. I encourage the authors to revise and resubmit to a different venue.
"DgDlOgmxu7m",
"eLoOG56QtbQ",
"TPekJtCItAi",
"CeIbZH1e_Zp"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes an anomaly detection score based on the derivative of the log-likelihood: $ \\mid\\mid \\nabla_{\\theta} \\log p_{\\theta}(\\hat{c} | \\mathbf{x}) \\mid \\mid_{2}^{2} $ where $\\hat{c}$ is the predicted class (as we want the method to be applicable to test points). The intuition is that if $\\... | [
3,
3,
5,
3
] | [
4,
5,
4,
4
] | [
"iclr_2022_T73sfhfzk07",
"iclr_2022_T73sfhfzk07",
"iclr_2022_T73sfhfzk07",
"iclr_2022_T73sfhfzk07"
] |
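To make the gradient-norm score discussed in the GRODIN record concrete: for a linear softmax model, $\|\nabla_W \log p(\hat{c}\,|\,x)\|_2^2$ has a closed form, which the toy sketch below computes. This is an illustration in the spirit of the reviewed score on an assumed two-class linear model, not the paper's actual implementation, and real deep networks need backpropagation rather than a closed form.

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def grodin_style_score(W, x):
    # ||grad_W log p(c_hat | x)||_2^2 for a linear softmax model.
    # Here the gradient w.r.t. W is (onehot(c_hat) - p) outer x, so the
    # squared norm factorises as ||onehot(c_hat) - p||^2 * ||x||^2.
    logits = [sum(w_i * x_i for w_i, x_i in zip(row, x)) for row in W]
    p = softmax(logits)
    c_hat = max(range(len(p)), key=p.__getitem__)
    resid = [(1.0 if i == c_hat else 0.0) - pi for i, pi in enumerate(p)]
    return sum(r * r for r in resid) * sum(v * v for v in x)

W = [[2.0, 0.0], [0.0, 2.0]]              # toy 2-class linear classifier
in_dist = grodin_style_score(W, [3.0, 0.0])    # confident input -> small score
ood_like = grodin_style_score(W, [1.0, 1.0])   # ambiguous input -> larger score
```

The intuition being illustrated is that low-confidence (potentially out-of-domain) inputs yield larger gradient norms; for genuinely far-away inputs on which the model is overconfident this intuition can fail, which is one of the subtleties such scores must contend with.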
iclr_2022_WXy4C-RjET | Logit Attenuating Weight Normalization | Over-parameterized deep networks trained using gradient-based optimizers are a popular way of solving classification and ranking problems. Without appropriately tuned regularization, such networks have the tendency to make output scores (logits) and network weights large, causing training loss to become too small and the network to lose its adaptivity (ability to move around and escape regions of poor generalization) in the weight space. Adaptive optimizers like Adam, being aggressive at optimizing the train loss, are particularly affected by this. It is well known that, even with weight decay (WD) and normal hyper-parameter tuning, adaptive optimizers lag far behind SGD in terms of generalization performance, mainly in the image classification domain.
An alternative to WD for improving a network's adaptivity is to directly control the magnitude of the weights and hence the logits. We propose a method called Logit Attenuating Weight Normalization (LAWN), that can be stacked onto any gradient-based optimizer. LAWN initially starts off training in a free (unregularized) mode and, after some initial epochs, it constrains the weight norms of layers, thereby controlling the logits and improving adaptivity. This is a new regularization approach that does not use WD anywhere; instead, the number of initial free epochs becomes the new hyper-parameter. The resulting LAWN variant of adaptive optimizers gives a solid lift to generalization performance, making their performance equal or even exceed SGD's performance on benchmark image classification and recommender datasets. Another important feature is that LAWN also greatly improves the adaptive optimizers when used with large batch sizes.
| Reject | The reviewers unanimously recommend rejecting this submission and I concur with this recommendation. The submission essentially introduces a regularization technique to solve the alleged problem of Adam getting worse out-of-sample error for typical image classification problems, e.g. training ResNets on ImageNet. Reviewers raised a variety of issues with the submission. Some found the experiments unconvincing, some were concerned that the submission duplicated closely related work without engaging with and citing that work, and some were concerned by what they viewed as insufficient analysis and comparisons. To me, the most severe issue with the submission is that the experimental evidence for its claims is not sufficiently convincing and the problem it purports to solve has not been convincingly demonstrated, making the work hard to motivate. The other issues raised by the reviewers are less damaging in my view.
Although this is a meta review and not a full de novo review, I would be remiss not to raise a few of the severe issues I see with the results that make it hard for them to be convincing.
The Adam results in table 1 are far weaker than they should be, raising questions about the experiments as a whole. For example, https://arxiv.org/abs/2102.06356 reports 76.4% top 1 accuracy for ResNet-50 on ImageNet with Adam without increasing the epsilon parameter to a larger value as Choi et al. 2019 did (who also report good Adam results for ResNet-50 on ImageNet). This should also lead us to question one premise of the paper that there is some problem with adaptive optimizers for image classification.
Ok, but perhaps LAWN helps validation error even if there is no gap between SGD and Adam? Sadly, to demonstrate this subordinate claim, LAWN would have to be compared carefully with state of the art regularization techniques and compared with results that use any optimizer, not just Adam. With modern regularization techniques, it isn't hard to get 77%+ top 1 validation accuracy on ImageNet with ResNet-50. See, for example https://arxiv.org/abs/2010.01412v1 which gets 77.5% in 100 epochs and as high as 79.1 with longer training. Since LAWN is claiming to improve generalization, it must be compared with other regularization techniques. It is a type error to primarily compare it with optimizers so even if there weren't concerns with the performance of the existing baselines, there would need to be additional comparisons.
The claims about fixing issues that arise at large batch sizes are prima facie problematic since there isn't strong evidence of an actual problem at the batch sizes considered in the submission. | train | [
"JLoSs2xN339",
"b-GqO7liMi8",
"fivs0zTNiBg",
"4lbiQCf3flS",
"LaQxxVTNXbE",
"6-C9A2dWFhr",
"icJAPGambPf",
"1nbtJ16Efy",
"bAQ6MBiQHIh"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The author propose a new regularization method for training the deep neural networks, instead of weight decay.\nThe proposed method is named Logit Attenuating Weight Normalization (LAWN), which constrains the weight norm in \ntraining with the projected gradient. The experimental results show LAWN is more effectiv... | [
3,
-1,
-1,
-1,
-1,
-1,
5,
5,
5
] | [
5,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"iclr_2022_WXy4C-RjET",
"JLoSs2xN339",
"JLoSs2xN339",
"bAQ6MBiQHIh",
"1nbtJ16Efy",
"icJAPGambPf",
"iclr_2022_WXy4C-RjET",
"iclr_2022_WXy4C-RjET",
"iclr_2022_WXy4C-RjET"
] |
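The LAWN idea described in this record (free initial epochs, then constrained weight norms with no weight decay) can be sketched as a projected gradient step. The projection onto a fixed-norm sphere and all values below are illustrative assumptions rather than the paper's exact procedure, which operates per layer with its own schedule.

```python
import math

def norm(w):
    return math.sqrt(sum(wi * wi for wi in w))

def lawn_style_step(w, grad, lr, free_phase, target_norm):
    # One gradient step; after the initial free epochs, project the
    # (per-layer) weights back onto a sphere of fixed norm, which caps
    # the weight magnitude, and hence the logits, without weight decay.
    w = [wi - lr * gi for wi, gi in zip(w, grad)]
    if not free_phase and norm(w) > 0:
        scale = target_norm / norm(w)
        w = [wi * scale for wi in w]
    return w

w = [3.0, 4.0]                                        # ||w|| = 5
w_free = lawn_style_step(w, [0.1, -0.2], 1.0, True, 5.0)   # free phase: norm drifts
w_lawn = lawn_style_step(w, [0.1, -0.2], 1.0, False, 5.0)  # constrained: norm stays 5
```

The number of free epochs before the constraint kicks in is the method's new hyper-parameter, replacing the weight-decay coefficient.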
iclr_2022_kQMXLDF_z20 | Tackling Oversmoothing of GNNs with Contrastive Learning | Graph neural networks (GNNs) integrate the comprehensive relational structure of graph data with the representation learning capability of neural networks, making them among the most popular deep learning methods, achieving state-of-the-art performance in many applications such as natural language processing and computer vision. In real-world scenarios, increasing the depth (i.e., the number of layers) of GNNs is sometimes necessary to capture more latent knowledge of the input data and to mitigate the uncertainty caused by missing values.
However, involving more complex structures and more parameters can decrease the performance of GNN models. One recently proposed explanation is oversmoothing, research on which remains nascent. In general, oversmoothing makes the final node representations indiscriminative, hurting node classification and link prediction performance.
In this paper, we first survey the current de-oversmoothing methods and propose three major metrics to evaluate a de-oversmoothing method, i.e., constant divergence indicator, easy-to-determine divergence indicator, and model-agnostic strategy. Then, we propose the Topology-guided Graph Contrastive Layer, named TGCL, which is the first de-oversmoothing method maintaining the three mentioned metrics. With the contrastive learning manner, we provide the theoretical analysis of the effectiveness of the proposed method. Last but not least, we design extensive experiments to illustrate the empirical performance of TGCL comparing with state-of-the-art baselines. | Reject | This paper introduces a new layer for graph neural networks that aims to reduce the oversmoothing issue common to this model type. The reviewers find the paper well organized and easy to follow, and they recognize the importance of the problem that is addressed. However, they also identify critical errors in the mathematical derivations: the authors did not provide a response to the reviews, and hence these errors remain unaddressed. In addition, multiple reviewers indicate they find the experimental evaluation insufficient. For these reasons I'm recommending rejecting this paper. | train | [
"ZRMoAjk3bQD",
"ffbS4h1rLw",
"u2ED-h5I5r",
"jrLkIjk3b4m"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper summarizes the current techniques tackling over-smoothing, where three matrices are first introduced to describe better and classify the techniques' characteristics. To this end, TGCL, a model-agnostic regularization term based on contrastive learning, is proposed to deal with over-smoothing, satisfying... | [
3,
6,
5,
3
] | [
4,
4,
4,
5
] | [
"iclr_2022_kQMXLDF_z20",
"iclr_2022_kQMXLDF_z20",
"iclr_2022_kQMXLDF_z20",
"iclr_2022_kQMXLDF_z20"
] |
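The contrastive objective described in the TGCL record (pulling positive pairs together, pushing negatives apart) is typically instantiated as an InfoNCE-style loss. The following is a generic sketch with made-up vectors and a made-up temperature, not the paper's actual layer or similarity function.

```python
import math

def info_nce(anchor, positive, negatives, tau=0.5):
    # Contrastive (InfoNCE) loss for one anchor: maximise the softmax
    # probability of the positive pair under cosine similarity / tau.
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        return dot / (math.sqrt(sum(a * a for a in u))
                      * math.sqrt(sum(b * b for b in v)))
    logits = ([cos(anchor, positive) / tau]
              + [cos(anchor, n) / tau for n in negatives])
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[0] - log_z)

# Low loss when the positive is close to the anchor and negatives are far;
# high loss when a negative is closer to the anchor than the positive.
loss_aligned = info_nce([1.0, 0.0], [0.9, 0.1], [[-1.0, 0.2], [0.0, 1.0]])
loss_mixed = info_nce([1.0, 0.0], [0.0, 1.0], [[0.9, 0.1], [-1.0, 0.2]])
```

In the de-oversmoothing setting, the anchor and positive would be representations that the topology says should stay close, while negatives are representations the loss should keep separated, counteracting the collapse of deep-layer node features.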
iclr_2022_qLqeb9AjD2o | Confidence-aware Training of Smoothed Classifiers for Certified Robustness | Any classifier can be "smoothed out" under Gaussian noise to build a new classifier that is provably robust to $\ell_2$-adversarial perturbations, viz., by averaging its predictions over the noise, namely via randomized smoothing. Under the smoothed classifiers, the fundamental trade-off between accuracy and (adversarial) robustness has been well evidenced in the literature: i.e., increasing the robustness of a classifier for an input can be at the expense of decreased accuracy for some other inputs. In this paper, we propose a simple training method leveraging this trade-off for obtaining more robust smoothed classifiers, in particular, through a sample-wise control of robustness over the training samples. We make this control feasible by investigating the correspondence between robustness and prediction confidence of smoothed classifiers: specifically, we propose to use the "accuracy under Gaussian noise" as an easy-to-compute proxy of adversarial robustness for each input. We differentiate the training objective depending on this proxy to filter out samples that are unlikely to benefit from the worst-case (adversarial) objective. Our experiments following the standard benchmarks consistently show that the proposed method, despite its simplicity, exhibits improved certified robustness upon existing state-of-the-art training methods. | Reject | The authors seek to improve upon previous work on randomized smoothing for certifiably robust models. They develop loss functions inspired by the notion of distinguishing hard and easy samples while training the base classifier that is randomly smoothed, and conduct experiments evaluating their proposed losses on benchmark datasets.
While the reviewers agree that the paper contains interesting ideas, the paper in its current form is unacceptable for publication because:
1) Missing large-scale experiments: All prior work on randomized smoothing reports results on ImageNet, and this was seen as one of the main advantages of randomized smoothing. Since the authors do not report such results, it brings into question the robustness and scalability of the improvements obtained.
2) Computational complexity and improvements: The authors' approach has significant computational complexity and the final improvements obtained are marginal. This makes it difficult to justify the use of a more expensive method. | train | [
"YLeOzGbvSBS",
"ueU9heE2XpL",
"dXfRHWvOM7v",
"pSEQ0-UL8R_",
"uquoWg2uPee",
"M1FAdsSxG0",
"Irqx7x6ZY-u",
"_955q8FI3NF",
"7D-qG2Bilkb",
"aAUO9TFova5"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I would like to thank the authors for their responses. Some of my concerns have been clarified, but I still have some reservations: \n\n- For the hyperparameter choosing, it is absolutely fine to choose different hyperparameters for different $\\sigma$, but they should not be directly chosen based on performance ... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
5
] | [
"M1FAdsSxG0",
"uquoWg2uPee",
"aAUO9TFova5",
"Irqx7x6ZY-u",
"7D-qG2Bilkb",
"_955q8FI3NF",
"iclr_2022_qLqeb9AjD2o",
"iclr_2022_qLqeb9AjD2o",
"iclr_2022_qLqeb9AjD2o",
"iclr_2022_qLqeb9AjD2o"
] |
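For readers unfamiliar with the randomized smoothing construction referenced throughout this record, here is a minimal Monte-Carlo sketch on a toy 1-D base classifier. The certified-radius formula follows the standard Cohen et al.-style recipe $\sigma\,\Phi^{-1}(p_A)$ but omits the confidence correction on the vote estimate, and the classifier, noise level, and sample count are all illustrative assumptions.

```python
import random
from statistics import NormalDist

random.seed(0)

def base_classifier(x):
    # Toy 1-D base classifier: class 1 iff x > 0.
    return 1 if x > 0 else 0

def smoothed_predict(x, sigma, n=20000):
    # Monte-Carlo estimate of the smoothed classifier
    # g(x) = argmax_c P(f(x + eps) = c), eps ~ N(0, sigma^2).
    votes = [0, 0]
    for _ in range(n):
        votes[base_classifier(x + random.gauss(0.0, sigma))] += 1
    c = max(range(2), key=votes.__getitem__)
    p_a = votes[c] / n
    # Certified L2 radius sigma * Phi^{-1}(p_a), with a clamp to keep
    # inv_cdf's argument strictly below 1 (no confidence correction here).
    radius = sigma * NormalDist().inv_cdf(min(p_a, 1 - 1e-12))
    return c, radius

c, r = smoothed_predict(1.0, sigma=0.5)
```

For this toy setup with x = 1 and sigma = 0.5, the noisy vote fraction is about Phi(2), so the estimated radius comes out near 1; the paper's "accuracy under Gaussian noise" proxy is essentially the vote fraction p_a computed here on training samples.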
iclr_2022_zyrhwrd9EYs | To Impute or Not To Impute? Missing Data in Treatment Effect Estimation | Missing data is a systemic problem in practical scenarios that causes noise and bias when estimating treatment effects. This makes treatment effect estimation from data with missingness a particularly tricky endeavour. A key reason for this is that standard assumptions on missingness are rendered insufficient due to the presence of an additional variable, treatment, besides the individual and the outcome. Having a treatment variable introduces additional complexity with respect to why some variables are missing that is overlooked by previous work. In our work we identify a new missingness mechanism, which we term mixed confounded missingness (MCM), where some missingness determines treatment selection and other missingness is determined by treatment selection. Given MCM, we show that naively imputing all data leads to poor performing treatment effects models, as the act of imputation effectively removes information necessary to provide unbiased estimates. However, no imputation at all also leads to biased estimates, as missingness determined by treatment divides the population in distinct subpopulations, where estimates across these populations will be biased. Our solution is selective imputation, where we use insights from MCM to inform precisely which variables should be imputed and which should not. We empirically demonstrate how various learners benefit from selective imputation compared to other solutions for missing data. | Reject | In this paper, the authors propose a new type of (missing not at random) model they call the MCM (mixed confounded missingness).
The authors further discuss that given their model, naive imputation strategies do not work, and a model-tailored imputation strategy is needed.
The reviewers did not receive the paper favorably, with main complaints centering around: (a) outlining novelty compared to existing approaches to missing data, (b) whether imputation is a good strategy for dealing with missing data, and (c) whether the paper's results are actually sound.
Here's my perspective on these worries.
The paper aims to deal with missing data in a causal inference context (in other words, the target of inference is a causal effect, and our data happens to have entries missing not at random). Further, the paper aims to work within a graphical modeling formalism for missing data models. Finally, the paper points out that imputation is to be done with care if data is missing not at random (a point both myself, and reviewers agreed with).
Areas of improvement in the paper, in my mind, would be: (i) better literature review and putting authors' work in context of prior work, (ii) being clear about identification, and (iii) discussion of estimation strategies (not just imputation).
Dealing with missing data (in particular right censoring, but also more general types of missingness) in causal inference is a very old problem, with an established literature in statistics and public health. In fact, methods for dealing with both causal inference and missing data together are a part of standard graduate curriculum in epidemiology and biostatistics in many Universities.
(i) Literature review and context. Some papers the authors may find helpful to review:
James M. Robins, Andrea Rotnitzky, Daniel O. Scharfstein. Sensitivity Analysis for Selection bias and unmeasured Confounding in missing Data and Causal inference models. Part of the The IMA Volumes in Mathematics and its Applications book series (IMA, volume 116).
This paper discusses lots of relevant things, but in particular sensitivity analysis methods to violations of MAR in settings the authors worry about.
James M. Robins. Non-response models for the analysis of non-monotone non-ignorable missing data. Statistics in Medicine, 16:21–37, 1997.
This paper is an early example of an MNAR model that may be represented by a directed acyclic graph.
Karthika Mohan, Judea Pearl, and Jin Tian. Graphical models for inference with missing data. In C.J.C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 1277–1285. Curran Associates, Inc., 2013.
Ilya Shpitser, Karthika Mohan, Judea Pearl. Missing data as a causal and probabilistic problem. In Proceedings of the Thirty First Conference on Uncertainty in Artificial Intelligence (UAI-15), pp. 802-811, AUAI Press, 2015.
Rohit Bhattacharya, Razieh Nabi and Ilya Shpitser. “Full Law Identification In Graphical Models Of Missing Data: Completeness Results.” In Proceedings of the Thirty-Seventh International Conference on Machine Learning (ICML-20), pp. 7153-7163, 2020.
These papers deal with general models of missing data using graphs.
Since the authors use graphical models as well, I urge them to put their contribution in context with this prior work.
(ii) Identification. The authors should clearly discuss whether treatment effects are identified under their model, and if so, by what function. If this function is not closed form (which can happen in missing data), this should be discussed as well. This should be contrasted with other missing data work that derives identification under MNAR, particularly using graphs.
(iii) Estimation. The authors chose to use imputation. Imputation is a sampling approach to inference in missing data. Others include maximum likelihood or Bayesian methods (via EM), or semi-parametric inference via influence functions. If the authors chose to concentrate on imputation, specifically, they should explain why (as other methods have noted advantages, e.g. statistical efficiency, quantification of uncertainty, etc.).
Cautioning against naive imputation is a fine thing to do, but everyone working on missing data problems already knows naive imputation does not work for MNAR data. Please do not oversell your contributions. Saying things like: "MCM being the first formalisation of a missingness mechanism when there are treatments at play." is neither true, nor helpful for the peer review process.
With all that said, the MCM model has the potential to be an interesting MNAR model, and placed in proper context of existing work, could be a very interesting addition to the missing data literature. However, the draft needs a bit more work before it is ready for publication. | train | [
"S1TEwy40x-U",
"xSYis9h73Cd",
"7vDWF7g7yz0",
"FkKmOnpbJmb",
"iGYJM2X4hB",
"VFknqmS9uyf",
"VOowNNrJwbU",
"SsRrXLxdXzf",
"9h-ppp3VjUJ",
"vop8d7q0nC4",
"Sw15Q7x-eNY",
"-aEdXoBehI4"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you again for responding.\n\nWe believe we now understand your confusion. Given only the graphical structure in Fig 1(c) and no further knowledge of what the variables actually depict nor what form the arrows take, you are correct in stating that $\\tilde{X}$ alone does not block the backdoor paths between ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
5
] | [
"xSYis9h73Cd",
"7vDWF7g7yz0",
"FkKmOnpbJmb",
"iGYJM2X4hB",
"Sw15Q7x-eNY",
"vop8d7q0nC4",
"-aEdXoBehI4",
"vop8d7q0nC4",
"vop8d7q0nC4",
"iclr_2022_zyrhwrd9EYs",
"iclr_2022_zyrhwrd9EYs",
"iclr_2022_zyrhwrd9EYs"
] |
iclr_2022_T-uEidE-Xpv | Contrastive Mutual Information Maximization for Binary Neural Networks | Neural network binarization accelerates deep models by quantizing their weights and activations into 1-bit. However, there is still a huge performance gap between Binary Neural Networks (BNNs) and their full-precision counterparts. As the quantization error caused by weights binarization has been reduced in earlier works, the activations binarization becomes the major obstacle for further improvement of the accuracy. In spite of studies about the full-precision networks highlighting the distributions of activations, few works study the distribution of the binary activations in BNNs. In this paper, we introduce mutual information as the metric to measure the information shared by the binary and the latent full-precision activations. Then we maximize the mutual information by establishing a contrastive learning framework while training BNNs. Specifically, the representation ability of the BNNs is greatly strengthened via pulling the positive pairs with binary and full-precision activations from the same input samples, as well as pushing negative pairs from different samples (the number of negative pairs can be exponentially large). This benefits the downstream tasks, not only classification but also segmentation and depth estimation, etc. The experimental results show that our method can be implemented as a pile-up module on existing state-of-the-art binarization methods and can remarkably improve the performance over them on CIFAR-10/100 and ImageNet, in addition to the good generalization ability on NYUD-v2. | Reject | ## Description
The paper applies ideas from contrastive representation learning to train binary neural networks. Namely, the algorithm promotes binary representations to be similar to the full-precision representations while at the same time it promotes binary representations to be dissimilar from full-precision representations corresponding to other input images. This is enforced for activations in all layers by the added contrastive loss (9).
## Decision
The main weaknesses of the paper pointed out by reviewers were 1) the overlap of a large part of the derivation with the prior work [25] Tian et al. "Contrastive representation distillation", ICLR 2020; and 2) the meaning of the derivation when applied in the setting of the paper to binary and full-precision weights, and its soundness. The authors proposed their arguments for 1). The reviewer board considered these arguments and did not agree (see below). Point 2) was not addressed by the authors (no paper revision, justifications, or proofs corresponding to the missing supplementary). It was discussed further and was found critical (see below), such that it is a clear reason for rejection regardless of 1). Overall, the idea is interesting and the method appears to be helpful experimentally; however, the paper needs a major revision that would address the two points.
## Details
### Overlap with CRD
Reviewers were in consensus on this issue, disagreeing with the authors. Since the whole derivation chain of the contrastive loss already exists in the CRD work [25], repeating this derivation is redundant, if not an ethical concern. Instead, an original work should review or simply refer to the existing derivation and only discuss the new context and, e.g., the changed critic function $\hat h$.
### Meaning of the derivation
The reviewers have questioned the soundness of the initial criterion of MI between binary and full-precision activations, as it reduces to just the entropy of binary activations. In particular, it seems very different in meaning from the contrastive loss the paper optimizes in the end. Here is additional feedback from the discussion.
1. Maximizing the entropy of binary activations with respect to the data distribution makes some sense. If a single binary activation is considered, its entropy is maximized when it is in state 1 for exactly 50% of the data, which makes it discriminative of the input. A similar centering can be achieved by Batch Normalization placed in front of the activation -- if the preactivation distribution is symmetric, then BN achieves the maximum entropy for the sign of the preactivation. Such a network design is not uncommon.
Maximizing the entropy of the full vector of binary activations appears more difficult. However, we can also understand it as the mutual information between the input image and the layer of binary activations. Thus the criterion is to retain as much information about the input as possible. This makes sense as a regularization (often neural networks are regularized by adding data reconstruction capabilities / loss), and is well aligned with goals such as re-using the features for other tasks (as in Sec. 3.5), but contradicts some other principles proposed in the literature, e.g. the information bottleneck (that the maximum information about the target rather than the input should be preserved).
Amongst methods that study the direction of maximizing the entropy in binary networks, reviewers mention IR-Net and "Regularizing Activation Distribution for Training Binarized Deep Networks". The architecture with BN before the activation is used in the latter work and some more recent works, e.g. BoolNet.
2. It is not clear whether optimizing the contrastive loss retains the same meaning as maximizing MI. The derivation from the CRD paper used here applies several lower-bounding steps. Maybe the strongest one is that the critic is chosen to be a specific function rather than a universal approximator. However, there is no obvious gap. In fact, knowing that binary activations are just a sign mapping of the full-precision ones should allow one to estimate $p(i=j| a^i_B, a^j_F)$ in a simple way.
3. In the estimator $h$ in (8) the authors make a mistake (applying their and CRD theory incorrectly):
$h$ should be the probability of a conditional Bernoulli variable estimating $p(i=j| a^i_B, a^j_F)$. It should not depend on $a^j_F$ for values of $j$ other than the given one. However, in the denominator in (8) it does. Therefore this estimator, and as a result the specific NCE loss proposed, appear unjustified. If the critic from CRD eq. (19) is adopted, it is not clear whether it makes sense for a pair of binary and full-precision descriptors (note that for $i=j$ the scalar product between the two is just $\|a_F\|_1$). It seems that the design of a meaningful critic is a serious gap the authors should address. Observing that the initial objective, the MI criterion, was in fact independent of the full-precision states (as it is the entropy of binary states), one can propose that an appropriate critic should use binary states only, such as
$$
h(a_B^i,a_B^j) = \sigma(\left<a_B^i, a_B^j\right> + c ).
$$
When fixing $\hat h$, the result in (10), that the maximum likelihood estimator for $p(i=j | a_B^i, a_F^j)$ with a generic neural network can approximate this distribution arbitrarily well, becomes irrelevant.
When the paper speaks of randomness, e.g. treating binary and full-precision activations as random variables, or considering "i=j" as a random variable, it needs to specify the source of randomness or the distribution, i.e. to add "for a network input drawn from the data distribution" in the first case and "under i and j picked uniformly at random in the batch" in the second.
Theoretically, the paper would become more convincing if the entropy of binary activations were measured by independent tools from the literature after training with and without the NCE loss, and it were shown that the method indeed achieves an improvement in this objective, reconfirming that the principle and the derivation are sound. An ablation study on other modifications such as weight decay may help convince researchers that the main source of improvements in the experiments is the new contrastive loss. Note that not all reviewers were convinced by the current experimental results, due to the lack of descriptions/code to fully reproduce them and/or the lack of such ablation studies. | val | [
"vZEmfJz_gFC",
"aLGH4wEx9Ww",
"wR3QjCtgmUg",
"z74JVxEpwKp",
"AwEp-XTZdqD",
"HN8ppJOVCg1",
"ohtf22GHkHh",
"wQpZm11Vr5Q",
"Cfw06q9dS4G",
"QIARTDa9qnI",
"1FvoHQXnYXA",
"wIN0VDCdvBr",
"npHAh1VTowh",
"C5Keoz4umBU"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the reviewers for the detailed comment. However, I still think this paper is better viewed as an interesting application of the CRD to the area of network quantization, rather than as introducing a fresh new algorithm. Besides, Eq.(4-9) are not common to all contrastive learning methods, and should be cre... | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
6
] | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"Cfw06q9dS4G",
"iclr_2022_T-uEidE-Xpv",
"HN8ppJOVCg1",
"1FvoHQXnYXA",
"npHAh1VTowh",
"QIARTDa9qnI",
"aLGH4wEx9Ww",
"C5Keoz4umBU",
"wIN0VDCdvBr",
"iclr_2022_T-uEidE-Xpv",
"npHAh1VTowh",
"iclr_2022_T-uEidE-Xpv",
"iclr_2022_T-uEidE-Xpv",
"iclr_2022_T-uEidE-Xpv"
] |
iclr_2022_eYyvftCgtD | GroupBERT: Enhanced Transformer Architecture with Efficient Grouped Structures | Attention based language models have become a critical component in state-of-the-art natural language processing systems. However, these models have significant computational requirements, due to long training times, dense operations and large parameter count. In this work we demonstrate a set of modifications to the structure of a Transformer layer, producing a more efficient architecture. First, we rely on grouped transformations to reduce the computational cost of dense feed-forward layers, while preserving the expressivity of the model . Secondly, we add a grouped convolution module to complement the self-attention module, decoupling the learning of local and global interactions. We apply the resulting architecture to language representation learning and demonstrate its superior performance compared to BERT models of different scales. We further highlight its improved efficiency, both in terms of floating-point operations (FLOPs) and time-to-train. | Reject | The authors propose modifications to the Transformer architecture in BERT by using grouped FFN and an additional convolution module.
The paper doesn't have all the results and comparisons that should be done for a model whose architecture modifications have been seen in previous papers. While it is not necessary to show improvements on multiple hardware systems, it is important to see comparisons to more, stronger baselines and metrics on the full downstream GLUE eval, rather than just SQuAD, to establish improvements.
A reject. | val | [
"S4lyJJzIiFH",
"NmJH2TJmPoR",
"S8WYST1Z8Bi",
"-n_wZvlgMWD",
"In1s7e_UqGZ",
"6O6x-gznyuJ",
"FOcAQFiJmxY",
"88ddljZoCo",
"fI5QeuY_Zz",
"lGRZvXy5vc",
"gCxelUm7m-w",
"LWJtErM4u-x",
"stPn5TpQR4u"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for providing more experimental results. The new results on SQuADv2 help to verify the effectiveness of the proposed model. However, I am still not fully convinced that the proposed architecture is superior to the state-of-the-art efficient transformer models, especially since these models also utilize ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
5
] | [
"NmJH2TJmPoR",
"fI5QeuY_Zz",
"88ddljZoCo",
"FOcAQFiJmxY",
"6O6x-gznyuJ",
"stPn5TpQR4u",
"gCxelUm7m-w",
"LWJtErM4u-x",
"lGRZvXy5vc",
"iclr_2022_eYyvftCgtD",
"iclr_2022_eYyvftCgtD",
"iclr_2022_eYyvftCgtD",
"iclr_2022_eYyvftCgtD"
] |
iclr_2022_Uxppuphg5ZL | Constraint-based graph network simulator | In the rapidly advancing area of learned physical simulators, nearly all methods train a forward model that directly predicts future states from input states. However, many traditional simulation engines use a constraint-based approach instead of direct prediction. Here we present a framework for constraint-based learned simulation, where a scalar constraint function is implemented as a trainable function approximator, and future predictions are computed as the solutions to a constraint satisfaction problem. We implement our method using a graph neural network as the constraint function and gradient descent as the constraint solver. The architecture can be trained by standard backpropagation. We test the model on a variety of challenging physical domains, including simulated ropes, bouncing balls, colliding irregular shapes and splashing fluids. Our model achieves better or comparable performance to top learned simulators. A key advantage of our model is the ability to generalize to more solver iterations at test time to improve the simulation accuracy. We also show how hand-designed constraints can be added at test time to satisfy objectives which were not present in the training data, which is not possible with forward approaches. Our constraint-based framework is applicable to any setting in which forward learned simulators are used, and more generally demonstrates key ways that learned models can leverage popular methods in numerical methods. | Reject | The paper uses graph-based neural networks to ensure constraint-based simulation. Even though the approach is a good one, it is only incremental w.r.t. the work published by Yang et al at NeurIPS in 2020; then, the experimental section is not convincing enough.
While the authors indicate their dissatisfaction with one of the reviewers' assessment, the overall reviews of the paper are not very positive. | train | [
"d9lyggIg0Eq",
"mB78WNl5yMA",
"kW6x3SxXfp",
"t5KZGq7Zt_c",
"jGPy6eu8LTD",
"1rtWhoXkya2",
"p_DTlAn7RVg",
"XzUmoZVs9Am",
"hHD4Lz4MfIY",
"CpnmZu4IFkZ",
"eLMMzzpmHM",
"sheSN-BWiac",
"GWXiEA9qTkV",
"oq7ZQarzv_K",
"yPeQo-XN1el",
"rCsFem4DH_8",
"dep6fpZmZpq",
"EBvtlgJJLKC",
"7Q4n25nvkvM... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"officia... | [
"This paper aims to add explicit/human-defined constraints to learning-based simulation frameworks, where a learned constraint function implicitly regularizes the dynamics, and future predictions are generated via a constraint solver. The authors built the framework on top of graph neural networks (GNNs) to capture... | [
6,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5
] | [
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2022_Uxppuphg5ZL",
"iclr_2022_Uxppuphg5ZL",
"oq7ZQarzv_K",
"iclr_2022_Uxppuphg5ZL",
"cRGVUfas0Jd",
"mB78WNl5yMA",
"mB78WNl5yMA",
"cRGVUfas0Jd",
"cRGVUfas0Jd",
"cRGVUfas0Jd",
"cRGVUfas0Jd",
"cRGVUfas0Jd",
"mB78WNl5yMA",
"mB78WNl5yMA",
"CmsQ9GVZuxV",
"CmsQ9GVZuxV",
"d9lyggIg0Eq",... |
iclr_2022_UORhn0DGIT | Heterogeneous Wasserstein Discrepancy for Incomparable Distributions | Optimal Transport (OT) metrics allow for defining discrepancies between two probability measures. Wasserstein distance has long been the celebrated OT-distance frequently used in the literature, which seeks probability distributions to be supported on the $\text{\it same}$ metric space. Because of its high computational complexity, several approximate Wasserstein distances have been proposed based on entropy regularization or on slicing, and one-dimensional Wasserstein computation. In this paper, we propose a novel extension of Wasserstein distance to compare two incomparable distributions, that hinges on the idea of $\text{\it distributional slicing}$, embeddings, and on computing the closed-form Wasserstein distance between the sliced distributions. We provide a theoretical analysis of this new divergence, called $\text{\it heterogeneous Wasserstein discrepancy (HWD)}$, and we show that it preserves several interesting properties including rotation-invariance. We show that the embeddings involved in HWD can be efficiently learned. Finally, we provide a large set of experiments illustrating the behavior of HWD as a divergence in the context of generative modeling and in a query framework. | Reject | The aim of this paper is to propose a novel "GW"-like discrepancy function between probability measures living in different spaces (here restricted to be Euclidean, with a squared Euclidean distance as the base metric). While interesting (notably the idea of learning distinct maps mapping a random direction in a latent space onto two spaces), there are a few issues with presentation, the incremental nature of the work and, importantly, a few shortcomings in the empirical evaluation as detailed by reviewers. Hopefully these can be used to improve the draft for a future version. | train | [
"6_T4qjs1FUi",
"UL9EM2V7qL6",
"M9RR3KmXI8I"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper introduces the so-called Heterogeneous Wasserstein Discrepancy (HWD) between two probability measures supported on Euclidean spaces of different dimensions, a case scenario appearing in several applications. The HWD is built on ideas from the sliced Wasserstein distance and measure embedding : the two d... | [
6,
3,
3
] | [
3,
3,
5
] | [
"iclr_2022_UORhn0DGIT",
"iclr_2022_UORhn0DGIT",
"iclr_2022_UORhn0DGIT"
] |
iclr_2022_MRGFutr0p5e | Graph Barlow Twins: A self-supervised representation learning framework for graphs | The self-supervised learning (SSL) paradigm is an essential exploration area, which tries to eliminate the need for expensive data labeling. Despite the great success of SSL methods in computer vision and natural language processing, most of them employ contrastive learning objectives that require negative samples, which are hard to define. This becomes even more challenging in the case of graphs and is a bottleneck for achieving robust representations. To overcome such limitations, we propose a framework for self-supervised graph representation learning - Graph Barlow Twins, which utilizes a cross-correlation-based loss function instead of negative samples. Moreover, it does not rely on non-symmetric neural network architectures - in contrast to state-of-the-art self-supervised graph representation learning method BGRL. We show that our method achieves as competitive results as the best self-supervised methods and fully supervised ones while requiring fewer hyperparameters and substantially shorter computation time (ca. 30 times faster than BGRL). | Reject | The paper proposes to apply the recently introduced "Barlow-twins" contrastive learning objective to the case of graph networks. The main concern raised by reviewers was the limited novelty of this work, which they argued mostly combines existing lines of work, and does not introduce sufficiently new concepts. This was also discussed between the authors and the reviewers.
Having read the paper and the reviews, I tend to agree with the reviewers that this paper is more of a combination of existing works, and their relatively straightforward application to the graph network domain. Thus, although the empirical results are encouraging, I agree the paper has limited novelty, and falls below the ICLR acceptance bar. | train | [
"e4lnx4rjl79",
"QYz1gE7h2F7",
"8x1yhH7M2oh",
"9ztlY1HXweS",
"BFoHbTww-L",
"emqVpNAEJC9",
"cR3Xg4dkEVl",
"aMkmg59WBT-",
"kReNgHYO49b"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer,\n\nthank you for your review. We would like to address your concern with the experimental evaluation of the OGB Products dataset. Indeed, we limited the number of epochs to 100 for both BGRL and G-BT. However, we noticed that our model keeps improving with more epochs, which is not the case for BGR... | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5
] | [
"aMkmg59WBT-",
"kReNgHYO49b",
"kReNgHYO49b",
"kReNgHYO49b",
"cR3Xg4dkEVl",
"cR3Xg4dkEVl",
"iclr_2022_MRGFutr0p5e",
"iclr_2022_MRGFutr0p5e",
"iclr_2022_MRGFutr0p5e"
] |
iclr_2022_Y3cm4HJ3Ncs | Learning-to-Count by Learning-to-Rank: Weakly Supervised Object Counting & Localization Using Only Pairwise Image Rankings | Object counting and localization in dense scenes is a challenging class of image analysis problems that typically requires labour intensive annotations to learn to solve. We propose a form of weak supervision that only requires object-based pairwise image rankings. These annotations can be collected rapidly with a single click per image pair and supply a weak signal for object quantity. However, the problem of actually extracting object counts and locations from rankings is challenging. Thus, we introduce adversarial density map generation, a strategy for regularizing the features of a ranking network such that the features correspond to an object proposal map where each proposal must be a Gaussian blob that integrates to 1. This places a soft integer and soft localization constraint on the representation, which encourages the network to satisfy the provided ranking constraints by detecting objects. We then demonstrate the effectiveness of our method for exploiting pairwise image rankings as a weakly supervised signal for object counting and localization on several datasets, and show results with a performance that approaches that of fully supervised methods on many counting benchmark datasets while relying on data that can be collected with a fraction of the annotation burden. | Reject | Even though reviewers found some responses by the authors satisfactory, several concerns regarding the paper still remain. The authors are strongly encouraged to:
1) Explore how dataset size impacts accuracy.
2) Reason about annotation costs via empirical experiments.
3) Include benchmark datasets in experimental evaluations. | test | [
"s3OkcFcWcZI",
"G--nV4nHjM3",
"MHRFYf5T1CX",
"EI5PXDCCvkL",
"64IaAXSz0v2",
"1ZUUFVbVEzg"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"In this work, authors propose to learn to count objects in weakly supervised setting where only pairwise ranking information is used as supervision. The annotation cost is expected to be much lower than the widely-used dot annotation. Besides, an adversarial density map regularization method is proposed to enforce... | [
3,
3,
6,
-1,
-1,
-1
] | [
5,
4,
4,
-1,
-1,
-1
] | [
"iclr_2022_Y3cm4HJ3Ncs",
"iclr_2022_Y3cm4HJ3Ncs",
"iclr_2022_Y3cm4HJ3Ncs",
"s3OkcFcWcZI",
"G--nV4nHjM3",
"MHRFYf5T1CX"
] |
iclr_2022_TNxKD3z_tPZ | Persistent Homology Captures the Generalization of Neural Networks Without A Validation Set | The training of neural networks is usually monitored with a validation (holdout) set to estimate the generalization of the model. This is done instead of measuring intrinsic properties of the model to determine whether it is learning appropriately. In this work, we suggest studying the training of neural networks with Algebraic Topology, specifically Persistent Homology (PH). Using simplicial complex representations of neural networks, we study the PH diagram distance evolution on the neural network learning process with different architectures and several datasets. Results show that the PH diagram distance between consecutive neural network states correlates with the validation accuracy, implying that the generalization error of a neural network could be intrinsically estimated without any holdout set. | Reject | This paper got uniformly strongly negative reviews. The issue of estimating or bounding generalization accuracy from performance on the training set has a huge history and literature. After considerable discussion the reviewers uniformly find this paper lacking in making a contribution to that literature. | train | [
"53ChJiluxyG",
"K5g5YMjIozF",
"gbqqQQ6g7yT",
"UtVKPEECIk",
"Pg-fzI8woRA",
"90Vonx6dsTz",
"c7KoUkXE9y2",
"txSFQz7N6SU",
"AZgj2ETLFXK",
"X45BKa6mY6q",
"0t9J4IYaScm",
"0EO9Qzj3qxy",
"to7wvwUil5B",
"9nWwlDW64Cl",
"rc_OTUWNFzC",
"p89isGaAb0w",
"Zd0cOaLF69D",
"vYYJuyh2toX",
"OfLVudLoHG... | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_re... | [
"This paper analyses the training of neural networks from a topological\nperspective, presenting a pipeline that can measure (pseudo) distances\nbetween the network's weights during training. Such information is then\nemployed to study the generalisation error of a neural network.\n\nIn contrast to existing methods... | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
1,
3
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"iclr_2022_TNxKD3z_tPZ",
"gbqqQQ6g7yT",
"c7KoUkXE9y2",
"53ChJiluxyG",
"R18ptY2HKOa",
"R18ptY2HKOa",
"iclr_2022_TNxKD3z_tPZ",
"X45BKa6mY6q",
"X45BKa6mY6q",
"53ChJiluxyG",
"0EO9Qzj3qxy",
"53ChJiluxyG",
"R18ptY2HKOa",
"R18ptY2HKOa",
"R18ptY2HKOa",
"Zd0cOaLF69D",
"vYYJuyh2toX",
"OfLVud... |
iclr_2022_rN9tjzY9UD | Adaptive Learning of Tensor Network Structures | Tensor Networks (TN) offer a powerful framework to efficiently represent very high-dimensional objects. TN have recently shown their potential for machine learning applications and offer a unifying view of common tensor decomposition models such as Tucker, tensor train (TT) and tensor ring (TR). However, identifying the best tensor network structure from data for a given task is challenging. In this work, we leverage the TN formalism to develop a generic and efficient adaptive algorithm to jointly learn the structure and the parameters of a TN from data. Our method is based on a simple greedy approach starting from a rank one tensor and successively identifying the most promising tensor network edges for small rank increments. Our algorithm can adaptively identify TN structures with small number of parameters that effectively optimize any differentiable objective function. Experiments on tensor decomposition, tensor completion and model compression tasks demonstrate the effectiveness of the proposed algorithm. In particular, our method outperforms the state-of-the-art evolutionary topology search [Li and Sun, 2020] for tensor decomposition of images (while being orders of magnitude faster) and finds efficient tensor network structures to compress neural networks outperforming popular TT based approaches [Novikov et al., 2015]. | Reject | The paper considers the important problem of tensor network optimization. Unfortunately, the authors did not respond to the reviewers' comments. Hence, several concerns remain about the proposed greedy algorithm, including its relationship with prior work and the issue of the ALS method being stuck in local minima for important classes of problems. We strongly encourage the authors to carefully examine the reviewers' points and revise their work accordingly. | train | [
"eJuU3iaOWy",
"6UxnOv7lVV2",
"YRUWoMZW6BN",
"YKVPi3scZZP"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a greedy algorithm to solve the tensor network optimization problem in a heuristic manner. The major contribution of the method is the greedy algorithm to efficiently derive the tensor network structure. This paper is interesting in that it presents a greedy algorithm for learning tensor networ... | [
5,
3,
3,
5
] | [
4,
5,
4,
3
] | [
"iclr_2022_rN9tjzY9UD",
"iclr_2022_rN9tjzY9UD",
"iclr_2022_rN9tjzY9UD",
"iclr_2022_rN9tjzY9UD"
] |
iclr_2022_bVkRc9NDHcK | Variable Length Variable Quality Audio Steganography | Steganography is the task of hiding and recovering secret data inside a non-secret container data while making imperceptible changes to the container. When using steganography to hide audio inside an image, current approaches neither allow the encoding of a signal with variable length nor allow making a trade-off between secret data reconstruction quality and imperceptibility in the changes made to the container image. To address this problem, we propose VLVQ (Variable Length Variable Quality Audio Steganography), a deep learning based steganographic framework capable of hiding variable-length audio inside an image by training the network to iteratively encode and decode the audio data from the container image. Complementary to the standard reconstruction loss, we propose an optional conditional loss term that allows the users to make quality trade-offs between audio and image reconstruction on inference time, without needing to train a separate model for each trade-off setups. Our experiments on ImageNet and AudioSet demonstrate VLVQ’s ability to retain reasonable image quality (28.99 $psnr$) and audio reconstruction quality (23.79 $snrseg$) while encoding 19 seconds of audio. We also show VLVQ’s capability to generalize to signals longer than what is seen during training. | Reject | This paper presents a steganographic approach called Variable Length Variable Quality Audio Steganography (VLVQ) that encodes variable length audio data inside images with varying quality trade-offs. However, according to the reviewers, the proposal made in this paper is not novel enough, there are many details missing in the paper, and the experimental study is far from comprehensive and conclusive. After the reviewers provided their comments, the authors did not submit their rebuttals. Therefore, we do not think the paper is ready for publication at ICLR. | test | [
"TlCfCj5saJR",
"JR-eTGh4di",
"2z-PyW2xSIy",
"ckp0fjYhOIM",
"TBYJHwgQen1"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for all the different viewpoints mentioned in the review, however, after going through all the comments/review feedback I have noticed that all the reviewers recommended negative against the paper. So, my final decision remains as \"marginally below acceptance threshold\".",
"This paper presents a ste... | [
-1,
5,
5,
5,
3
] | [
-1,
4,
3,
2,
5
] | [
"ckp0fjYhOIM",
"iclr_2022_bVkRc9NDHcK",
"iclr_2022_bVkRc9NDHcK",
"iclr_2022_bVkRc9NDHcK",
"iclr_2022_bVkRc9NDHcK"
] |
iclr_2022_Lwclw6u3Pcw | Characterizing and Measuring the Similarity of Neural Networks with Persistent Homology | Characterizing the structural properties of neural networks is crucial yet poorly understood, and there are no well-established similarity measures between networks. In this work, we observe that neural networks can be represented as abstract simplicial complex and analyzed using their topological 'fingerprints' via Persistent Homology (PH). We then describe a PH-based representation proposed for characterizing and measuring similarity of neural networks. We empirically show the effectiveness of this representation as a descriptor of different architectures in several datasets. This approach based on Topological Data Analysis is a step towards better understanding neural networks and a useful similarity measure. | Reject | *Summary:* Compare neural networks and tasks using TDA, particularly persistence diagrams.
*Strengths:*
- Some reviewers found this a fresh perspective.
- Distance calculation using TDA can offer advantages and a theoretical basis.
*Weaknesses:*
- Insufficient motivation and experimental evidence for utility of the proposed approach.
- Computational cost and hyperparameter choices in PD computation.
- Difficulty of interpreting proposed distance matrices.
*Discussion:*
ZGgm found the paper interesting and that it offered a fresh perspective, but that the purpose of the comparison was not sufficiently well motivated. The authors provide some explanations, particularly about the method allowing to compare networks of different sizes, but ZGgm found their comments were not adequately addressed. rtBj found that even though the authors made efforts to address their comments, the paper still requires substantial improvements. HwgX appreciated the authors’ responses but considers that the paper needs to be improved with additional validation. They expressed doubts about the adequacy of the approach and found that although it improves upon certain methods, it is insufficiently verified.
*Conclusion:*
All reviewers agree that this work has some strengths but also significant weaknesses and does not reach the acceptance bar for this conference. Main weaknesses are insufficient motivation and experimental evidence. The reviewers made several suggestions on how the paper could be improved. I agree with the reviewers and hence I must reject this article. | test | [
"HuYunqldYh5",
"fSne-OwL9U",
"9MdbKRFTGfX",
"f48thpiA9bK",
"TdNzaOdCREF",
"7Bx780mFfWr",
"8-OKqsNZg7K",
"J7vyEF79dw-",
"53jAyptnmus",
"WMG5E5u8Z0",
"kNaxx2r8H1s",
"atrGDywDQ_H",
"_51mCDefFM9",
"jFqwXgfKqBN",
"V5Yk0potirM",
"6jkzn9v-7Bd",
"xRStNhchQ_0"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer"
] | [
" Dear ZGgm reviewer.\n\nWe tried to address your comments. Could you please copy the comments that have not been addressed? alternatively you could reply to our attempt to address your comments.\n\nWe tried to do our best. Thanks for your understanding.",
" I do not see the authors did any effort to address my c... | [
-1,
-1,
5,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
-1,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3
] | [
"fSne-OwL9U",
"jFqwXgfKqBN",
"iclr_2022_Lwclw6u3Pcw",
"8-OKqsNZg7K",
"iclr_2022_Lwclw6u3Pcw",
"atrGDywDQ_H",
"J7vyEF79dw-",
"53jAyptnmus",
"9MdbKRFTGfX",
"xRStNhchQ_0",
"TdNzaOdCREF",
"TdNzaOdCREF",
"V5Yk0potirM",
"6jkzn9v-7Bd",
"iclr_2022_Lwclw6u3Pcw",
"iclr_2022_Lwclw6u3Pcw",
"iclr... |
iclr_2022__faKHAwA8O | Representation Consolidation from Multiple Expert Teachers | A library of diverse expert models transfers better to a novel task than a single generalist model. However, growing such a library indefinitely is impractical. Hence, we explore the problem of learning a consolidated image feature representation from a collection of related task-specific teachers that transfer well on novel recognition tasks. This differs from traditional knowledge distillation in which a student model is trained to emulate the input/output functionality of a teacher. Indeed, we observe experimentally that standard distillation of task-specific teachers, or using these teacher representations directly, **reduces** downstream transferability compared to a task-agnostic generalist model. We show that a simple multi-head, multi-task distillation method using an unlabeled proxy dataset and adding a generalist teacher is sufficient to consolidate representations from task-specific teacher(s). We improve downstream performance, outperforming the teacher (or best of all teachers) as well as the strong baseline of ImageNet pre-trained features. Our method almost reaches the performance of a multi-task joint training oracle, reaping the benefit of the teachers without replaying their training data. | Reject | Authors present an approach to consolidate multiple teachers into a single student model that can be adapted to new tasks. The method involves using a proxy dataset to facilitate distillation to prevent having to replay images from the teacher datasets. A multi-task multi-head objective is utilized, agnostic to the loss function, in which two are studied. Downstream task performance is used as the performance measure.
Pros:
- The problem of how to best leverage multiple teachers for a downstream task is important and interesting.
- Presents a method to generate distilled students that can be finetuned to tasks, demonstrating performance gains over baselines (ImageNet alone or a task-specific teacher).
- Easy to follow and implement.
- Analysis across multiple datasets.
Cons:
- Multiple reviewers expressed concerns about current level of novelty / contribution. In some sense, it is natural to expect that combinations of task-related and generalist distillation would improve performance.
- Main results demonstrate improvements in performance when teachers and tasks are related to one another. But the authors do not address how to select task-specific teachers for distillation. Related tasks and their matching to the target task are assumed to be known. The authors cited related prior works that attempt to do this matching, but do not apply them in their study for a full solution.
- Authors do not study variations of generalist teachers. How does changing the generalist teacher impact performance?
- Some reviewers expressed concern presentation is not clear. In particular, the style of figures may not be appropriate to best convey results and analyses of this type of work. Comparing different approaches is difficult looking at thin lines. Tables are perhaps better suited to convey these results.
- Multiple reviewers expressed concerns that the full-finetuning results are not convincing (Fig. 4), though the few-shot results look more convincing
The authors and reviewers interacted during the discussion, but the reviewers maintained their recommendation of weak reject. All reviewers were unanimous in their decisions. The authors are encouraged to take all the comments into consideration and submit to another venue. | val | [
"xzwil3GhfP5",
"Cx60PgTSfrg",
"TWeWaeJ-td",
"4CPM_0jPnz",
"QIcbeZn6G0",
"B-ATNs_JA8j",
"j0cNZWfHTIu",
"xPMozI_UQfo",
"Nx596B8xqYD"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks to the authors for the response. After reading other reviewers' comments and responses, I will maintain the original score.",
"The paper introduces a representation consolidation method to properly aggregate the pre-trained knowledge from multiple teachers for transfer learning. It claims that a generali... | [
-1,
3,
-1,
-1,
-1,
-1,
5,
5,
5
] | [
-1,
4,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"TWeWaeJ-td",
"iclr_2022__faKHAwA8O",
"Nx596B8xqYD",
"Cx60PgTSfrg",
"xPMozI_UQfo",
"j0cNZWfHTIu",
"iclr_2022__faKHAwA8O",
"iclr_2022__faKHAwA8O",
"iclr_2022__faKHAwA8O"
] |
iclr_2022_S7vWxSkqv_M | Evaluating Predictive Distributions: Does Bayesian Deep Learning Work? | Posterior predictive distributions quantify uncertainties ignored by point estimates.
This paper introduces \textit{The Neural Testbed}, which provides tools for the systematic evaluation of agents that generate such predictions.
Crucially, these tools assess not only the quality of marginal predictions per input, but also joint predictions given many inputs.
Joint distributions are often critical for useful uncertainty quantification, but they have been largely overlooked by the Bayesian deep learning community.
We benchmark several approaches to uncertainty estimation using a neural-network-based data generating process.
Our results reveal the importance of evaluation beyond marginal predictions.
Further, they reconcile sources of confusion in the field, such as why Bayesian deep learning approaches that generate accurate marginal predictions perform poorly in sequential decision tasks, how incorporating priors can be helpful, and what roles epistemic versus aleatoric uncertainty play when evaluating performance.
We also present experiments on real-world challenge datasets, which show a high correlation with testbed results, and that the importance of evaluating joint predictive distributions carries over to real data.
As part of this effort, we opensource The Neural Testbed, including all implementations from this paper. | Reject | The paper describes a new testbed to evaluate Bayesian techniques in the context of joint predictive distribution. Since this is not the first paper that considers marginal vs joint distribution evaluation, the paper should include a thorough discussion of the differences with prior work. The paper simply states that it refutes Wang et al.'s previous observation that joint distributions do not distinguish techniques much more than marginals. However, the paper does not really explain why their observation is correct and why Wang's observation should be discarded. Since this is the core contribution of the paper and it is doubtful, this is problematic. The discussion of epistemic/aleatoric uncertainty also seems superfluous and therefore distracts the reader. | train | [
"5yfj2OET1yj",
"bDHE_MFTEE7",
"ernyoeXgHL",
"mZDe0aWxfgn",
"B72QSMGC915",
"4yhoQfuK96W",
"eMHgyT3LKP",
"ggyOzAKlK4g",
"KePH6qMymCE",
"r8ugKAZlqCA",
"Yfjmyb0bevJ",
"EyxJPpQVrKN",
"m-oBnAcdGTJ",
"drKiLtXQKo4",
"KRPEEUY7NWN",
"Og0oOMeWEIn",
"adyhJr54c7",
"VkO09YBhwDb",
"QA8SBOaXQGN"... | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
... | [
" To follow up on the requests for un-normalized levels per agent:\n\n- accuracy: https://ibb.co/jggDDSZ (all around 80%)\n- calibration: https://ibb.co/CJMbTHB (all around (0.075)\n\nWe believe that the average *accuracy* of around 80% shows that the problems are not totally trivial for these agents, since even af... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"ernyoeXgHL",
"4yhoQfuK96W",
"B72QSMGC915",
"EyxJPpQVrKN",
"eMHgyT3LKP",
"VkO09YBhwDb",
"QA8SBOaXQGN",
"iclr_2022_S7vWxSkqv_M",
"r8ugKAZlqCA",
"Yfjmyb0bevJ",
"adyhJr54c7",
"m-oBnAcdGTJ",
"drKiLtXQKo4",
"KRPEEUY7NWN",
"Og0oOMeWEIn",
"ggyOzAKlK4g",
"mpoDN5Rsr97",
"W6vghH1jSzO",
"Hc... |
iclr_2022_ezbMFmQY7L | C5T5: Controllable Generation of Organic Molecules with Transformers | Methods for designing organic materials with desired properties have high potential impact across fields such as medicine, renewable energy, petrochemical engineering, and agriculture. However, using generative models for this task is difficult because candidate compounds must satisfy many constraints, including synthetic accessibility, intellectual property attributes, ``chemical beauty'' (Bickerton et al., 2020), and other considerations that are intuitive to domain experts but can be challenging to quantify. We propose C5T5, a novel self-supervised pretraining method that works in tandem with domain experts by making zero-shot select-and-replace edits, altering organic substances towards desired property values. C5T5 operates on IUPAC names---a standardized molecular representation that intuitively encodes rich structural information for organic chemists but that has been largely ignored by the ML community. Our technique requires no edited molecule pairs to train and only a rough estimate of molecular properties, and it has the potential to model long-range dependencies and symmetric molecular structures more easily than graph-based methods. We demonstrate C5T5's effectiveness on four physical properties relevant for drug discovery, showing that it learns successful and chemically intuitive strategies for altering molecules towards desired property values.
| Reject | This work proposes to use a transformer model and language-model-inspired self-supervised training techniques to generate local modifications of organic molecules. The use of IUPAC names coupled with language-inspired pre-training is indeed an interesting idea worthy of exploration. The paper shows a lot of promise in this regard but needs more work to carry it through to the finish line. In the rebuttal, the authors have provided strong arguments for the advantages of using the IUPAC representation. While these arguments make sense, they are more or less conceptual, and clearer empirical evidence is required to back them up. | test | [
"m5HLRnObd_0",
"G5zNlEXK6G",
"HNmbr9BsrHC",
"_Z0gNkN_4-F",
"zgJdg2SjLBA",
"dMTleS7ssYp",
"YINVLSJQalt",
"OPnIYO9cC72",
"qE6Lfjqtcu",
"DE3sqv1oTJJ",
"upUbumEZD4E",
"V9hRg3ZS_Pu"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for the thorough replies to all the reviewer's comments. I think the paper has improved somewhat with the additional discussion, but it is not yet at a level of theoretical novelty or applied performance gains to warrant acceptance, and issues such as comparing with more established baselines.... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"dMTleS7ssYp",
"OPnIYO9cC72",
"_Z0gNkN_4-F",
"zgJdg2SjLBA",
"DE3sqv1oTJJ",
"V9hRg3ZS_Pu",
"V9hRg3ZS_Pu",
"upUbumEZD4E",
"iclr_2022_ezbMFmQY7L",
"iclr_2022_ezbMFmQY7L",
"iclr_2022_ezbMFmQY7L",
"iclr_2022_ezbMFmQY7L"
] |
iclr_2022_pgkwZxLW8b | Efficient Image Representation Learning with Federated Sampled Softmax | Learning image representations on decentralized data can bring many benefits in cases where data cannot be aggregated across data silos. Softmax cross entropy loss is highly effective and commonly used for learning image representations. Using a large number of classes has proven to be particularly beneficial for the descriptive power of such representations in centralized learning. However, doing so on decentralized data with Federated Learning is not straightforward, as the demand on computation and communication increases proportionally to the number of classes. In this work we introduce Federated Sampled Softmax, a novel resource-efficient approach for learning image representation with Federated Learning. Specifically, the FL clients sample a set of negative classes and optimize only the corresponding model parameters with respect to a sampled softmax objective that approximates the global full softmax objective. We analytically examine the loss formulation and empirically show that our method significantly reduces the number of parameters transferred to and optimized by the client devices, while performing on par with the standard full softmax method. This work creates a possibility for efficiently learning image representations on decentralized data with a large number of classes in a privacy preserving way. | Reject | The paper revisits representation learning for extreme settings (large number of class categories) in a federated learning setup. The authors show how each client can sample a set of negative classes and optimize only the corresponding model parameters with respect to a sampled softmax objective that approximates the global full softmax objective. The authors investigate the interest of the approach for image classification and image retrieval.
The reviewers appreciated the interest of the approach to reduce communication and the experimental evaluation on several datasets. The reviewers also expressed concerns about privacy, a central concern in federated learning. One reviewer noted for instance that ‘since every sampled set of each client has to include the classes that the client has, the central server can infer the classes the client has’. The reviewers would also have liked to see a more comprehensive evaluation, in the absence of theoretical guarantees. Finally, the reviewers expressed concerns regarding accuracy/efficiency trade-offs, with one reviewer commenting that “the proposed method degrades the accuracy”.
The authors submitted responses to the reviewers' comments. The authors discussed the challenges related to privacy. The authors also commented on other gradient sparsification communication-reducing competing approaches (FedAwS) and the choice of datasets. After reading the response, updating the reviews, and discussion, the reviewers found that ‘the current good results are only obtained on smaller-scale datasets with fewer classes [while] in machine learning, the phenomenon could be quite different at different scales’ and that ‘it is not clear if the proposed method can outperform TernGrad at the same amount of transferred data [and] TernGrad also has a better convergence proof compared to the proposed method’.
We encourage the authors to pursue their approach further, taking into account the reviewers' comments, encouragements, and suggestions. Recent progress in theoretical frameworks for privacy protection in FL (secure multi-party computation, etc.; see the recent survey by Kairouz et al. in FnT in ML) should help the authors develop guarantees for their approach. Moreover, the reviewers suggested a clear path towards further improvements of the experimental evaluation.
The revision of the paper will generate a stronger submission to a future venue.
Reject. | train | [
"jocMw1-Fg_w",
"pO_yXIwfGuN",
"gnq9sqCs1TM",
"fOsUpnK-WSk",
"jTglU52xMBE",
"wDNkyqkHTMT",
"pgSyR9HIb6",
"Q4_3zIH2SMm",
"fnD4GR-s73",
"PvIyXw0G8lt"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for the rebuttal. The authors addressed some of my concerns. However, my main concerns on the novelty/contributions are still not fully addressed. No matter in a centralized or a federated setting, adding the positive classes seems quite obvious and straightforward. No further discussion is pr... | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"gnq9sqCs1TM",
"wDNkyqkHTMT",
"fnD4GR-s73",
"PvIyXw0G8lt",
"pgSyR9HIb6",
"Q4_3zIH2SMm",
"iclr_2022_pgkwZxLW8b",
"iclr_2022_pgkwZxLW8b",
"iclr_2022_pgkwZxLW8b",
"iclr_2022_pgkwZxLW8b"
] |
iclr_2022_AawMbgacl0t | Image Functions In Neural Networks: A Perspective On Generalization | In this work, we show that training with SGD on ReLU neural networks gives rise to a natural set of functions for each image that are not perfectly correlated until later in training. Furthermore, we show experimentally that the intersection of paths for different images also changes during the course of training. We hypothesize that this lack of correlation and changing intersection may be a factor in explaining generalization, because it encourages the model to use different features at different times, and pass the same image through different functions during training. This may improve generalization in two ways. 1) By encouraging the model to learn the same image in different ways, and learn different commonalities between images, comparable to model ensembling. 2) By improving algorithmic stability, as for a particular feature, the model is not always reliant on the same set of images, so the removal of an image may not adversely affect the loss. | Reject | In an attempt to understand generalization, this paper aims at understanding the dynamics of functions presented by the network for different images in the training set. Authors look at activation patterns (whether a ReLU activation is on or off) as a way of characterizing the active paths in the network and approximating the function presented by the network for each image. The authors study different related statistics (e.g., correlation) and how they evolve during training.
Pros:
- Understanding the dynamics of training, how diversity is encouraged by the training procedure and its relationship to generalization is an important problem.
- This paper takes an empirical approach and tries to make interesting empirical observations about the dynamics of the training.
Cons:
- The paper is poorly written in terms of structure, notation, and making clear arguments with sufficient evidence.
- Some empirical trends are shown but their connections to the main claim of the paper about generalization is very weak. The main attempt to connect the observations to generalization is Fig. 7 which shows model accuracy correlated with the ratio of early to mid overlap. This is problematic both because it only has 6 data points and also because a simple correlation analysis is not enough to establish this claim which is more about the cause of generalization.
Reviewers have pointed to various concerns including but not limited to clarity of the paper, lack of rigorous arguments, not providing enough evidence for the arguments, etc. Unfortunately, authors did not participate in the discussion period.
Given the above concerns, I recommend rejecting the paper. | train | [
"rJZlNSIHag",
"Aj1mXxxPmkE",
"Ir23inGAm8",
"F8W7ypYUYIH"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a different look at why neural networks generalize despite optimizing to zero training error, over-parameterization, etc. The contribution is mostly experimental in the sense of computing various statistics of a model during training and correlating those statistics with generalization perform... | [
3,
3,
3,
3
] | [
3,
4,
2,
3
] | [
"iclr_2022_AawMbgacl0t",
"iclr_2022_AawMbgacl0t",
"iclr_2022_AawMbgacl0t",
"iclr_2022_AawMbgacl0t"
] |
iclr_2022_VnurXbqxr0B | STRIC: Stacked Residuals of Interpretable Components for Time Series Anomaly Detection | We present a residual-style architecture for interpretable forecasting and anomaly detection in multivariate time series.
Our architecture is composed of stacked residual blocks designed to separate components of the signal such as trends, seasonality, and linear dynamics.
These are followed by a Temporal Convolutional Network (TCN) that can freely model the remaining components and can aggregate global statistics from different time series as context for the local predictions of each time series. The architecture can be trained end-to-end and automatically adapts to the time scale of the signals.
After modeling the signals, we use an anomaly detection system based on the classic CUMSUM algorithm and a variational approximation of the $f$-divergence to detect both isolated point anomalies and change-points in statistics of the signals.
Our method outperforms state-of-the-art robust statistical methods on typical time series benchmarks where deep networks usually underperform. To further illustrate the general applicability of our method, we show that it can be successfully employed on complex data such as text embeddings of newspaper articles. | Reject | This paper studies the important problem of time series anomaly detection using deep neural networks (DNNs). Unlike many other DNN models, it focuses on incorporating in its model architecture interpretable components that are inspired by previous studies based on both conventional statistical methods and more recent DNN models.
While the paper has merits as pointed out by the reviewers (esp. TtBt), a number of concerns have also been raised, including the choice of datasets (e.g., by reviewers rnBY and zX4p). We appreciate the authors’ effort by adding some preliminary results of further experiments, but addressing all the concerns thoroughly will need a lot more work to get a scholarly paper that is more ready for publication. We believe this work has potential to be accepted for publication in a reputable venue if the concerns are thoroughly addressed after substantial revision. | train | [
"oROvXjhzdwk",
"BR4Awe9ZHGF",
"mpUhlJqFgID",
"aX69n3oALPh",
"bb2nH2WWUnO",
"7xxftWKz1oN",
"mtdUq4pqtIj",
"TwPcQh5VCP",
"WLSgUFVNDE"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for responding to the comments in my original review. With the changes incorporated in the revised version, it confirms my relatively positive view on the paper.",
" Thank you for the detailed reply to my concerns. The authors are encouraged to improve the paper with an improved discussion o... | [
-1,
-1,
-1,
5,
-1,
-1,
-1,
6,
5
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
4,
5
] | [
"7xxftWKz1oN",
"mtdUq4pqtIj",
"bb2nH2WWUnO",
"iclr_2022_VnurXbqxr0B",
"aX69n3oALPh",
"TwPcQh5VCP",
"WLSgUFVNDE",
"iclr_2022_VnurXbqxr0B",
"iclr_2022_VnurXbqxr0B"
] |
iclr_2022_4sz0AcJ8HUB | SERCNN: Stacked Embedding Recurrent Convolutional Neural Network in Depression Detection on Twitter | Conventional approach of self-reporting-based screening for depression is not scalable, expensive, and requires one to be fully aware of their mental health. Motivated by previous studies that demonstrated great potentials for using social media posts to monitor and predict one's mental health status, this study utilizes natural language processing and machine learning techniques on social media data to predict one's risk of depression. Most existing works utilize handcrafted features, and the adoption of deep learning in this domain is still lacking. Social media texts are often unstructured, ill-formed, and contain typos, making handcrafted features and conventional feature extraction methods inefficient. Moreover, prediction models built on these features often require a high number of posts per individual for accurate predictions. Therefore, this study proposes a Stacked Embedding Recurrent Convolutional Neural Network (SERCNN) for a more optimized prediction that has a better trade-off between the number of posts and accuracy. Feature vectors of two widely available pretrained embeddings trained on two distinct datasets are stacked, forming a meta-embedding vector that has a more robust and richer representation for any given word. We adapt Lai et al. (2015) RCNN approach that incorporates both the embedding vector and context learned from the neural network to form the final user representation before performing classification. We conducted our experiments on the Shen et al. (2017) depression Twitter dataset, the largest ground truth dataset used in this domain. Using SERCNN, our proposed model achieved a prediction accuracy of 78% when using only ten posts from each user, and the accuracy increases to 90% with an F1-measure of 0.89 when five hundred posts are analyzed. 
| Reject | This paper tackles a very important problem of detecting depression on Twitter. As the reviewers expressed in their reviews, this paper will be of interest to the community of researchers applying ML models to the mental health domain. It is unfortunate that the authors did not respond to the reviewers' concerns and questions. I strongly encourage the authors to improve the paper based on the reviewers' comments and questions and resubmit to a future venue. | train | [
"DZyCljsdfh",
"83n6yg8mq1",
"yrPnw8XWj_o",
"qr39c3jqLIT"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a concatenation approach to combine multiple social media texts through a stacked embedding layer and demonstrates its effect in depression prediction based on text. The motivation and the results of the paper are very interesting and definitely will be of importance in a more applied community ... | [
3,
5,
3,
3
] | [
2,
4,
5,
5
] | [
"iclr_2022_4sz0AcJ8HUB",
"iclr_2022_4sz0AcJ8HUB",
"iclr_2022_4sz0AcJ8HUB",
"iclr_2022_4sz0AcJ8HUB"
] |
iclr_2022_vPK-G5HbnWg | PACE: A Parallelizable Computation Encoder for Directed Acyclic Graphs | Optimization of directed acyclic graph (DAG) structures has many applications, such as neural architecture search (NAS) and probabilistic graphical model learning. Encoding DAGs into real vectors is a dominant component in most neural-network-based DAG optimization frameworks. Currently, most popular DAG encoders use an asynchronous message passing scheme which sequentially processes nodes according to the dependency between nodes in a DAG. That is, a node must not be processed until all its predecessors are processed. As a result, they are inherently not parallelizable. In this work, we propose a Parallelizable Attention-based Computation structure Encoder (PACE) that processes nodes simultaneously and encodes DAGs in parallel. We demonstrate the superiority of PACE through encoder-dependent optimization subroutines that search the optimal DAG structure based on the learned DAG embeddings. Experiments show that PACE not only improves the effectiveness over previous sequential DAG encoders with a significantly boosted training and inference speed, but also generates smooth latent (DAG encoding) spaces that are beneficial to downstream optimization subroutines. | Reject | Thank you for your first (hopefully of many!) submissions to ICLR.
This work describes a method for allowing nodes to be processed concurrently instead of sequentially, yielding a reduction in computation time.
The reviewers identified a number of concerns about the paper (lack of citations and baselines, additional experiments demonstrating scale, and a number of clarifications and motivations needed in the text). The authors addressed the majority of these concerns during the rebuttal. I'm afraid a promise of a revised manuscript is not a sufficient substitute for the reviewers seeing a revised manuscript, and due to the nature of the feedback, a revision is needed, which the reviewers have not seen, so they could not check that their concerns are fully addressed. Therefore, at this stage, unfortunately, I recommend rejection.
"OUe-E6-2c6P",
"CAJEaQTxMSm",
"yYgLEOnvLQ",
"5j7uq4a1MWD",
"8c2zxr7151",
"nbPrl1ArJ6Y",
"GW5UHiPr93i",
"KzSl1e-50Si",
"PJNKwgOtNB2",
"zYJJQRYvEuA",
"V_79FGsiM0t",
"Kb5-0lFx09q",
"mO9M2mQsVKi",
"vat8mwb1wd4",
"EA18qLoJ5ad",
"9OjiYs8GZJo"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes an encoder architecture for directed acyclic graphs which is parallelizable in computation and thus more efficient than asynchronous message passing alternatives. The model is based on representing the graph in its canonical form, i.e. a unique representation of its isomorphism class, which is ... | [
6,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
4,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2022_vPK-G5HbnWg",
"iclr_2022_vPK-G5HbnWg",
"5j7uq4a1MWD",
"iclr_2022_vPK-G5HbnWg",
"mO9M2mQsVKi",
"iclr_2022_vPK-G5HbnWg",
"EA18qLoJ5ad",
"9OjiYs8GZJo",
"OUe-E6-2c6P",
"5j7uq4a1MWD",
"5j7uq4a1MWD",
"OUe-E6-2c6P",
"9OjiYs8GZJo",
"EA18qLoJ5ad",
"iclr_2022_vPK-G5HbnWg",
"iclr_2022_... |
iclr_2022_aJ_GcB4vcT0 | Unsupervised Learning of Neurosymbolic Encoders | We present a framework for the unsupervised learning of neurosymbolic encoders, i.e., encoders obtained by composing neural networks with symbolic programs from a domain-specific language. Such a framework can naturally incorporate symbolic expert knowledge into the learning process and lead to more interpretable and factorized latent representations than fully neural encoders. Also, models learned this way can have downstream impact, as many analysis workflows can benefit from having clean programmatic descriptions. We ground our learning algorithm in the variational autoencoding (VAE) framework, where we aim to learn a neurosymbolic encoder in conjunction with a standard decoder. Our algorithm integrates standard VAE-style training with modern program synthesis techniques. We evaluate our method on learning latent representations for real-world trajectory data from animal biology and sports analytics. We show that our approach offers significantly better separation than standard VAEs and leads to practical gains on downstream tasks. | Reject | The authors develop a technique for unsupervised learning of neurosymbolic encoders. Some of the difficulty with the paper came from its limited accessibility to a broader machine learning audience, though there is related work such as Shah 2020 in machine learning. The other difficulty came from the experiments: there were questions about both the metrics and the task. Quoting a reviewer
"Current evaluation seems not very convincing to me. The authors only show that with the help of symbolic program, the method could get representations with better cluster quality (program helps representation learning). But I think a more intersting perspective is to see whether the learned program itself is helpful. For example, whether it could be used to predict future trajectory (such as 3-body problem), or even help solving some high-level reasoning tasks."
and another reviewer
"Maybe something comparing the programs of experts to what the latent representation learned?"
Making the paper more accessible and improving the experiments will improve its quality. | train | [
"buMHwN7OM4Y",
"Nyx3y8soNGT",
"XPeyY1pU_w3",
"NBNvR1sJ9s1",
"sDXwYkapzM-",
"8YnwoHWA1_D",
"SxOXqQkdJQB",
"nhfBvBZXZ6L",
"zHslqxab8Jc",
"MyK3_gcPvNF",
"I_DmoVDizn"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for your response, and we provide additional clarifications below.\n\n> However, my concerns were about other tasks themselves and comparisons with TVAE variants in Table 3 and they are still unclear.\n\nSimilar to other papers on unsupervised learning ([1, 2, 3]), we evaluate our learned pr... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
5,
5,
5
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"Nyx3y8soNGT",
"NBNvR1sJ9s1",
"iclr_2022_aJ_GcB4vcT0",
"I_DmoVDizn",
"MyK3_gcPvNF",
"XPeyY1pU_w3",
"zHslqxab8Jc",
"iclr_2022_aJ_GcB4vcT0",
"iclr_2022_aJ_GcB4vcT0",
"iclr_2022_aJ_GcB4vcT0",
"iclr_2022_aJ_GcB4vcT0"
] |
iclr_2022_gEynpztqZug | Mako: Semi-supervised continual learning with minimal labeled data via data programming | Lifelong machine learning (LML) is a well-known paradigm mimicking the human learning process by utilizing experiences from previous tasks. Nevertheless, an issue that has been rarely addressed is the lack of labels at the individual task level. The state-of-the-art of LML largely addresses supervised learning, with a few semi-supervised continual learning exceptions which require training additional models, which in turn impose constraints on the LML methods themselves. Therefore, we propose Mako, a wrapper tool that mounts on top of supervised LML frameworks, leveraging data programming. Mako imposes no additional knowledge base overhead and enables continual semi-supervised learning with a limited amount of labeled data. This tool achieves similar performance, in terms of per-task accuracy and resistance to catastrophic forgetting, as compared to fully labeled data. We ran extensive experiments on LML task sequences created from standard image classification data sets including MNIST, CIFAR-10 and CIFAR-100, and the results show that after utilizing Mako to leverage unlabeled data, LML tools are able to achieve $97\%$ performance of supervised learning on fully labeled data in terms of accuracy and catastrophic forgetting prevention. Moreover, when compared to baseline semi-supervised LML tools such as CNNL, ORDisCo and DistillMatch, Mako significantly outperforms them, increasing accuracy by $0.25$ on certain benchmarks. | Reject | This submission proposes "Mako", which enables continual learning when only a limited amount of labeled data is available (along with a good deal of unlabeled data). Reviewers shared concerns about difficulty in understanding which components of the proposed system were novel, especially given that the most important components seemed to be proposed in past work. 
Reviewers also had difficulty getting insight into which parts of the system were most useful, and further requested additional experiments on harder benchmarks. The consensus was therefore to reject the paper. | train | [
"-9j_vc9gAFu",
"vP3KQWYt-0g",
"7IMnI9DORqJ",
"tJder0l1Vax",
"_UXaRa1GvBt",
"ccjgVgyJ3ES",
"fyaGl2-YeBk",
"nMm2A_VgTk",
"jXLNhQAAgfF",
"T1hB22vshV3"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the authors’ responses, after reading the rebuttal and other reviewers’ comments, I still hold my initial ratings.",
" Thank you for your update. I read the rebuttal and other reviewers' comments and still think that my concerns have not been sufficiently addressed, so I will keep my previous score. ... | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
3,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
5
] | [
"nMm2A_VgTk",
"7IMnI9DORqJ",
"jXLNhQAAgfF",
"T1hB22vshV3",
"nMm2A_VgTk",
"fyaGl2-YeBk",
"iclr_2022_gEynpztqZug",
"iclr_2022_gEynpztqZug",
"iclr_2022_gEynpztqZug",
"iclr_2022_gEynpztqZug"
] |
iclr_2022_JYQYysrNT3M | Reinforcement Learning with Ex-Post Max-Min Fairness | We consider reinforcement learning with vectorial rewards, where the agent receives a vector of $K\geq 2$ different types of rewards at each time step. The agent aims to maximize the minimum total reward among the $K$ reward types. Different from existing works that focus on maximizing the minimum expected total reward, i.e. \emph{ex-ante max-min fairness}, we maximize the expected minimum total reward, i.e. \emph{ex-post max-min fairness}. Through an example and numerical experiments, we show that the optimal policy for the former objective generally does not converge to optimality under the latter, even as the number of time steps $T$ grows. Our main contribution is a novel algorithm, Online-ReOpt, that achieves near-optimality under our objective, assuming an optimization oracle that returns a near-optimal policy given any scalar reward. The expected objective value under Online-ReOpt is shown to converge to the asymptotic optimum as $T$ increases. Finally, we propose offline variants to ease the burden of online computation in Online-ReOpt, and we propose generalizations from the max-min objective to concave utility maximization. | Reject | This paper studies an RL problem with vector rewards, where the goal is to maximize the expected minimum total reward (ex-post max-min fairness). This is different from prior works on a similar topic, where the goal is to maximize the minimum expected total reward (ex-ante max-min fairness). The authors propose an algorithm for solving the problem with $O(T^{2 / 3})$ regret and evaluate it.
This paper received two borderline reject and two reject reviews. The reviewers recognize the novelty of the objective. However, they are also concerned with its motivation and that the proposed algorithm relies on strong assumptions, such as that the oracle used knows the underlying reward and transition models, or at least has some estimate of them. In the end, the scores of this paper are not good enough for acceptance. Therefore, it is rejected.
"bb0_-a3fAcX",
"jSL5-s2cXft",
"79rrM31Ncq",
"YUKPR7sUiB7",
"97eGZF2hM0F",
"fZ0deZ_V51R",
"nboLWc4sq_L",
"QabIFiPDlNK",
"JI2Oa-AgcKP",
"tgCpwGI1pgZ"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the thoughtful suggestions, and let us input some additional comments:\n\n1. The reviewer's suggestion is in alignment with our design idea that, in order to have $\\bar{V}$ converge to $E[\\bar{V}^\\pi]$ for a certain policy $\\pi$, we generally have to follow another non-stationary policy $\\pi'$ dif... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
5,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"jSL5-s2cXft",
"97eGZF2hM0F",
"tgCpwGI1pgZ",
"JI2Oa-AgcKP",
"QabIFiPDlNK",
"nboLWc4sq_L",
"iclr_2022_JYQYysrNT3M",
"iclr_2022_JYQYysrNT3M",
"iclr_2022_JYQYysrNT3M",
"iclr_2022_JYQYysrNT3M"
] |
iclr_2022_fwJWhOxuzV9 | Semi-supervised Offline Reinforcement Learning with Pre-trained Decision Transformers | Pre-training deep neural network models using large unlabelled datasets followed by fine-tuning them on small task-specific datasets has emerged as a dominant paradigm in natural language processing (NLP) and computer vision (CV). Despite the widespread success, such a paradigm has remained atypical in reinforcement learning (RL).
In this paper, we investigate how we can leverage large reward-free (i.e. task-agnostic) offline datasets of prior interactions to pre-train agents that can then be fine-tuned using a small reward-annotated dataset. To this end, we present Pre-trained Decision Transformer (PDT), a simple yet powerful algorithm for semi-supervised Offline RL. By masking reward tokens during pre-training, the transformer learns to autoregressively predict actions based on previous state and action context and effectively extracts behaviors present in the dataset. During fine-tuning, rewards are un-masked and the agent learns the set of skills that should be invoked for the desired behavior as per the reward function. We demonstrate the efficacy of this simple and flexible approach on tasks from the D4RL benchmark with limited reward annotations. | Reject | This paper explored pre-training for deep offline reinforcement learning, developing a method that first pre-trained decision transformers on trajectories without rewards, and then fine-tuned on limited data with rewards. The reviewers were pleased with the overall research questions and directions, but found that there were substantial shortcomings in the experimental setup and results that make this paper not yet suitable for inclusion. The approach is relatively simple and straightforward, which is actually a good thing, but that means that it must be correspondingly investigated and developed with convincing empirical results. Unfortunately, there are a number of open questions about the experimental setup, and the results are not convincing that the method is effective against alternatives, as detailed in the reviews. There was no author rebuttal. | test | [
"fyL0m2EceVu",
"RGhUe9gMCOv",
"0swsDhWBzTN",
"yn7m90RgbRo"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes pre-trained decision transformers (PDT) that are trained on an offline dataset collected by an expert/semi-expert agent with reward information masked out. PDT is then evaluated by fine-tuning on the same task. Extending decision transformer to work on unlabelled datasets is an interesting res... | [
3,
3,
5,
5
] | [
4,
3,
4,
4
] | [
"iclr_2022_fwJWhOxuzV9",
"iclr_2022_fwJWhOxuzV9",
"iclr_2022_fwJWhOxuzV9",
"iclr_2022_fwJWhOxuzV9"
] |
iclr_2022_8CEJlHbKoP4 | Learning a metacognition for object detection | In contrast to object recognition models, humans do not blindly trust their perception when building representations of the world, instead recruiting metacognition to detect percepts that are unreliable or false, such as when we realize that we mistook one object for another. We propose METAGEN, an unsupervised model that enhances object recognition models through a metacognition. Given noisy output from an object-detection model, METAGEN learns a meta-representation of how its perceptual system works and uses it to infer the objects in the world responsible for the detections. METAGEN achieves this by conditioning its inference on basic principles of objects that even human infants understand (known as Spelke principles: object permanence, cohesion, and spatiotemporal continuity). We test METAGEN on a variety of state-of-the-art object detection neural networks. We find that METAGEN quickly learns an accurate metacognitive representation of the neural network, and that this improves detection accuracy by filling in objects that the detection model missed and removing hallucinated objects. This approach enables generalization to out-of-sample data and outperforms comparison models that lack a metacognition. | Reject | This work investigates a metacognition model for object detection. Reviewer RHiF wrote the best summary for this work:
The paper proposes a new method to incorporate Spelke's principles of object perception as constraints to improve the performance of an out-of-the-box object detector. This is done via defining a hierarchical generative model which defines "metacognitive" priors over a set of observations. Through joint inference over these metacognitive priors and new unobserved states, the method outputs better object detections. The authors show improved performance on a synthetic dataset which contains scenes rendered in a virtual environment.
All reviewers agree that this is a really novel and interesting approach to enforcing consistency constraints in object detection, but had various issues with the experiments. In its current state, I believe it would make a very strong workshop paper, but it is not yet ready for the ICLR 2022 conference. The authors found the reviews to be helpful, in particular the advice about the dataset construction and metric definitions, and I believe that future versions of this work will be significantly improved. We look forward to reading a revised version of this work in a high impact journal or future ML conference, good luck!
"G0n16HHrwU",
"KvYs99SwaVW",
"xptab57VVMn",
"IG6NtJnUw8",
"bEhHRiVGht",
"2GFTq4YyfFg",
"XEHe8JIxjXu"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thx for the reply of the author. This paper definitely proposes a very interesting paradigm for object detection, and also broaden my understanding. I look forward to the refined version in the future when the proposed concerns (e.g., datasets and evaluation) are resolved. ",
" I just want to say thank you to t... | [
-1,
-1,
6,
-1,
5,
3,
5
] | [
-1,
-1,
4,
-1,
4,
3,
3
] | [
"IG6NtJnUw8",
"IG6NtJnUw8",
"iclr_2022_8CEJlHbKoP4",
"iclr_2022_8CEJlHbKoP4",
"iclr_2022_8CEJlHbKoP4",
"iclr_2022_8CEJlHbKoP4",
"iclr_2022_8CEJlHbKoP4"
] |
iclr_2022_bHqI0DvSIId | Neural Simulated Annealing | Simulated annealing (SA) is a stochastic global optimisation technique applicable to a wide range of discrete and continuous variable problems. Despite its simplicity, the development of an effective SA optimiser for a given problem hinges on a handful of carefully handpicked components; namely, neighbour proposal distribution and temperature annealing schedule. In this work, we view SA from a reinforcement learning perspective and frame the proposal distribution as a policy, which can be optimised for higher solution quality given a fixed computational budget. We demonstrate that this Neural SA with such a learnt proposal distribution outperforms SA baselines with hand-selected parameters on a number of problems: Rosenbrock's function, the Knapsack problem, the Bin Packing problem, and the Travelling Salesperson problem. We also show that Neural SA scales well to large problems while again outperforming popular off-the-shelf solvers in terms of solution quality and wall clock time. | Reject | This work presents the Neural Simulated Annealing (NSA) approach as a heuristic for general combinatorial optimization problems. After reviewing the paper and reading the comments from the reviewers, here are the general comments:
- In general, the paper is clear enough. The contributions are stated in a proper way.
- The novelty is rather limited, but the key idea of using neural networks in SA, and training it with RL, has merit.
- This approach has merit but the novelty is very limited.
- The NSA improves the vanilla SA, but the benchmark reveals that NSA is not competitive enough with other state-of-the-art methods.
- The benchmark does not reveal enough information about the NSA against the SOTA methods.
- The work needs technical improvements, and more validation is required before it can be accepted. | val | [
"PPCiuy4clMD",
"shAERZ2-zN3",
"U9qLZr9w0gc",
"pqNwgLUSQam",
"76gLMhyMY3N",
"fn51jDIbVlL",
"BAvJHrzD0p",
"WwMErxtDJM",
"jZBpNHpK1qp",
"6gJ9_0KdAKH",
"5DfV_d1CrOH",
"5GsBuu02xLu",
"GM4cnjwgxWT",
"KxI8AB7j0qp",
"EcQZEPH02rT",
"wg-rBQMhpHs"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their detailed response!",
" We thank the reviewer once more for the discussion and clear feedback. This has been a very insightful review, and we appreciate the reviewer taking the time to join in the discussion.\n\n> the algorithm neither achieves SOTA performance on considered problem... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"GM4cnjwgxWT",
"pqNwgLUSQam",
"fn51jDIbVlL",
"U9qLZr9w0gc",
"BAvJHrzD0p",
"6gJ9_0KdAKH",
"5DfV_d1CrOH",
"iclr_2022_bHqI0DvSIId",
"6gJ9_0KdAKH",
"EcQZEPH02rT",
"WwMErxtDJM",
"wg-rBQMhpHs",
"KxI8AB7j0qp",
"iclr_2022_bHqI0DvSIId",
"iclr_2022_bHqI0DvSIId",
"iclr_2022_bHqI0DvSIId"
] |
iclr_2022_G9M4FU8Ggo | Neural Architecture Search via Ensemble-based Knowledge Distillation | Neural Architecture Search (NAS) automatically searches for well-performing network architectures from a given search space. The One-shot NAS method improves the training efficiency by sharing weights among the possible architectures in the search space, but unfortunately suffers from insufficient parameterization of each architecture due to interferences from other architectures. Recent works attempt to alleviate the insufficient parameterization problem by knowledge distillation, which lets the learning of all architectures (students) be guided by the knowledge (i.e., parameters) from a better-parameterized network (teacher), which can be either a pre-trained one (e.g., ResNet50) or some searched out networks with good accuracy performance up to now.
However, all these methods fall short in providing a sufficiently outstanding teacher, as they either depend on a pre-trained network that does not fit the NAS task the best, or the selected fitting teachers are still undertrained and inaccurate. In this paper, we take the first step to propose an ensemble-based knowledge distillation method for NAS, called EnNAS, which assembles an outstanding teacher by aggregating a set of architectures currently searched out with the most diversity (high diversity brings highly accurate ensembles); by doing so, EnNAS can deliver a high-quality knowledge distillation with outstanding teacher network (i.e., the ensemble network) all the time. Eventually, compared with existing works, on the real-world dataset ImageNet, EnNAS improved the top-1 accuracy of architectures searched out by 1.2% on average and 3.3% at most. | Reject | All reviewers recommend rejection, and I'm following this recommendation. | train | [
"Yn9UQ_Giq4",
"QsLQvJaJfH3",
"5YwtnKj6P__"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes an efficient concrete novel Neural Architecture KD one-shot NAS algorithm that uses a diversity-based sampling algorithm to assemble outstanding ensemble-based teachers for Knowledge Distillation. They demonstrate their method on Imagenet with several different search spaces and significantly i... | [
3,
5,
3
] | [
4,
5,
4
] | [
"iclr_2022_G9M4FU8Ggo",
"iclr_2022_G9M4FU8Ggo",
"iclr_2022_G9M4FU8Ggo"
] |
iclr_2022_rczz7TUKIIB | Loss meta-learning for forecasting | Meta-learning of loss functions for supervised learning has been used to date for classification tasks, or as a way to enable few-shot learning. In this paper, we show how a fairly simple loss meta-learning approach can substantially improve regression results. Specifically, we target forecasting of time series and explore case studies grounded on real-world data, and show that meta-learned losses can benefit the quality of the prediction both in cases that are apparently naive and in practical scenarios where the performance metric is complex, time-correlated, non-differentiable, or not known a-priori.
| Reject | The meta-learning framework based on learning the loss function for time series forecasting is an interesting and important topic. However, the reviewers think the literature, baselines, and experimental results need significant improvement. | train | [
"JRmpTiVAFEn",
"BgZDwfI2wl",
"a10ALsndQGL",
"5KVGQ8ycUmh"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a loss learning framework for the timeseries forecasting regression problem. They introduce two learning blocks, one for producing the next step prediction and the other for learning the loss function. The experimental results show that the proposed method slightly increase the performance comp... | [
1,
3,
6,
3
] | [
4,
5,
3,
4
] | [
"iclr_2022_rczz7TUKIIB",
"iclr_2022_rczz7TUKIIB",
"iclr_2022_rczz7TUKIIB",
"iclr_2022_rczz7TUKIIB"
] |
iclr_2022_97WDkHzofx | Interventional Black-Box Explanations | Deep Neural Networks (DNNs) are powerful systems able to freely evolve on their own from training data. However, like any highly parametrized mathematical model, capturing the explanation of any prediction of such models is rather difficult. We believe that there exist relevant mechanisms inside the structure of post-hoc DNNs that supports transparency and interpretability. To capture these mechanisms, we quantify the effects of parameters (pieces of knowledge) on models' predictions using the framework of causality. We introduce a general formalism of the causal diagram to express cause-effect relations inside the DNN's architecture. Then, we develop a novel algorithm to construct explanations of DNN's predictions using the $do$-operator. We call our method, Interventional Black-Box Explanations. On image classification tasks, we explain the behaviour of the model and extract visual explanations from the effects of the causal filters in convolution layers. We qualitatively demonstrate that our method captures more informative concepts compared to traditional attribution-based methods.
Finally, we believe that our method is orthogonal to logic-based explanation methods and can be leveraged to improve their explanations. | Reject | The reviewers are in consensus. I recommend that the authors take their recommendations into consideration in revising their manuscript. | train | [
"YoAOyxMVQPq",
"KbjmYVLDoFk",
"eVU8ipiXKb1",
"AdXVEqRNkZ",
"F3wS7-1tb7U"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewers for their time and comments on the paper. All the reviewers appreciated the idea of our paper and agreed on the lack of rigorous details and sufficient evaluations. With respect to the reviewers evaluations, we decided to withdraw our paper and invest more time on improving our work taking ... | [
-1,
3,
3,
3,
3
] | [
-1,
3,
3,
3,
4
] | [
"iclr_2022_97WDkHzofx",
"iclr_2022_97WDkHzofx",
"iclr_2022_97WDkHzofx",
"iclr_2022_97WDkHzofx",
"iclr_2022_97WDkHzofx"
] |
iclr_2022_Pfj3SXBCbVQ | On the Effectiveness of Quasi Character-Level Models for Machine Translation | Neural Machine Translation (NMT) models often use subword-level vocabularies to deal with rare or unknown words. Although some studies have shown the effectiveness of purely character-based models, these approaches have resulted in highly expensive models in computational terms. In this work, we explore the advantages of quasi character-level Transformers for low-resource NMT, as well as their ability to mitigate the catastrophic forgetting problem. We first present an empirical study on the effectiveness of these models as a function of the size of the training set. As a result, we found that for data-poor environments, quasi character-level Transformers present a competitive advantage over their large subword-level versions. Similarly, we study the generalization of this phenomenon in different languages, domains, and neural architectures. Finally, we conclude this work by studying the ability of these models to mitigate the effects of catastrophic forgetting in machine translation. Our work suggests that quasi character-level Transformers have a competitive advantage in data-poor environments and, although they do not mitigate the catastrophic forgetting problem, they greatly help to achieve greater consistency between domains. | Reject | This paper demonstrates the hypothesis that a very small word piece vocabulary (giving a "quasi character level" model) outperforms current methods of neural MT in truly low resource scenarios, and provides some auxiliary studies around word piece frequency and domain transfer. It considers LSTM, CNN, and Transformer NMT models. This is useful information for people working in low resource scenarios to know.
The paper got 3 reviews by people with very strong machine translation expertise. There was a general consensus that the paper was insufficiently aware of prior work on this topic and that it had problems in experiment construction which raised issues about the comprehensiveness of the results. That is, while this paper adopts an even smaller vocabulary, Sennrich and Zhang (2017) already showed that a much smaller subword vocabulary can give much stronger results for low resource MT (while Araabi and Monz questioned whether this was as true for Transformer NMT). Meanwhile Cherry et al. (2018) and Kreutzer and Sokolov (2018) already argued for the benefits of (almost) character-level NMT. On the experimental side, the lack of results on genuinely low-resource scenarios and the comment of Reviewer FBrF that the problem with larger subword vocabularies here may be mainly due to the small corpus size used for constructing the subword vocabulary are both quite important. Moreover, as mainly an MT experimental study, this paper seems better suited to a more specialized audience of MT researchers at an ACL, WMT, AMTA, etc. venue.
I recommend rejecting this paper as not sufficiently novel, with experiments that need further work, and lacking strong interest to a broader representation learning audience. | train | [
"T0MN7rE6EpO",
"Y3rU5L_b6gw",
"iM6594BlUE",
"r6PZ0qIjmVa"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper inspects the behavior of quasi-character level NMT models, where \"quasi-character level\" is meant to depict subword vocabularies which are an order of magnitude smaller than typical subword vocabulary sizes (e.g. 32K, 64K) used to train state of the art NMT models.\n\nThe dimensions of inspection are:... | [
3,
3,
3,
5
] | [
5,
5,
5,
4
] | [
"iclr_2022_Pfj3SXBCbVQ",
"iclr_2022_Pfj3SXBCbVQ",
"iclr_2022_Pfj3SXBCbVQ",
"iclr_2022_Pfj3SXBCbVQ"
] |
iclr_2022_E9e18Ms5TeV | A Large Batch Optimizer Reality Check: Traditional, Generic Optimizers Suffice Across Batch Sizes | Recently the LARS and LAMB optimizers have been proposed for training neural networks faster using large batch sizes. LARS and LAMB add layer-wise normalization to the update rules of Heavy-ball momentum and Adam, respectively, and have become popular in prominent benchmarks and deep learning libraries. However, without fair comparisons to standard optimizers, it remains an open question whether LARS and LAMB have any benefit over traditional, generic algorithms. In this work we demonstrate that standard optimization algorithms such as Nesterov momentum and Adam can match or exceed the results of LARS and LAMB at large batch sizes. Our results establish new, stronger baselines for future comparisons at these batch sizes and shed light on the difficulties of comparing optimizers for neural network training more generally. | Reject | This paper experimentally shows that commonly used standard solvers such as Nesterov momentum or Adam can achieve the same performance as optimizers such as LARS and LAMB specially proposed for large-batch training.
Large-batch training is a very important topic, and if the authors' argument is true, it could be an interesting discovery.
However, all reviewers were concerned about the limited technical contribution. Above all, when tuning the optimizer with the same computational budget (for a new task), it remains unclear whether the standard optimizers can match the performance of the large-batch optimizers (currently the authors tune the standard optimizers while fixing the hyperparameters of the large-batch solvers, to match their performance). The authors did not answer the reviewers' questions about this, nor did they answer the reviewers' other questions in great detail. Through discussion, all reviewers agreed on this concern and agreed to reject this paper.
The quality of the paper will be greatly improved if this concern is resolved. | train | [
"MSPZTPaov24",
"bAITonBZ74",
"ujcJuUk2Jhj",
"hwESdx5k6gO",
"1f3FtRGsTo7",
"W31xCYv8ON",
"BwgH_-A74aJ",
"tZZbiMGZwwo",
"HBVAiwlM0_V",
"v5jOjhjQrKo",
"AiIuXT5X8LW"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I'd first address the first four bullet points. I strongly believe that these details can be clarified further **in the paper itself**. I would suggest adding the pseudocode to the main paper. The authors spend a lot of time talking about philosophical issues (which are admittedly important) in the paper, but as ... | [
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
5,
5,
6
] | [
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"W31xCYv8ON",
"BwgH_-A74aJ",
"tZZbiMGZwwo",
"iclr_2022_E9e18Ms5TeV",
"iclr_2022_E9e18Ms5TeV",
"v5jOjhjQrKo",
"hwESdx5k6gO",
"hwESdx5k6gO",
"iclr_2022_E9e18Ms5TeV",
"iclr_2022_E9e18Ms5TeV",
"iclr_2022_E9e18Ms5TeV"
] |
iclr_2022_wzJnpBhRILm | Extreme normalization: approximating full-data batch normalization with single examples | While batch normalization has been successful in speeding up the training of neural networks, it is not well understood. We cast batch normalization as an approximation of the limiting case where the entire dataset is normalized jointly, and explore other ways to approximate the gradient from this limiting case. We demonstrate an approximation that removes the need to keep more than one example in memory at any given time, at the cost of a small factor increase in the training step computation, as well as a fully per-example training procedure, which removes the extra computation at the cost of a small drop in the final model accuracy. We further use our insights to improve batch renormalization for very small minibatches. Unlike previously proposed methods, our normalization does not change the function class of the inference model, and performs well in the absence of identity shortcuts. | Reject | The paper studies the introduction of a variant of batch normalization (BN) to train deep neural networks. The underlying idea is a two-step approach for per-sample normalization, relying on augmenting the computational graph to handle "several samples" nodes.
The reviewers have mentioned that the idea of altering the computational graph is interesting and potentially novel.
Yet, the numerical experiments were not precise or solid enough to back up the authors' claim that their proposed BN alternative is of practical interest.
It was also raised that the paper lacks theoretical support: there is no formal analysis, and most explanations are ad hoc.
"UPWa11hB9X",
"94gLF2aEiJ3",
"NJDN5UuSV8D",
"ElQ3GE8hKNv"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes an improved version of batch normalization that's intended to work well on small training batch sizes. The technique is similar to batch renormalization, but adopts several techniques to manipulate the backprop procedure. Experiments are are conducted primarily on the Inception V3 network, with... | [
3,
3,
5,
6
] | [
3,
4,
4,
2
] | [
"iclr_2022_wzJnpBhRILm",
"iclr_2022_wzJnpBhRILm",
"iclr_2022_wzJnpBhRILm",
"iclr_2022_wzJnpBhRILm"
] |
iclr_2022_6P6-N1gLQDC | Structural Causal Interpretation Theorem | Human mental processes allow for qualitative reasoning about causality in terms of mechanistic relations of the variables of interest, which we argue are naturally described by structural causal model (SCM). Since interpretations are being derived from mental models, the same applies for SCM. By defining a metric space on SCM, we provide a theoretical perspective on the comparison of mental models and thereby conclude that interpretations can be used for guiding a learning system towards true causality. To this effect, we present a theoretical analysis from first principles that results in a human-readable interpretation scheme consistent with the provided causality that we name structural causal interpretations (SCI). Going further, we prove that any existing neural induction method (NIM) is in fact interpretable. Our first experiment (E1) assesses the quality of such NIM-based SCI. In (E2) we observe evidence for our conjecture on improved sample-efficiency for SCI-based learning. After conducting a small user study, in (E3) we observe superiority in human-based over NIM-based SCI in support of our initial hypothesis. | Reject | The paper contains *fresh* new ideas connecting mental models and SCMs and providing interpretations (explanations) from DAG models learned from data, including those learned by using deep learning. The usefulness of the theory is illustrated with experiments. The paper contributes some theoretical results, but the presentation has serious issues. In general, the reviewers found the paper hard to follow due to a lack of clarity in some notations, definitions, and assumptions.
The paper was discussed in-depth and at length, including the reviewers, the AC, and the senior AC. After all, the gap between the current writing and what is expected from the camera-ready is a bit too large, and we feel it could be a disservice to the authors and community to have the paper accepted in its current form, without passing through another round of reviews. Unfortunately, we do not have any version of "conditional acceptance."
Having said that, we feel the paper has the potential for having a significant impact, and we appreciate the novelty of the proposed approach and the connection among different fields. To avoid issues in the future, we would like to suggest the authors pay attention to the detailed feedback provided by the reviewers, including the discussion and the conversation with the AC, following the exchange on Nov/28. Some examples of points that could make the presentation clearer include 1) clarifying the contributions and providing more examples of the theoretical results, 2) making explicit that the results work for Markovian and additivity models, and 3) perhaps changing the title accordingly. | train | [
"cgDaudetjFF",
"-LxIrRWsGk",
"_FzY4KcrVJ3",
"3zO-AxaF_-A",
"02ScBGJOfM",
"LiugvW14dIB",
"wU4eKXZnHyI",
"im_6jGCXNl1",
"rOW2JfmPBp-",
"i0u2-qXeBJL",
"te5XWKjjxBw",
"1u_n1i8fSVb",
"4_rpnax45KR",
"3ZRe81EJDsV",
"0qmIHRHQGQ0",
"5ZQEzJ7Wh19",
"HSOpN_NM3MV",
"tUFjhspjcu",
"i2QmdywhVfY"... | [
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" PART 3:\n\n* > $\\color{orange}AC:$ \"I am not sure if Theorem 1 and Hypothesis 1 are the major contributions of the paper. The statements made are not supported by mathematical formality — namely “having access to many SCM-encodings of subjective mental models can ultimately lead in their overlap-agreement to (p... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"_FzY4KcrVJ3",
"3zO-AxaF_-A",
"-LxIrRWsGk",
"iclr_2022_6P6-N1gLQDC",
"cqf8azvh0NN",
"iclr_2022_6P6-N1gLQDC",
"te5XWKjjxBw",
"HSOpN_NM3MV",
"4_rpnax45KR",
"iclr_2022_6P6-N1gLQDC",
"1u_n1i8fSVb",
"0qmIHRHQGQ0",
"i2QmdywhVfY",
"i0u2-qXeBJL",
"5ZQEzJ7Wh19",
"3ZRe81EJDsV",
"tUFjhspjcu",
... |
iclr_2022_YtdASzotUEW | Label Smoothed Embedding Hypothesis for Out-of-Distribution Detection | Detecting out-of-distribution (OOD) examples is critical in many applications. We propose an unsupervised method to detect OOD samples using a $k$-NN density estimate with respect to a classification model's intermediate activations on in-distribution samples. We leverage a recent insight about label smoothing, which we call the {\it Label Smoothed Embedding Hypothesis}, and show that one of the implications is that the $k$-NN density estimator performs better as an OOD detection method both theoretically and empirically when the model is trained with label smoothing. Finally, we show that our proposal outperforms many OOD baselines and we also provide new finite-sample high-probability statistical results for $k$-NN density estimation's ability to detect OOD examples. | Reject | The problem considered in this paper is of general interest to all reviewers. However, while the reviewers in general appreciate the authors’ effort in providing theoretical analysis for a seemingly effective algorithm, they are unconvinced that the key technical claims are well justified (i.e. separation between theoretical analysis and the algorithm, which ultimately relies on the OOD score), the propositions are clear (e.g., key claims in the quality of kNN density estimator as an OOD detector not well supported by analysis/experiments), or that the experimental results are sufficiently compelling (e.g., lack of controlled experiments/ ablation study) to merit acceptance for the proposed solution. | train | [
"JtCWpsodKHn",
"Kp8jSMjS890",
"yaAVGzmEKZ1",
"YY_FgKGN_dD"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a method for Out-Of-Distribution (OOD) detection that relies on the k-nn density estimate in the feature space of a classifier model. Given a data example, the idea is to compute an OOD score based on the k-nn radius w.r.t to the training observations at every hidden layer. The mean of the scor... | [
6,
3,
5,
5
] | [
4,
4,
3,
3
] | [
"iclr_2022_YtdASzotUEW",
"iclr_2022_YtdASzotUEW",
"iclr_2022_YtdASzotUEW",
"iclr_2022_YtdASzotUEW"
] |
iclr_2022_AsyICRrQ7Lp | Bootstrapped Hindsight Experience replay with Counterintuitive Prioritization | Goal-conditioned environments are known as sparse rewards tasks, in which the agent gains a positive reward only when it achieves the goal. Such a setting results in much difficulty for the agent to explore successful trajectories. Hindsight experience replay (HER) replaces the goal in failed experiences with any practically achieved one, so that the agent has a much higher chance to see successful trajectories even if they are fake. Comprehensive results have demonstrated the effectiveness of HER in the literature. However, the importance of the fake trajectories differs in terms of exploration and exploitation, and it is usually inefficient to learn with a fixed proportion of fake and original data as HER did. In this paper, inspired by Bootstrapped DQN, we use multiple heads in DDPG and take advantage of the diversity and uncertainty among multiple heads to improve the data efficiency with relabeled goals. The method is referred to as Bootstrapped HER (BHER). Specifically, in addition to the benefit from the Bootstrapped version, we explicitly leverage the uncertainty measured by the variance of estimated Q-values from multiple heads. It is common knowledge that higher uncertainty will promote exploration and hence maximizing the uncertainty via a bonus term will induce better performance in Q-learning. However, in this paper, we reveal a counterintuitive conclusion that for hindsight experiences, exploiting lower uncertainty data samples will significantly improve the performance. The explanation behind this fact is that hindsight relabeling largely promotes exploration, and then exploiting lower uncertainty data (whose goals are generated by hindsight relabeling) provides a good trade-off between exploration and exploitation, resulting in further improved data efficiency.
Comprehensive experiments demonstrate that our method can achieve state-of-the-art results in many goal-conditioned tasks. | Reject | I thank the authors for their submission and active participation in the discussion. The reviewers unanimously agree that this submission has significant issues, including comparison to baselines/ablations [BnLV,yX9d,PtA1], clarity [BnLV], justification of the method [nX4W]. Thus, I am recommending rejection of this paper. | train | [
"tUPED9JTAJ_",
"M943SSsby_h",
"zbjnx1Ep9Mf",
"YKZoQ6weg3",
"gKL58XEwmYB",
"O4H3NQNuVU",
"c7Wsr0SMzSe",
"F58a-H71GPy",
"S-HRIhhs_bx",
"rjZZcGDnWo",
"K82-YpI-cUI",
"dADDIHaVC45",
"EvVZunzD-S5"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"# Summary & Contributions\n* The authors call attention to sparse-reward, goal-based tasks where Hindsight Experience Replay, through its provision of relabeled goals for otherwise failed experiences facilitates efficient learning.\n* To further improve upon HER, the paper focuses on two algorithmic innovations: (... | [
1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2
] | [
"iclr_2022_AsyICRrQ7Lp",
"iclr_2022_AsyICRrQ7Lp",
"YKZoQ6weg3",
"O4H3NQNuVU",
"F58a-H71GPy",
"c7Wsr0SMzSe",
"K82-YpI-cUI",
"dADDIHaVC45",
"tUPED9JTAJ_",
"EvVZunzD-S5",
"M943SSsby_h",
"iclr_2022_AsyICRrQ7Lp",
"iclr_2022_AsyICRrQ7Lp"
] |
iclr_2022_gLqnSGXVJ6l | Neural Combinatorial Optimization with Reinforcement Learning: Solving the Vehicle Routing Problem with Time Windows | In contrast to the classical techniques for solving combinatorial optimization problems, recent advancements in reinforcement learning yield the potential to independently learn heuristics without any human intervention. In this context, the current paper aims to present a complete framework for solving the vehicle routing problem with time windows (VRPTW) relying on neural networks and reinforcement learning. Our approach is mainly based on an attention model (AM) that predicts the near-optimal distribution over different problem instances. To optimize its parameters, this model is trained in a reinforcement learning (RL) environment using a stochastic policy gradient and through a real-time evaluation of the reward, a quantity designed to meet the problem's business and logical constraints. Using synthetic data, the proposed model outperforms some existing baselines. This performance comparison was based on the solution quality (total tour length) and the computation time (inference time) for small and medium-sized samples. | Reject | The reviewers unanimously think the paper lacks novelty, its contributions are quite limited, and it is not ready for publication. | train | [
"ne_cgHF5lsB",
"8qXgMcMRhHw",
"rwLfZ8CzFd",
"r9bBYgy2qOq"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes to solve a vehicle routing problem with time windows using a neural network and reinforcement learning framework. An attention-based encoder-decoder model is used to predict the distribution over problem instances while satisfying the problem constraints. Then an RL framework is trained to optimiz... | [
3,
1,
3,
3
] | [
4,
4,
4,
5
] | [
"iclr_2022_gLqnSGXVJ6l",
"iclr_2022_gLqnSGXVJ6l",
"iclr_2022_gLqnSGXVJ6l",
"iclr_2022_gLqnSGXVJ6l"
] |
iclr_2022_CpgtwW8GBxe | Label Refining: a semi-supervised method to extract voice characteristics without ground truth | A characteristic is a distinctive trait shared by a group of observations which may be used to identify them. In the context of voice casting for audiovisual productions, characteristic extraction has an important role since it can help explain the decisions of a voice recommendation system, or give modalities to the user with the aim to express a voice search request. Unfortunately, the lack of a standard taxonomy to describe comedian voices prevents the implementation of an annotation protocol. To address this problem, we propose a new semi-supervised learning method entitled Label Refining that consists in extracting refined labels (e.g. vocal characteristics) from known initial labels (e.g. character played in a recording). Our proposed method first suggests using a representation extractor based on the initial labels, then computing refined labels using a clustering algorithm to finally train a refined representation extractor. The method is validated by applying Label Refining on recordings from the video game MassEffect 3. Experiments show that, using a subsidiary corpus, it is possible to bring out interesting voice characteristics without any a priori knowledge. | Reject | This paper investigates a semi-supervised label refining approach to searching for similar voices for voice-dubbing. The approach is based on generating refined labels using a clustering algorithm on the initial labels. Therefore, better voice characteristics can be extracted and used to select a new voice in the target language that closely matches the voice characteristics of the source language. Experiments are carried out on MassEffect as the main dataset and Skyrim as the second dataset, and results show that the proposed approach slightly outperforms state of the art.
While the topic under investigation is interesting and has value for applications such as voice casting, there are strong concerns raised by the reviewers. Reviewers find the paper difficult to follow. Some important pieces of information are either missing or only vaguely explained (e.g. non-expert initial labels, clear interpretation of p-vectors, etc.), which greatly hinders a deep understanding of the work. Some technical details such as the network architecture and its training should be elaborated. This paper needs considerable improvement in order to be accepted. No rebuttal was provided by the authors, so all these concerns still stand. | val | [
"GOQ3JsOPed",
"2upBKtZ0CX8",
"sUS-gQhvhpm",
"_-vMCp9tnCL",
"bi9w-rZGCs7"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a label refinement approach. Starting from an initial set of labels, k-means clustering is used to refine the labels. The approach is described in the context of dubbing/voice casting. The method tries to obtain voice characteristics which can be further used for dubbing/voice-casting. Experimen... | [
3,
3,
5,
3,
3
] | [
3,
3,
3,
3,
4
] | [
"iclr_2022_CpgtwW8GBxe",
"iclr_2022_CpgtwW8GBxe",
"iclr_2022_CpgtwW8GBxe",
"iclr_2022_CpgtwW8GBxe",
"iclr_2022_CpgtwW8GBxe"
] |
iclr_2022__DqUHcsQfaE | Inference-Time Personalized Federated Learning | In Federated learning (FL), multiple clients collaborate to learn a model through a central server but keep the data decentralized. Personalized federated learning (PFL) further extends FL to handle data heterogeneity between clients by learning personalized models. In both FL and PFL, all clients participate in the training process and their labeled data is used for training. However, in reality, novel clients may wish to join a prediction service after it has been deployed, obtaining predictions for their own unlabeled data.
Here, we define a new learning setup, Inference-Time PFL (IT-PFL), where a model trained on a set of clients needs to be later evaluated on novel unlabeled clients at inference time. We propose a novel approach to this problem, IT-PFL-HN, based on a hypernetwork module and an encoder module. Specifically, we train an encoder network that learns a representation for a client given its unlabeled data. That client representation is fed to a hypernetwork that generates a personalized model for that client. Evaluated on four benchmark datasets, we find that IT-PFL-HN generalizes better than current FL and PFL methods, especially when the novel client has a large domain shift. We also analyze the generalization error for the novel client, showing how it can be bounded using results from multi-task learning and domain adaptation. Finally, since novel clients do not contribute their data to training, they can potentially have better control over their data privacy; indeed, we show analytically and experimentally how novel clients can apply differential privacy to their data. | Reject | This paper proposes a personalized federated learning method using a hyper-network to encode unlabeled data from new clients. At inference time, new clients can use unlabeled data as input to this hyper-network in order to obtain a personalized version of the model. The key strength of the paper is that the idea is interesting and timely. Personalization has been studied for clients that participate from the beginning of training, but personalization of models for new clients that join later on has not been considered in most previous works. The experimental results also show a reasonable improvement over the baselines. However, the following concerns remain:
1) Novelty in comparison with reference [1]. Please add a detailed comparison when you revise the paper.
2) Explanation of the experimental results and comparison with baselines was deemed insufficient by some of the reviewers.
3) The generalization bound and the DP results seem standard extensions of existing works and do not add much novelty to the paper.
There wasn't much post-rebuttal discussion and the reviewers decided to stick to their original scores. Therefore, I recommend rejection of the paper. I hope that the authors will take the reviewers' constructive comments into account when revising the paper for a future resubmission. | train | [
"j5EBRBSiO1G",
"XVjDVZ_4GEa",
"ovyBOpXRGP",
"qPkxRFScxeX"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes to use a hypernetwork to personalize the federated model by encoding new user data and using this embedding as a parametrization argument. The results demonstrate significant average improvement for the new clients. Furthermore, the authors evaluate the possible use of DP to encode the user embe... | [
5,
5,
3,
5
] | [
4,
3,
4,
4
] | [
"iclr_2022__DqUHcsQfaE",
"iclr_2022__DqUHcsQfaE",
"iclr_2022__DqUHcsQfaE",
"iclr_2022__DqUHcsQfaE"
] |
iclr_2022_eV5d4I3eso | Geometric Random Walk Graph Neural Networks via Implicit Layers | Graph neural networks have recently attracted a lot of attention and have been applied with great success to several important graph problems. The Random Walk Graph Neural Network model was recently proposed as a more intuitive alternative to the well-studied family of message passing neural networks. This model compares each input graph against a set of latent ``hidden graphs'' using a kernel that counts common random walks up to some length. In this paper, we propose a new architecture, called Geometric Random Walk Graph Neural Network (GRWNN), that generalizes the above model such that it can count common walks of infinite length in two graphs. The proposed model retains the transparency of Random Walk Graph Neural Networks since its first layer also consists of a number of trainable ``hidden graphs'' which are compared against the input graphs using the geometric random walk kernel. To compute the kernel, we employ a fixed-point iteration approach involving implicitly defined operations. Then, we capitalize on implicit differentiation to derive an efficient training scheme which requires only constant memory, regardless of the number of fixed-point iterations. The employed random walk kernel is differentiable, and therefore, the proposed model is end-to-end trainable. Experiments on standard graph classification datasets demonstrate the effectiveness of the proposed approach in comparison with state-of-the-art methods. | Reject | This paper shows how to back-propagate through a kernel between graphs that counts common random walks of infinite length between the graphs. Reviewers tend to agree that the paper is well-written and the technical contributions are sound. However, there are concerns about the significance and novelty of the method relative to related work, alongside mixed experimental results. Overall that puts it as a very borderline paper. 
In the rebuttals, the authors argued for the significance of the contribution, but reviewers were generally unconvinced. | train | [
"xS4qdoWbbvC",
"ZEPG9FYGHlG",
"328NwimdHR",
"YDbzbm6xO65",
"QE0USMO3RQn",
"qgsgaPp4h_T",
"SNEvfatVy_",
"EzBb6qFpKYA",
"0Pw8h97bp2j",
"idt8YeINctm",
"T397_cOIfv9"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Having read the other reviews and the authors' responses, I am inclined to keep my score, even though I do think this paper is very much on the border of accept and reject. As I mentioned in my original review: I think the paper is well written and the idea is sound.\n\nHowever, I am still not convinced that the ... | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5
] | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"EzBb6qFpKYA",
"iclr_2022_eV5d4I3eso",
"SNEvfatVy_",
"QE0USMO3RQn",
"idt8YeINctm",
"T397_cOIfv9",
"ZEPG9FYGHlG",
"0Pw8h97bp2j",
"iclr_2022_eV5d4I3eso",
"iclr_2022_eV5d4I3eso",
"iclr_2022_eV5d4I3eso"
] |
iclr_2022_dmq_-R2LhQk | The Manifold Hypothesis for Gradient-Based Explanations | When are gradient-based explanations meaningful? We propose a necessary criterion: explanations need to be aligned with the tangent space of the data manifold. To test this hypothesis, we employ autoencoders to estimate and generate data manifolds. Across a range of different datasets -- MNIST, EMNIST, CIFAR10, X-ray pneumonia and Diabetic Retinopathy detection -- we demonstrate empirically that the more an explanation is aligned with the tangent space of the data, the more interpretable it tends to be. In particular, popular post-hoc explanation methods such as Integrated Gradients and SmoothGrad tend to align their results with the data manifold. The same is true for the outcome of adversarial training, which has been claimed to lead to more interpretable explanations. Empirically, alignment with the data manifold happens early during training, and to some degree even when training with random labels. However, we theoretically prove that good generalization of neural networks does not imply good or bad alignment of model gradients with the data manifold. This leads to a number of interesting follow-up questions regarding gradient-based explanations.
 | Reject | This paper studies the hypothesis that gradient-based explanations are more meaningful the more they are aligned with the tangent space of the data manifold. The reviews are negative overall. The general feeling is that the paper reads like a set of subjective observations about the meaningfulness of explanations and their relationship with the data manifold, plus tangential theory. There isn’t a coherent story. | val | [
"ZTwRmpFYoR1",
"AQRQaNtWcda",
"jZOV5Zn9c9V",
"U1k6YQTu9R",
"e4WxInWK_yu"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewers for providing detailed reviews and comments on our paper. \n\nThe reviews note that we do not quantitatively evaluate the \"meaningfulness” of explanations to humans. While we believe that our paper contains empirical evidence in support of our hypothesis, we recognize that quantification ... | [
-1,
3,
3,
3,
6
] | [
-1,
5,
4,
5,
3
] | [
"iclr_2022_dmq_-R2LhQk",
"iclr_2022_dmq_-R2LhQk",
"iclr_2022_dmq_-R2LhQk",
"iclr_2022_dmq_-R2LhQk",
"iclr_2022_dmq_-R2LhQk"
] |
iclr_2022_f9AIc3mEprf | What classifiers know what they don't know? | Being uncertain when facing the unknown is key to intelligent decision making. However, machine learning algorithms lack reliable estimates about their predictive uncertainty. This leads to wrong and overly-confident decisions when encountering classes unseen during training. Despite the importance of equipping classifiers with uncertainty estimates ready for the real world, prior work has focused on small datasets and little or no class discrepancy between training and testing data. To close this gap, we introduce UIMNET: a realistic, ImageNet-scale test-bed to evaluate predictive uncertainty estimates for deep image classifiers. Our benchmark provides implementations of eight state-of-the-art algorithms, six uncertainty measures, four in-domain metrics, three out-domain metrics, and a fully automated pipeline to train, calibrate, ensemble, select, and evaluate models. Our test-bed is open-source and all of our results are reproducible from a fixed commit in our repository. Adding new datasets, algorithms, measures, or metrics is a matter of a few lines of code, in the hope that UIMNET becomes a stepping stone towards realistic, rigorous, and reproducible research in uncertainty estimation. Our results show that ensembles of ERM classifiers as well as single MIMO classifiers are the two best alternatives currently available to measure uncertainty about both in-domain and out-domain classes. | Reject | This paper introduces an ImageNet-scale benchmark UIMNET for uncertainty estimation of deep image classifiers and evaluates prior works under the proposed benchmark. Two reviewers suggest rejection, and one reviewer suggests acceptance. In the discussion period, the authors did not provide any response to many of the reviewers' concerns, e.g., weak baselines, weak novelty, and lack of justification for the current design. Hence, given the current status, the AC recommends rejection. | test | [
"cbYdUryViMv",
"5u7dIsoVSbF",
"lkgCvRmbDP"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper introduces a benchmark UIMNET to evaluate predictive uncertainty estimates for deep image classifiers. The authors provide implementations of ten state-of-the-art algorithms and six uncertainty measures with four in-domain metrics and three out-domain metrics. For Strengths, the paper provides a solid test-bed... | [
6,
3,
5
] | [
4,
4,
4
] | [
"iclr_2022_f9AIc3mEprf",
"iclr_2022_f9AIc3mEprf",
"iclr_2022_f9AIc3mEprf"
] |
iclr_2022_0n1UvVzW99x | Synthetic Reduced Nearest Neighbor Model for Regression | Nearest neighbor models are among the most established and accurate approaches to machine learning. In this paper, we investigate Synthetic Reduced Nearest Neighbor (SRNN) as a novel approach to regression tasks. Existing prototype nearest neighbor models are initialized by training a k-means model over each class. However, such initialization is only applicable to classification tasks. In this work, we propose a novel initialization and expectation maximization approach for enabling the application of SRNN to regression. The proposed initialization approach is based on applying the k-means algorithm on the target responses of samples to create various clusters of targets. This is followed by learning several centroids in the input space for each cluster found over the targets. Essentially, the initialization consists of finding target clusters and running k-means in the space of feature vectors for the corresponding target cluster. The optimization procedure consists of applying an expectation maximization approach similar to the k-means algorithm that optimizes the centroids in the input space. This algorithm comprises two steps: (1) The assignment step, where assignments of the samples to each centroid are found and the target response (i.e., prediction) of each centroid is determined; and (2) the update/centroid step, where each centroid is updated such that the loss function of the entire model is minimized. We will show that the centroid step operates over all samples via solving a weighted binary classification. However, the centroid step is NP-hard and no surrogate objective function exists for solving this problem. Therefore, a new surrogate is proposed to approximate the solution for the centroid step. Furthermore, we consider the consistency of the model, and show that the model is consistent under mild assumptions.
The bias-variance relationship in this model is also discussed. We report the empirical evaluation of the proposed SRNN regression model in comparison to several state-of-the-art techniques. | Reject | An algorithm for learning a prototype-based nearest neighbor regression model is presented. This algorithm minimizes an MSE on training examples w.r.t. the prototype centers and the prototype outputs by block coordinate descent. The main contribution is the optimization algorithm finding the prototypes.
Major concerns in the reviews include missing mathematical rigor, poor description of the experiments, and unclear novelty. From my own reading I would like to add that the main theoretical contribution (Theorem 1) makes assumptions that are beyond any reasonable constraint, particularly since, as we have known for more than 40 years, such assumptions are superfluous for many, many other algorithms.
In summary, a clear reject. | train | [
"zQw3347AMH5",
"NW8mSU7hEi",
"gs3Q0fb-ZC",
"rzM3dBD3RHX",
"CuyV-z-Aalp",
"xfmQT77RGRZ",
"MLeYHEXo0s",
"FZvNfGSl1Qd",
"RJp4QfPGaVf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The main contribution of this submission is to adapt Synthetic Reduced Nearest Neighbor (SRNN) for regression tasks. Technical details and comparative studies are provided accordingly. [COMMENTS AFTER REBUTTAL]\n\nI have read the rebuttals from authors and reviews from other reviewers. However, I still think tha... | [
5,
-1,
5,
-1,
-1,
-1,
-1,
3,
3
] | [
4,
-1,
3,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2022_0n1UvVzW99x",
"xfmQT77RGRZ",
"iclr_2022_0n1UvVzW99x",
"zQw3347AMH5",
"RJp4QfPGaVf",
"FZvNfGSl1Qd",
"gs3Q0fb-ZC",
"iclr_2022_0n1UvVzW99x",
"iclr_2022_0n1UvVzW99x"
] |
iclr_2022_bq7smM1OJIX | Determining the Ethno-nationality of Writers Using Written English Text | Ethno-nationality is where nations are defined by a shared heritage, for instance, membership of a common language, nationality, religion, or an ethnic ancestry. The main goal of this research is to determine a person’s country-of-origin using English text written in less controlled environments, employing Machine Learning (ML) and Natural Language Processing (NLP) techniques. The current literature mainly focuses on determining the native language of English writers, and a minimal number of studies have been conducted on determining the country-of-origin of English writers.
Further, most experiments in the literature are mainly based on the TOEFL and ICLE datasets, which were collected in more controlled environments (i.e., standard exam answers). Hence, most of the writers try to follow some guidelines and patterns of writing. Consequently, the creativity, freedom of writing and the insights of writers could be hidden. Thus, we believe it hides the real nativism of the writers. Further, those corpora are not freely available as they involve high licensing costs. Thus, the main data corpus used for this research was the International Corpus of English (ICE corpus). Up to this point, none of the researchers have utilised the ICE corpus for the purpose of determining the writers’ country-of-origin, even though it has true potential.
For this research, an overall accuracy of 0.7636 for the flat classification (for all ten countries) and accuracies of 0.6224~1.000 for sub-categories were obtained. In addition, the best ML model obtained for the flat classification strategy is a linear SVM with an SGD optimizer trained on a word (1,1) uni-gram model. | Reject | This paper proposes to use longstanding statistical learning techniques to identify the nationality of the author of a text.
Reviewers agreed that this work is a poor fit for ICLR, as there is nothing here that advances our understanding of representation learning. Reviewers were further concerned about the soundness of the claims, raising issues about data contamination and comparison with prior work.
Finally, reviewers pointed out (correctly in my view) that work that aims to infer protected identity characteristics of non-user human subjects should be held to an especially high ethical standard, and needs a highly persuasive cost-benefit analysis that defends why the problem is ethical to study at all. The available discussion of ethics is not up to this standard. | test | [
"NPzh06HjWJ",
"ajUWelUdnjF",
"MrA0k7To6fD",
"Kj2bAjkiEK"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents a simple classification-based approach to a newly defined problem: determining ethno-nationality of writers of English text data. The problem of determining ethno-nationality can be seen as a direct extension of the research problem of identifying the writers' native language based on written E... | [
3,
3,
3,
3
] | [
4,
4,
4,
5
] | [
"iclr_2022_bq7smM1OJIX",
"iclr_2022_bq7smM1OJIX",
"iclr_2022_bq7smM1OJIX",
"iclr_2022_bq7smM1OJIX"
] |
iclr_2022_WN2Sup7qLdw | Multi-Resolution Continuous Normalizing Flows | Recent work has shown that Neural Ordinary Differential Equations (ODEs) can serve as generative models of images using the perspective of Continuous Normalizing Flows (CNFs). Such models offer exact likelihood calculation, and invertible generation/density estimation. In this work we introduce a Multi-Resolution variant of such models (MRCNF), by characterizing the conditional distribution over the additional information required to generate a fine image that is consistent with the coarse image. We introduce a transformation between resolutions that allows for no change in the log likelihood. We show that this approach yields comparable likelihood values for various image datasets, using orders of magnitude fewer parameters than the prior methods, in significantly less training time, using only one GPU. | Reject | ### Description
The paper enhances flow-based generative models by putting them into a coarse-to-fine multi-resolution framework. The key technical challenge, as I understand it, is designing up-scaling conditional flow modules. Since the operation needs to be invertible, the paper carefully designs what degrees of freedom need to be injected, in addition to the low-resolution image, to compose a higher-resolution one.
### Decision
The paper received 5 expert and rather detailed reviews. I have read and understood the paper and all reviews. Reviewers remark that the paper is well written and addresses a challenging problem. However, reviewers were in consensus that the contribution of the paper is marginal. The average score was 4.4. The authors did not respond to reviewers and did not update the paper. There was no post-rebuttal discussion or additional feedback from reviewers. Therefore, must reject.
### Comments
I have only minor comments on the writing and organization of the paper.
There are many self-repetitions in the text, restating what was already said above in the same or very similar sentences. Some questions studied in the appendices are not presented in the main paper. | train | [
"t9TLN3PPkpO",
"qnG0_vQfPjX",
"ViCLhHxnvzO",
"bfRJNJIszai",
"2pSMVzWT1Wq"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper describes the novel approach to generate high-resolution images in progressive manner. The approach is based on set of conditional flows that take the image generated from previous stage as conditioning factor and generate the image with the higher resolution. The quality of the method is compared to the... | [
3,
3,
5,
5,
6
] | [
3,
4,
4,
3,
4
] | [
"iclr_2022_WN2Sup7qLdw",
"iclr_2022_WN2Sup7qLdw",
"iclr_2022_WN2Sup7qLdw",
"iclr_2022_WN2Sup7qLdw",
"iclr_2022_WN2Sup7qLdw"
] |
iclr_2022_HdnUQk9jbUO | Linear Convergence of SGD on Overparametrized Shallow Neural Networks | Despite the non-convex landscape, first-order methods can be shown to reach global minima when training overparameterized neural networks, where the number of parameters far exceed the number of training data. In this work, we prove linear convergence of stochastic gradient descent when training a two-layer neural network with smooth activations. While the existing theory either requires a high degree of overparameterization or non-standard initialization and training strategies, e.g., training only a single layer, we show that a subquadratic scaling on the width is sufficient under standard initialization and training both layers simultaneously if the minibatch size is sufficiently large and it also grows with the number of training examples. Via the batch size, our results interpolate between the state-of-the-art subquadratic results for gradient descent and the quadratic results in the worst case. | Reject | This paper shows SGD enjoys linear convergence for shallow neural networks under certain assumptions. However, reviewers reach the consensus that this paper lacks technical novelty. The meta reviewer agrees and thus decides to reject the paper. | test | [
"NnZ1qPVjwoU",
"QiKVdKnz31",
"RX3OmCBMi5p",
"8Fooi2nhPKH",
"s9Or4jNyZ2M",
"SepM9uXXlYw",
"q-EtJ5jVZ61",
"Dxbf9dUhWva",
"0VCJy-1QgZ",
"pqnFVnjzwr5",
"zOn9fEadHd_"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper first analyzes the convergence of SGD under assumptions of PL condition on loss function and growth condition on stochastic gradients. Then, it considers a two-layer neural network with some special setting, and claims that this network with quadratic loss satisfies both PL condition and growth conditio... | [
1,
5,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
3
] | [
5,
4,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2022_HdnUQk9jbUO",
"iclr_2022_HdnUQk9jbUO",
"8Fooi2nhPKH",
"Dxbf9dUhWva",
"iclr_2022_HdnUQk9jbUO",
"QiKVdKnz31",
"s9Or4jNyZ2M",
"NnZ1qPVjwoU",
"NnZ1qPVjwoU",
"zOn9fEadHd_",
"iclr_2022_HdnUQk9jbUO"
] |
iclr_2022__j4hwbj6Opj | 3D Meta-Registration: Meta-learning 3D Point Cloud Registration Functions | Learning robust 3D point cloud registration functions with deep neural networks has emerged as a powerful paradigm in recent years, offering promising performance in producing spatial geometric transformations for each pair of 3D point clouds. However, 3D point cloud registration functions are often generalized from extensive training over a large volume of data to learn the ability to predict the desired geometric transformation to register 3D point clouds. Generalizing across 3D point cloud registration functions requires robust learning of priors over the respective function space and enables consistent registration in presence of significant 3D structure variations. In this paper, we proposed to formalize the learning of a 3D point cloud registration function space as a meta-learning problem, aiming to predict a 3D registration model that can be quickly adapted to new point clouds with no or limited training data. Specifically, we define each task as the learning of the 3D registration function which takes points in 3D space as input and predicts the geometric transformation that aligns the source point cloud with the target one. Also, we introduce an auxiliary deep neural network named 3D registration meta-learner that is trained to predict the prior over the respective 3D registration function space. After training, the 3D registration meta-learner, which is trained with the distribution of 3D registration function space, is able to uniquely parameterize the 3D registration function with optimal initialization to rapidly adapt to new registration tasks. We tested our model on the synthesized dataset ModelNet and FlyingThings3D, as well as real-world dataset KITTI. Experimental results demonstrate that 3D Meta-Registration achieves superior performance over other previous techniques (e.g. FlowNet3D). 
| Reject | This paper applies a meta-learning strategy to point cloud registration, which refines 3D registration networks to improve performance on specific datasets/settings. Reviews for this paper recognized its potential interest but uniformly highlighted that the work is lacking in polish---both from an expository perspective and in terms of experiments. Questions included whether the experiments truly support the claim of generalization, and whether the work would be better considered as a method for scene flow. The authors did not rebut these points, so I am recommending rejection. | train | [
"y4PrvZCDKnX",
"3wcMPcFvPWE",
"nEhj5HM_aiz",
"8xjbYx53uGK"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper addresses point cloud registration from a meta-learning perspective to quickly adapt with limited training data. The main idea is using a meta-learner is to initialise a 3D registration learner. The meta-learner predicts a prior registration that can rapidly adapt to new registration problems. Experimen... | [
3,
3,
3,
5
] | [
4,
4,
4,
4
] | [
"iclr_2022__j4hwbj6Opj",
"iclr_2022__j4hwbj6Opj",
"iclr_2022__j4hwbj6Opj",
"iclr_2022__j4hwbj6Opj"
] |
iclr_2022_Ab0o8YMJ8a | Automated Channel Pruning with Learned Importance | Neural network pruning allows for significant reduction of model size and latency. However, most of the current network pruning methods do not consider channel interdependencies and a lot of manual adjustments are required before they can be applied to new network architectures. Moreover, these algorithms are often based on hand-picked, sometimes complicated heuristics and can require thousands of GPU computation hours. In this paper, we introduce a simple neural network pruning and fine-tuning framework that requires no manual heuristics, is highly efficient to train (2-6 times speed up compared to NAS-based competitors) and produces comparable performance. The framework contains 1) an automatic channel detection algorithm that groups the interdependent blocks of channels; 2) a non-iterative pruning algorithm that learns channel importance directly from feature maps while masking the coupled computational blocks using Gumbel-Softmax sampling and 3) a hierarchical knowledge distillation approach to fine-tune the pruned neural networks. We validate our pipeline on ImageNet classification, human segmentation and image denoising, creating lightweight and low latency models, easy to deploy on mobile devices. Using our pruning algorithm and hierarchical knowledge distillation for fine-tuning we are able to prune EfficientNet B0, EfficientNetV2 B0 and MobileNetV2 to 75% of their original FLOPs with no loss of accuracy on ImageNet. We release a set pruned backbones as Keras models - all of them proved beneficial when deployed in other projects. | Reject | A method for pruning neural networks is proposed. Reviewers raised several concerns, including poor technical presentation and insufficient experimental validation with respect to both baseline methods and ablation studies. All reviewer ratings lean toward reject and the authors did not provide a response. | train | [
"3c_lCEPPD-r",
"wisNMfjLjd8",
"3d6y_UYyNZ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a hierarchical knowledge distillation method for neural network pruning. Experiments demonstrate that the whole pruning pipeline requires much less computational resources than some of the state-of-the-art NAS based solutions for finding efficient FLOPs / accuracy trade-offs. 1) The research ab... | [
5,
3,
5
] | [
3,
3,
5
] | [
"iclr_2022_Ab0o8YMJ8a",
"iclr_2022_Ab0o8YMJ8a",
"iclr_2022_Ab0o8YMJ8a"
] |
iclr_2022_s51gCxF70pq | Learning Temporally-Consistent Representations for Data-Efficient Reinforcement Learning | Deep reinforcement learning (RL) agents that exist in high-dimensional state spaces, such as those composed of images, have interconnected learning burdens. Agents must learn an action-selection policy that completes their given task, which requires them to learn a representation of the state space that discerns between useful and useless information. The reward function is the only supervised feedback that RL agents receive, which causes a representation learning bottleneck that can manifest in poor sample efficiency. We present $k$-Step Latent (KSL), a new representation learning method that enforces temporal consistency of representations via a self-supervised auxiliary task wherein agents learn to recurrently predict action-conditioned representations of the state space. The state encoder learned by KSL produces low-dimensional representations that make optimization of the RL task more sample efficient. Altogether, KSL produces state-of-the-art results in both data efficiency and asymptotic performance in the popular PlaNet benchmark suite. Our analyses show that KSL produces encoders that generalize better to new tasks unseen during training, and its representations are more strongly tied to reward, are more invariant to perturbations in the state space, and move more smoothly through the temporal axis of the RL problem than other methods such as DrQ, RAD, CURL, and SAC-AE. | Reject | This paper presents a reinforcement learning architecture that uses an auxiliary k-step loss in the context of continuous control from image-based states.
While the topic is relevant and potentially impactful, several reviewers have major concerns about the manuscript. Among these, I highlight:
- Reviewers J6YX, 38iT and Qru8 have concerns about the novelty and contribution of the approach compared to existing literature.
- Reviewers J6YX, TKuY, 38iT and Qru8 have concerns about the experimental evaluation and the quality of comparisons to baselines.
Overall, it seems that the paper would benefit from further polishing. | train | [
"VIdkt2CXjY",
"Zm3sc1-Nc4",
"utb8pQkkI5-",
"H6FsmIUhRQP",
"bh5dahuNEHD",
"8GbrIxj5Ovh",
"kBhysgOm7i1",
"cAD_Wo6r5s",
"cTLf0waB7SN",
"hPFzN9KfUb1"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank Reviewer 38iT for their time and suggestions. \n\n- $\\textbf{Clarity of writing:}$ To clarify, the auxiliary loss attempts to tie together latent representations over the time axis. Also, we will try to change the notation in 3.2 to make the description more clear, per Reviewer 38iT's suggestion. \n\n-... | [
-1,
-1,
-1,
-1,
-1,
3,
6,
5,
3,
5
] | [
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
4,
4
] | [
"cTLf0waB7SN",
"8GbrIxj5Ovh",
"cAD_Wo6r5s",
"hPFzN9KfUb1",
"kBhysgOm7i1",
"iclr_2022_s51gCxF70pq",
"iclr_2022_s51gCxF70pq",
"iclr_2022_s51gCxF70pq",
"iclr_2022_s51gCxF70pq",
"iclr_2022_s51gCxF70pq"
] |
iclr_2022_SCn0mgEIwh | Learnability and Expressiveness in Self-Supervised Learning | In this work, we argue that representations induced by self-supervised learning (SSL) methods should both be expressive and learnable. To measure expressiveness, we propose to use the Intrinsic Dimension (ID) of the dataset in representation space. Inspired by the human study of Laina et al. (2020), we introduce Cluster Learnability (CL), defined in terms of the learning speed of a KNN classifier trained to predict K-means cluster labels for held-out representations. By collecting 30 state-of-art checkpoints, both supervised and self-supervised, using different architectures, we show that ID and CL can be combined to predict downstream classification performance better than the existing techniques based on contrastive losses or pretext tasks, while having no requirements on data augmentation, model architecture or human labels. To further demonstrate the utility of our framework, we propose modifying DeepCluster (Caron et al., 2018) to improve the learnability of the representations. Using our modification, we are able to outperform DeepCluster on both STL10 and ImageNet benchmarks. The performance of the intermediate checkpoints can also be well predicted under our framework, suggesting the possibility of developing new SSL algorithms without labels. | Reject | The problem studied in this paper is interesting and the high-level motivation of the proposed research is reasonable. However, as pointed out by reviewers, it is not convincing that the developed components in the proposed method are able to address the issues mentioned in the high-level motivation. Furthermore, the experimental results are not convincing to verify the motivations either. Though the authors provided some clarifications in the rebuttal, reviewers' major concerns still remain.
The authors are encouraged to take reviewers' concerns into consideration and revise the proposed method to make it a stronger work for future submission. In its current form, this work is not ready for publication at ICLR. | test | [
"7z4e3eymGvO",
"C1zCzTdpmu5",
"BW4vciARsBo",
"jKg1MFn3T6u",
"VxYVmakdy1f",
"FeADgTnQyu",
"zMbmnHmsQr",
"CYjaUj3lV7q",
"i9f5F8W3QGH",
"f8mM_pOY3NR",
"zY3MR85BHLV",
"YygPC2Lx30",
"rC2uJkTK4No",
"AIW19gEgm2v",
"MOSY2ObhTEF"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I agree with the motivation of this work, developing architecture-agnostic solutions to rank self-supervised methods is a very interesting research direction. However, the proposed solution of ID/CL framework does not properly justify this. \n\nHaving the similar concerns from Reviewer Brvt, it's not clear for me... | [
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5
] | [
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"CYjaUj3lV7q",
"BW4vciARsBo",
"jKg1MFn3T6u",
"FeADgTnQyu",
"iclr_2022_SCn0mgEIwh",
"zY3MR85BHLV",
"YygPC2Lx30",
"AIW19gEgm2v",
"rC2uJkTK4No",
"MOSY2ObhTEF",
"VxYVmakdy1f",
"iclr_2022_SCn0mgEIwh",
"iclr_2022_SCn0mgEIwh",
"iclr_2022_SCn0mgEIwh",
"iclr_2022_SCn0mgEIwh"
] |
iclr_2022_bUAdXW8wN6 | Domain Invariant Adversarial Learning | The phenomenon of adversarial examples illustrates one of the most basic vulnerabilities of deep neural networks. Among the variety of techniques introduced to surmount this inherent weakness, adversarial training has emerged as the most effective strategy to achieve robustness. Typically, this is achieved by balancing robust and natural objectives. In this work, we aim to further reduce the trade-off between robust and standard accuracy by enforcing a domain-invariant feature representation. We present a new adversarial training method, Domain Invariant Adversarial Learning (DIAL), which learns a feature representation which is both robust and domain invariant. DIAL uses a variant of Domain Adversarial Neural Network (DANN) on the natural domain and its corresponding adversarial domain. In a case where the source domain consists of natural examples and the target domain is the adversarially perturbed examples, our method learns a feature representation constrained not to discriminate between the natural and adversarial examples, and can therefore achieve a more robust representation. Our experiments indicate that our method improves both robustness and standard accuracy, when compared to other state-of-the-art adversarial training methods. | Reject | The paper describes an adversarial training approach that, in addition to the commonly used robustness loss, requires the network to extract similar representation distributions for clean and attacked data. The proposed method is inspired by domain adaptation approaches that require a model to extract domain invariant/agnostic features from two domains. Although the experimental results are solid and technically sound, the novelty of the methodology is limited, as the domain classifier and the gradient reversal layer are the same as those used in domain adaptation methods such as "unsupervised domain adaptation by backpropagation". 
On the other hand, comparisons with more recent SOTA methods are missing and only smaller-scale datasets are used for evaluation. During the discussions, the major concern raised by three reviewers was novelty.
I fully agree that the simplicity of the method should be a virtue. However, the idea of domain-invariant representation learning is already well established, and its application to adversarial training is quite intuitive to the community. Also, similar methodology already exists in domain adaptation. In the top-tier conference culture of the ML community, what is most valued is novelty and insight, not performance. In the end, I think that this paper may not be ready for publication at ICLR, but the next version could be a strong paper. | test | [
"oeb72J8t6lR",
"IM9JqA-a36v",
"r0jhiz72auZ",
"s8dHN3zYlOD",
"o_1OQQGOWo7",
"42BgPqmM6CS",
"rY3y1Egj8E_",
"xBM7wYz9E1A",
"NRQNZ5EbpKL",
"D6muCVpdaDB",
"NHSw9GTLZc1",
"IetysIbv4sT",
"6KVt-ZCFPtF"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer 8wyr,\n\nWe wish to thank you again for your initial review. As the discussion phase end tomorrow, we would be grateful if you could check our response and let us know if it addressed your concerns. We would be eager to answer any other concerns you might have.\n\nThank you,\n\n-- Authors",
" Dear... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
3,
3,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
4
] | [
"NHSw9GTLZc1",
"6KVt-ZCFPtF",
"IetysIbv4sT",
"NRQNZ5EbpKL",
"D6muCVpdaDB",
"IetysIbv4sT",
"6KVt-ZCFPtF",
"NHSw9GTLZc1",
"iclr_2022_bUAdXW8wN6",
"iclr_2022_bUAdXW8wN6",
"iclr_2022_bUAdXW8wN6",
"iclr_2022_bUAdXW8wN6",
"iclr_2022_bUAdXW8wN6"
] |
iclr_2022_6p8D4V_Wmyp | RainNet: A Large-Scale Imagery Dataset for Spatial Precipitation Downscaling | Contemporary deep learning frameworks have been applied to solve meteorological problems (\emph{e.g.}, front detection, synthetic radar generation, precipitation nowcasting, \emph{e.t.c.}) and have achieved highly promising results. Spatial precipitation downscaling is one of the most important meteorological problems. However, the lack of a well-organized and annotated large-scale dataset hinders the training and verification of more effective and advancing deep-learning models for precipitation downscaling. To alleviate these obstacles, we present the first large-scale spatial precipitation downscaling dataset named \emph{RainNet}, which contains more than $62,400$ pairs of high-quality low/high-resolution precipitation maps for over $17$ years, ready to help the evolution of deep models in precipitation downscaling. Specifically, the precipitation maps carefully collected in RainNet cover various meteorological phenomena (\emph{e.g.}, hurricane, squall, \emph{e.t.c}.), which is of great help to improve the model generalization ability. In addition, the map pairs in RainNet are organized in the form of image sequences ($720$ maps per month or 1 map/hour), showing complex physical properties, \emph{e.g.}, temporal misalignment, temporal sparse, and fluid properties. Two machine-learning-oriented metrics are specifically introduced to evaluate or verify the comprehensive performance of the trained model, (\emph{e.g.}, prediction maps reconstruction accuracy). To illustrate the applications of RainNet, 14 state-of-the-art models, including deep models and traditional approaches, are evaluated. To fully explore potential downscaling solutions, we propose an implicit physical estimation framework to learn the above characteristics. Extensive experiments demonstrate that the value of RainNet in training and evaluating downscaling models. 
| Reject | This paper proposes a new dataset, called RainNet, obtained from gridded precipitation data, for training precipitation downscaling methods, as well as a new neural network-based architecture for that task, which estimates the underlying dynamics of the local weather system, and new metrics for evaluating precipitation downscaling methods.
Reviewers praised the large, novel and useful dataset (D3tQ, szBD, ggKX) and novel metrics for evaluating statistical downscaling methods (D3tQ), along with evaluation on 14 baselines (szBD, ggKX).
There were however many issues highlighted by the reviewers. First, reviewer D3tQ raised concerns about the paper being resubmitted after rejection from NeurIPS (/pdf?id=VVZZJiQB51l), with minimal changes (/pdf?id=6p8D4V_Wmyp), and noticed that the authors did not follow up on most reviewer recommendations. D3tQ noticed however that in the ICLR resubmission, the cross validation results were presented to provide a more robust comparison between models, and that the discussion of metrics in section 4 was much more thorough than in the previous version.
Other themes in the negative reviews included concerns about missing standard errors in the cross-validation results (D3tQ, 5pVg) or measures of uncertainty in the upscaling (ggKX), lack of information about hyperparameter tuning (D3tQ), inadequate literature review about statistical downscaling (D3tQ), lack of information about the dataset (5pVg), missing discussion about applications (ggKX) and insufficient proofreading (D3tQ, 5pVg).
I will not take into consideration the criticism from szBD who "don't feel that ICLR is the right venue for this work", as I do not find such opinions to be very helpful.
The authors did not provide a rebuttal to the initial reviews and there was no discussion about this paper among the reviewers. Given the issues raised by the reviewers and the scores of 3, 3, 5 and 6, I believe that this paper does not meet the acceptance bar in its current form.
Sincerely,
AC | val | [
"KwoC8LrThp",
"ZcdWkYYedr6",
"__VhkPIxSfH",
"rllH4EXlkc3"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors present a novel, high resolution dataset for precipitation downscaling collected from observational data in the southeastern US. In addition to the dataset, several novel metrics for evaluating statistical downscaling methods are presented, as well as a novel deep learning algorithm based on video supe... | [
3,
5,
3,
6
] | [
5,
4,
5,
4
] | [
"iclr_2022_6p8D4V_Wmyp",
"iclr_2022_6p8D4V_Wmyp",
"iclr_2022_6p8D4V_Wmyp",
"iclr_2022_6p8D4V_Wmyp"
] |
iclr_2022__PlNmPOsUS9 | PARL: Enhancing Diversity of Ensemble Networks to Resist Adversarial Attacks via Pairwise Adversarially Robust Loss Function | The security of Deep Learning classifiers is a critical field of study because of the existence of adversarial attacks. Such attacks usually rely on the principle of transferability, where an adversarial example crafted on a surrogate classifier tends to mislead the target classifier trained on the same dataset even if both classifiers have quite different architecture. Ensemble methods against adversarial attacks demonstrate that an adversarial example is less likely to mislead multiple classifiers in an ensemble having diverse decision boundaries. However, recent ensemble methods have either been shown to be vulnerable to stronger adversaries or shown to lack an end-to-end evaluation. This paper attempts to develop a new ensemble methodology that constructs multiple diverse classifiers using a Pairwise Adversarially Robust Loss (PARL) function during the training procedure. PARL utilizes gradients of each layer with respect to input in every classifier within the ensemble simultaneously. The proposed training procedure enables PARL to achieve higher robustness with high clean example accuracy against black-box transfer attacks compared to the previous ensemble methods. We also evaluate the robustness in the presence of white-box attacks, where adversarial examples are crafted on the target classifier. We present extensive experiments using standard image classification datasets like CIFAR-10 and CIFAR-100 trained using standard ResNet20 classifier against state-of-the-art adversarial attacks to demonstrate the robustness of the proposed ensemble methodology. | Reject | The paper proposes a new method to train ensembles of classifiers that are robust to adversarial attacks, in particular black-box transfer-based ones. 
This is achieved by enforcing the outputs of early layers of different members of the ensemble to have, on average, gradients with low cosine similarity, which should in turn create different decision boundaries. For this, the authors design a specific loss function, PARL, to be minimized at training time. Two reviewers gave a score of 6 while two reviewers gave a score of 3. The main concerns are: 1) the unclear meaning of taking the sum of the gradients of different neurons, and why the similarity of that across models is a proxy for similarity of the decision boundaries; 2) lack of experiments, that is, omitting a simpler baseline like individual robust classifiers. The reviewers with positive scores also did not champion the paper; thus, the paper should address these main concerns well in a revision and cannot be accepted to ICLR for now. | train | [
"-aVwO4DU_aq",
"cl3r7nCtgcZ",
"PfpdInlME3g",
"1lm9b_Ok3J8"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This work develops an ensemble-based adversarial defense which diversifies the sub-models to obtain robustness. The idea is to force the gradients of several layers pointing towards different directions across the sub-models such that the adversarial transferability can be reduced. On CIFAR-10/CIFAR-100, the paper... | [
6,
6,
3,
3
] | [
5,
3,
5,
4
] | [
"iclr_2022__PlNmPOsUS9",
"iclr_2022__PlNmPOsUS9",
"iclr_2022__PlNmPOsUS9",
"iclr_2022__PlNmPOsUS9"
] |
iclr_2022_RNf9AgtRtL | Continuous Control With Ensemble Deep Deterministic Policy Gradients | The growth of deep reinforcement learning (RL) has brought multiple exciting tools and methods to the field. This rapid expansion makes it important to understand the interplay between individual elements of the RL toolbox. We approach this task from an empirical perspective by conducting a study in the continuous control setting. We present multiple insights of fundamental nature, including: a commonly used additive action noise is not required for effective exploration and can even hinder training; the performance of policies trained using existing methods varies significantly across training runs, epochs of training, and evaluation runs; the critics' initialization plays the major role in ensemble-based actor-critic exploration, while the training is mostly invariant to the actors' initialization; a strategy based on posterior sampling explores better than the approximated UCB combined with the weighted Bellman backup; the weighted Bellman backup alone cannot replace the clipped double Q-Learning. As a conclusion, we show how existing tools can be brought together in a novel way, giving rise to the Ensemble Deep Deterministic Policy Gradients (ED2) method, to yield state-of-the-art results on continuous control tasks from \mbox{OpenAI Gym MuJoCo}. From the practical side, ED2 is conceptually straightforward, easy to code, and does not require knowledge outside of the existing RL toolbox. | Reject | This paper studies improving continuous control. The paper suggests a practical, beneficial combination approach that does well in the presented experiments. It also provides some overview and comparison over several recent insights in RL. While both are valuable, multiple reviewers had concerns that the paper has some limitations on both. 
In particular, the proposed ensemble approach is quite simple though valuable, and reviewers generally felt this raised their expectations for the strength of the empirical results, which were not yet met. The reviewers provided a lot of detailed feedback that may be useful in revising the contribution. | train | [
"VhX9kO8aP3I",
"TC3wa9X00Ws",
"hbVitfmnm1u",
"Ue9AbLA4V-",
"0GKh9DKOyrr",
"fEBMFvsUA63",
"2sezUBWqiTh",
"zuzu3lDIfEb",
"yEGfq_upIz1",
"hvrZdg3s3WC",
"9JKCKvORJdM",
"KL2jvGzFgMO",
"pDiASP6k0H",
"2oYcEsHZsC"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents a deep reinforcement algorithm, Ensemble Deep Deterministic Policy Gradients (ED2), for continuous control tasks. The algorithm is empirically derived and is claimed to represent SotA performance on several tasks and while providing more stable results. These claims are justified based primaril... | [
5,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
6
] | [
4,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
4
] | [
"iclr_2022_RNf9AgtRtL",
"zuzu3lDIfEb",
"0GKh9DKOyrr",
"iclr_2022_RNf9AgtRtL",
"2oYcEsHZsC",
"2sezUBWqiTh",
"Ue9AbLA4V-",
"VhX9kO8aP3I",
"hvrZdg3s3WC",
"KL2jvGzFgMO",
"pDiASP6k0H",
"iclr_2022_RNf9AgtRtL",
"iclr_2022_RNf9AgtRtL",
"iclr_2022_RNf9AgtRtL"
] |
iclr_2022_lsQCDXjOl3k | Unconditional Diffusion Guidance | Classifier guidance is a recently introduced method to trade off mode coverage and sample fidelity in conditional diffusion models post training, in the same spirit as low temperature sampling or truncation in other types of generative models. Classifier guidance combines the score estimate of a diffusion model with the gradient of an image classifier and thereby requires training an image classifier separate from the diffusion model. It also raises the question of whether guidance can be performed without a classifier. We show that guidance can be indeed performed by a pure generative model without such a classifier: in what we call unconditional guidance, we jointly train a conditional and an unconditional diffusion model, and we combine the resulting conditional and unconditional score estimates to attain a trade-off between sample quality and diversity similar to that obtained using classifier guidance. | Reject | This paper modifies the conditional diffusion model guided by a classifier, as introduced by Dhariwal & Nichol 2021, by replacing the explicit classifier with an implicit classifier. This implicit classifier is derived under Bayes' rule and combined with the conditional diffusion model. This combination can be realized by mixing the score estimates of a conditional diffusion model and an unconditional diffusion model. A trade-off between sample quality and diversity, in terms of the IS and FID scores, can be achieved by adjusting the mixing weight. The paper is clearly written and easy to follow. However, the reviewers do not consider the modification to be that significant in practice, as it still requires label guidance and also increases the computational complexity. From the AC's perspective, the practical significance could be enhanced if the authors can generalize their technique beyond assisting conditional diffusion models. | test | [
"5K0JW1ZNVyS",
"VxAPN4zFsr8",
"QJV0Vc-AD8r",
"bqJsDWadjxL",
"Q7zQ4FTYVno",
"RCBjCujcXl",
"IzD8zWapjks",
"qvdqg9nfe7",
"08Lve9bP1O0",
"SYtQO-C3pEn",
"yJTBJ5z4Kj2"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"In this paper the authors propose an improvement for score-matching based\ngenerative modeling [1] resembling low temperature sampling as in GANs or\nflow-based models. Similarly to [2] they propose to modify the drift function\nused in the sampling step of the diffusion model by including the gradient of\nsome cl... | [
6,
-1,
5,
-1,
-1,
5,
-1,
-1,
-1,
-1,
5
] | [
4,
-1,
4,
-1,
-1,
3,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2022_lsQCDXjOl3k",
"SYtQO-C3pEn",
"iclr_2022_lsQCDXjOl3k",
"08Lve9bP1O0",
"qvdqg9nfe7",
"iclr_2022_lsQCDXjOl3k",
"RCBjCujcXl",
"yJTBJ5z4Kj2",
"QJV0Vc-AD8r",
"5K0JW1ZNVyS",
"iclr_2022_lsQCDXjOl3k"
] |
iclr_2022_AdEM_SzfSd | Assessing two novel distance-based loss functions for few-shot image classification | Few-shot learning is a challenging area of research which aims to learn new concepts with only a few labeled samples of data. Recent works based on metric-learning approaches benefit from the meta-learning process in which we have episodic tasks conformed by support set (training) and query set (test), and the objective is to learn a similarity comparison metric between those sets. Due to the lack of data, the learning process of the embedding network becomes an important part of the few-shot task. In this work, we propose two different loss functions which consider the importance of the embedding vectors by looking at the intra-class and inter-class distance between the few data. The first loss function is the Proto-Triplet Loss, which is based on the original triplet loss with the modifications needed to better work on few-shot scenarios. The second loss function is based on an inter and intra class nearest neighbors score, which help us to know the quality of embeddings obtained from the trained network. Extensive experimental results on the miniImagenNet benchmark increase the accuracy performance from other metric-based few-shot learning methods by a margin of $2\%$, demonstrating the capability of these loss functions to allow the network to generalize better to previously unseen classes. | Reject | This paper is proposed to address the few-shot image classification with the help of two newly designed losses. The first loss function is the Proto-Triplet Loss based on the revision of conventional triplet loss. Another loss function is based on an inter- and intra-class nearest neighbors score to estimate the quality of embeddings. The proposed method has shown its superiority over the baselines on the miniImagenNet benchmark. The major concern of this paper is the novelty that both proposed techniques are not new. 
Moreover, the baselines are not the SOTAs, and the evaluations on miniImagenNet only are not comprehensive enough. In addition, the authors have not provided any rebuttal to address the reviewers' concerns. | train | [
"AuzDAmuczN8",
"OqIT8V48igZ",
"9ko9bvoB_G3",
"uTspjTkKZP7"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes two losses for meta-learning-based few-shot learning. The first loss is a triplet loss where the positive and negative anchors are replaced by class prototypes (averages of class members from the train set of each episode). The second loss is ICNN proposed in Garcıa and Ramırez (2021) for a diff... | [
3,
3,
3,
3
] | [
4,
4,
5,
3
] | [
"iclr_2022_AdEM_SzfSd",
"iclr_2022_AdEM_SzfSd",
"iclr_2022_AdEM_SzfSd",
"iclr_2022_AdEM_SzfSd"
] |
iclr_2022_qHsuiKXkUb | High Precision Score-based Diffusion Models | Recent advances in diffusion models bring the state-of-the art performance on image generation tasks. However, the image generation is still an arduous task in high resolution, both theoretically and practically. From the theory side, the difficulty arises in estimating the high precision diffusion because the data score goes to $\infty$ as $t \rightarrow 0$ of the diffusion time. This paper resolves this difficulty by improving the previous diffusion models from three aspects. First, we propose an alternative parameterization for such unbounded data score, which theoretically enables the unbounded score estimation. Second, we provide a practical soft truncation method (ST-trick) to handle the extreme variation of the score scales. Third, we design a reciprocal variance exploding stochastic differential equation (RVESDE) to enable the sampling at the high precision of $t$. These three improvements are applicable to the variations of both NCSN and DDPM, and our improved versions are named as HNCSN and HDDPM, respectively. The experiments show that the improvements result in the state-of-the-art performances in the high resolution image generation, i.e. CelebA-HQ. Also, our ablation study empirically illustrates that all of alternative parameterization, ST-trick, and RVESDE contributes to the performance enhancement. | Reject | This paper analyzes the behavior of score function as $t \rightarrow$ 0 and proposes simple approaches to mitigate issues around estimating an unbounded data score. The reviewers have acknowledged the importance of the problem and its relevance to current efforts on denoising diffusion models. However, they have also raised serious concerns regarding the clarity of the presentation and missing information across different sections. Additionally, the unbounded data score is only shown on a toy example, and it is not clear if it exists in real-world datasets. 
Finally, e3pu pointed out that in the commonly used score parameterization from Song et al. the score parameterization can in fact grow to infinity when needed. Given these criticisms and without a response from the authors, we don't believe that the paper is ready for publication at ICLR. | train | [
"84wf22tPf4r",
"rEwC9z2Urhu",
"WyaqTLeJMVt",
"KtgwEXAInqK",
"d-u1GOCPBYc"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies the behaviour of training SDE-based score-matching generative models for time t close to 0. The paper identifies two issues with existing forward SDEs for training such models: 1) the score of the marginal distribution could be large, and 2) the expected squared loss could be large, draining the... | [
5,
5,
5,
5,
3
] | [
3,
3,
4,
3,
4
] | [
"iclr_2022_qHsuiKXkUb",
"iclr_2022_qHsuiKXkUb",
"iclr_2022_qHsuiKXkUb",
"iclr_2022_qHsuiKXkUb",
"iclr_2022_qHsuiKXkUb"
] |
iclr_2022_TfwF7pqwqdm | On the exploitative behavior of adversarial training against adversarial attacks | Adversarial attacks have been developed as intentionally designed perturbations added to the inputs in order to fool deep neural network classifiers. Adversarial training has been shown to be an effective approach to improving the robustness of the classifiers against such attacks especially in the white-box setting. In this work, we demonstrate that some geometric consequences of adversarial training on the decision boundary of deep networks give an edge to certain types of black-box attacks. In particular, we introduce a highly parallelizable black-box attack against the classifiers equipped with an $\ell_2$ norm similarity detector, which exploits the low mean curvature of the decision boundary. We use this black-box attack to demonstrate that adversarially-trained networks might be easier to fool in certain scenarios. Moreover, we define a metric called robustness gain to show that while adversarial training is an effective method to improve the robustness in the white-box attack setting, it may not provide such a good robustness gain against the more realistic decision-based black-box attacks. | Reject | The paper proposes a novel black-box attack aiming to fool a particular detector model. All reviewers see problems in the claims, the experiments etc and all argue for rejection. The authors did not provide a rebuttal to clarify any of the questions of the reviewers. Thus this is a clear reject. | train | [
"segCPZAAX3",
"ArHZXU0HA1",
"yijMrLyOfEb",
"fg_G9aLSNhj",
"rYNe6DhYraV"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"In this paper, the authors propose a new type of query-based black-box attack called multi-point attacks. This attack is designed to avoid being detected by KNN based detectors. Meanwhile, this paper also points that adversarially trained models have smooth boundary thus making it easier for attacks that could exp... | [
3,
5,
3,
5,
5
] | [
4,
3,
4,
4,
4
] | [
"iclr_2022_TfwF7pqwqdm",
"iclr_2022_TfwF7pqwqdm",
"iclr_2022_TfwF7pqwqdm",
"iclr_2022_TfwF7pqwqdm",
"iclr_2022_TfwF7pqwqdm"
] |
iclr_2022_3Li0OPkhQU | Provable Learning of Convolutional Neural Networks with Data Driven Features | Convolutional networks (CNN) are computationally hard to learn. In practice, however, CNNs are learned successfully on natural image data. In this work, we study a semi-supervised algorithm, that learns a linear classifier over data-dependent features which were obtained from unlabeled data. We show that the algorithm provably learns CNNs, under some natural distributional assumptions. Specifically, it efficiently learns CNNs, assuming the distribution of patches in the input images has low-dimensional structure (e.g., when the patches are sampled from a low-dimensional manifold). We complement our result with a lower bound, showing that the dependence of our algorithm on the dimension of the patch distribution is essentially optimal. | Reject | This paper proposes a new distributional assumption and a new algorithm for learning convolutional neural networks. However, the reviewers reach a consensus that this paper's assumptions are not natural and may not be satisfied in real-world domains. The meta reviewer agrees and thus decides to reject the paper. | train | [
"Ulz9ZmO_py",
"7nwOLGQok8F",
"dBMtHiH0Nd",
"x6sKUUhsuLg",
"v24RYO3aTa9"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your thoughtful reviews which should help us improve the quality of this work. We will address your main concerns in the next revision.",
"In this paper, the authors consider learning the function class of one-layer convolutional networks. While multiple results show that learning neural networks ... | [
-1,
5,
5,
3,
5
] | [
-1,
3,
3,
3,
5
] | [
"iclr_2022_3Li0OPkhQU",
"iclr_2022_3Li0OPkhQU",
"iclr_2022_3Li0OPkhQU",
"iclr_2022_3Li0OPkhQU",
"iclr_2022_3Li0OPkhQU"
] |
iclr_2022_LK8bvVSw6rn | How to measure deep uncertainty estimation performance and which models are naturally better at providing it | When deployed for risk-sensitive tasks, deep neural networks (DNNs) must be equipped with an uncertainty estimation mechanism. This paper studies the relationship between deep architectures and their training regimes with their corresponding uncertainty estimation performance. We consider both in-distribution uncertainties ("aleatoric" or "epistemic") and class-out-of-distribution ones. Moreover, we consider some of the most popular estimation performance metrics previously proposed including AUROC, ECE, AURC, and coverage for selective accuracy constraint. We present a novel and comprehensive study carried out by evaluating the uncertainty performance of 484 deep ImageNet classification models.
We identify numerous and previously unknown factors that affect uncertainty estimation and examine the relationships between the different metrics. We find that distillation-based training regimes consistently yield better uncertainty estimations than other training schemes such as vanilla training, pretraining on a larger dataset and adversarial training. We also provide strong empirical evidence showing that ViT is by far the most superior architecture in terms of uncertainty estimation performance, judging by any aspect, in both in-distribution and class-out-of-distribution scenarios. We learn various interesting facts along the way. Contrary to previous work, ECE does not necessarily worsen with an increase in the number of network parameters. Likewise, we discovered an unprecedented 99% top-1 selective accuracy at 47% coverage (and 95% top-1 accuracy at 80%) for a ViT model, whereas a competing EfficientNet-V2-XL cannot obtain these accuracy constraints at any level of coverage. | Reject | As an empirical paper, this paper studies uncertainty estimations with respect to various architectures and learning schemes. Three reviewers suggested acceptance based on the strength of the paper (fairly extensive experiments were conducted, and some new observations were discovered, such as the superiority of ViT). On the other hand, two reviewers proposed rejection due to lack of rigor in writing and lack of novelty. No consensus was reached through additional discussion. In particular, the reviewer's point that the experiment was not well controlled-different models were trained with different hyperparameters etc- seems quite important, and it weakens the significance of the contribution of the paper.
All reviewers agreed that it is a potentially interesting and important paper. I encourage the authors to resubmit in the future after carefully addressing the reviewers' concerns. | train | [
"xq00P3ogrM9",
"tScneNqWPIx",
"N0MLVmjhUTb",
"UepUkSdorE",
"o2BeGwlATQV",
"nR4FJywWpDc",
"cVducdbOOxi",
"jw_Bk_nkcC7",
"947ecenRKG7",
"lTZUnTfjv6W",
"MJAN34dKtdl",
"5o0NLN8GAwh",
"yb30DFtL7ti",
"i1UNz5Uy8cr",
"N96Q9HFxbV",
"QwbtrybqiD-",
"wzgfmQ-t-Yn",
"3ty5UU2YUR",
"gTme8EWcf0u"... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_re... | [
" I would like to thank the authors for their responses. I appreciate that.\n\nI also agree with the authors about the value of the empirical papers such as \"Understanding deep learning (still) requires rethinking generalization\". However, while it has well controlled experiments to clearly show the proposed clai... | [
-1,
-1,
5,
-1,
8,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6
] | [
-1,
-1,
4,
-1,
3,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"QwbtrybqiD-",
"MJAN34dKtdl",
"iclr_2022_LK8bvVSw6rn",
"3ty5UU2YUR",
"iclr_2022_LK8bvVSw6rn",
"i1UNz5Uy8cr",
"iclr_2022_LK8bvVSw6rn",
"iclr_2022_LK8bvVSw6rn",
"lTZUnTfjv6W",
"yb30DFtL7ti",
"N0MLVmjhUTb",
"yb30DFtL7ti",
"cVducdbOOxi",
"o2BeGwlATQV",
"gTme8EWcf0u",
"wzgfmQ-t-Yn",
"UepU... |
iclr_2022_kj8TBnJ0SXh | FaceDet3D: Facial Expressions with 3D Geometric Detail Hallucination | Facial Expressions induce a variety of high-level details on the 3D face geometry. For example, a smile causes the wrinkling of cheeks or the formation of dimples, while being angry often causes wrinkling of the forehead. Morphable Models (3DMMs) of the human face fail to capture such fine details in their PCA-based representations and consequently cannot generate such details when used to edit expressions. In this work, we introduce FaceDet3D, a method that generates - from a single image - geometric facial details that are consistent with any desired target expression. The facial details are represented as a vertex displacement map and used then by a Neural Renderer to photo-realistically render novel images of any single image in any desired expression and view. | Reject | The reviewers raised a number of major concerns including a poor readability of the presented materials, incremental novelty of the presented and, most importantly, insufficient and unconvincing ablation and experimental evaluation studies presented. The authors’ rebuttal failed to address all reviewers’ questions and failed to alleviate reviewers’ concerns. The authors explain that due to the lack of time they could not complete all experimental studies. A major revision of the paper is needed before the paper can be accepted for publication. Hence, I cannot suggest this paper for presentation at ICLR. | train | [
"Lcol6C5qBt",
"ei1PSpbwDz8",
"wDKbT2FT-oU",
"SefIbOGQbUG",
"E_Vw9nUYMeb",
"LsBH8h91a-6",
"ZCWWO2Kreq5",
"mNBPec33oL",
"G5NgQiUwgiA"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I appreciate your responses; I understand the pressure of submitting complete work in time for any submission deadline. I hope that, once completed, the ablation and the performance context breakdown will offer additional support to your work.",
" Thank you so much for the thoughtful review!\n\n* **Writing:**... | [
-1,
-1,
-1,
-1,
-1,
5,
5,
3,
5
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
3
] | [
"SefIbOGQbUG",
"G5NgQiUwgiA",
"mNBPec33oL",
"ZCWWO2Kreq5",
"LsBH8h91a-6",
"iclr_2022_kj8TBnJ0SXh",
"iclr_2022_kj8TBnJ0SXh",
"iclr_2022_kj8TBnJ0SXh",
"iclr_2022_kj8TBnJ0SXh"
] |
iclr_2022_FOfKpDnp2P | BIGRoC: Boosting Image Generation via a Robust Classifier | The interest of the machine learning community in image synthesis has grown significantly in recent years, with the introduction of a wide range of deep generative models and means for training them. Such machines’ ultimate goal is to match the distributions of the given training images and the synthesized ones. In this work, we propose a general model-agnostic technique for improving the image quality and the distribution fidelity of generated images, obtained by any generative model. Our method, termed BIGRoC (boosting image generation via a robust classifier), is based on a post-processing procedure via the guidance of a given robust classifier and without a need for additional training of the generative model. Given a synthesized image, we propose to update it through projected gradient steps over the robust classifier, in an attempt to refine its recognition. We demonstrate this post-processing algorithm on various image synthesis methods and show a significant improvement of the generated images, both quantitatively and qualitatively. | Reject | The paper proposes to improve generated images via a post-processing update procedure guided by gradients from a robust classifier. After the author response and discussion, all reviewers agree that the paper is below the acceptance threshold. Reviewer concerns include limited technical novelty and missing experimental comparison to relevant baselines. | val | [
"yAvBo8MkJkc",
"aC-bpE9qI6",
"EKZiRyfZXAN",
"YZiIle1nMHJ",
"jxWtvqWZJHH",
"WEicPAcROBV",
"CT8balps9Ow",
"qG-_XxMUx1_",
"vUzfgagfcGR",
"Da-V-CO2M3",
"AfJcQ-bW5oyI",
"wClBNX3X5ptT",
"BTJWLFW8KEZ",
"1z0IOlg4Cl",
"O8Qh3D_Ff9_"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response. We would like to refer the reviewer to our comprehensive comparison with [2,3,4] in our responses to reviewer Ezr2. To summarize, our method is simpler, operates better than the aforementioned methods with fewer requirements, and is capable of operating in a setting that the laters si... | [
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5
] | [
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3
] | [
"jxWtvqWZJHH",
"CT8balps9Ow",
"WEicPAcROBV",
"iclr_2022_FOfKpDnp2P",
"wClBNX3X5ptT",
"AfJcQ-bW5oyI",
"qG-_XxMUx1_",
"vUzfgagfcGR",
"Da-V-CO2M3",
"1z0IOlg4Cl",
"O8Qh3D_Ff9_",
"YZiIle1nMHJ",
"iclr_2022_FOfKpDnp2P",
"iclr_2022_FOfKpDnp2P",
"iclr_2022_FOfKpDnp2P"
] |
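The rows above share a fixed schema: `review_writers`, `review_ratings`, and `review_confidences` are index-aligned parallel lists, and a rating or confidence of `-1` appears on entries that carry no score (e.g. author replies and general comments). The sketch below is a hypothetical helper — `mean_rating` and the inlined example row are illustrations, not part of the dataset's tooling — showing one way to average only the genuine review scores for a row.

```python
# Hypothetical helper for rows of this review dataset. The parallel
# lists review_writers / review_ratings / review_confidences are
# index-aligned, and -1 marks entries without a score (author replies).

def mean_rating(row):
    """Average the ratings of actual reviews, skipping the -1 sentinel."""
    scores = [r for r in row["review_ratings"] if r != -1]
    return sum(scores) / len(scores) if scores else None

# Example row modeled on iclr_2022_TfwF7pqwqdm above.
row = {
    "paper_id": "iclr_2022_TfwF7pqwqdm",
    "review_writers": ["official_reviewer"] * 5,
    "review_ratings": [3, 5, 3, 5, 5],
    "review_confidences": [4, 3, 4, 4, 4],
}
print(mean_rating(row))  # -> 4.2
```

Returning `None` (rather than raising) when every entry is `-1` keeps the helper safe on rows consisting only of author comments.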