paper_id stringlengths 19 21 | paper_title stringlengths 8 170 | paper_abstract stringlengths 8 5.01k | paper_acceptance stringclasses 18 values | meta_review stringlengths 29 10k | label stringclasses 3 values | review_ids list | review_writers list | review_contents list | review_ratings list | review_confidences list | review_reply_tos list |
|---|---|---|---|---|---|---|---|---|---|---|---|
iclr_2022_rSI-tyrv-ni | Does Entity Abstraction Help Generative Transformers Reason? | Pre-trained language models (LMs) often struggle to reason logically or generalize in a compositional fashion. Recent work suggests that incorporating external entity knowledge can improve language models' abilities to reason and generalize. However, the effect of explicitly providing entity abstraction remains unclear, especially with recent studies suggesting that pre-trained models already encode some of that knowledge in their parameters. In this work, we study the utility of incorporating entity type abstractions into pre-trained Transformers and test these methods on three different NLP tasks requiring different forms of logical reasoning: (1) compositional language understanding with text-based relational reasoning (CLUTRR), (2) multi-hop question answering (HotpotQA), and (3) conversational question answering (CoQA). We propose and empirically explore three different ways to add such abstraction: (i) as additional input embeddings, (ii) as a separate sequence to encode, and (iii) as an auxiliary prediction task for the model. Overall, our analysis demonstrates that models with abstract entity knowledge perform slightly better than those without it. However, our experiments also show that the benefits strongly depend on the technique used and the task at hand. The best abstraction-aware model achieved an overall accuracy of 88.8% compared to the baseline model achieving 62.3% on CLUTRR. In addition, abstraction-aware models showed improved compositional generalization in both interpolation and extrapolation settings. However, for HotpotQA and CoQA, we find that F1 scores improve by only 0.5% on average. Our results suggest that the benefits of explicit abstraction could be very significant in formally defined logical reasoning settings such as CLUTRR, but point to the notion that explicit abstraction is likely less beneficial for NLP tasks having less formal logical structure. | Reject | This is a clearly written paper about the integration of entity abstraction into transformer-based language modeling methods for language processing tasks that require reasoning (clarified by the authors later as tasks that require longer chains of reasoning), and it shows results on CLUTRR, HotpotQA, and CoQA. The reviewers seem to agree on two issues: First, it is not clear why the proposed idea does not result in a lot of improvement, except on the synthetic CLUTRR. The authors provided additional experimental results on yet another dataset. Second, the paper would benefit from a detailed analysis of the experimental results, for example, why abstractions do not help on all datasets. | train | [
"MPLcaqr5VsZ",
"kpHQavhEY3",
"DcHCriXJwM_",
"crMMT5P6gOj",
"QdmO8aNNZqx",
"PtNolBrhktc",
"zd_ZGO6RFby",
"mGIT1Cavqzh",
"oCh_0x9jrm9",
"Yt2EwXrswPI"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We think there is potentially a key misunderstanding of the nature of our experimental results leading a number of reviewers to score our paper as a reject. We just want to make sure this key point is clear to everyone. Our main conclusion is that adding explicit abstraction is **not necessary for shallow surface... | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"iclr_2022_rSI-tyrv-ni",
"Yt2EwXrswPI",
"oCh_0x9jrm9",
"mGIT1Cavqzh",
"zd_ZGO6RFby",
"iclr_2022_rSI-tyrv-ni",
"iclr_2022_rSI-tyrv-ni",
"iclr_2022_rSI-tyrv-ni",
"iclr_2022_rSI-tyrv-ni",
"iclr_2022_rSI-tyrv-ni"
] |
iclr_2022_zbZL1s-pBF | Training-Free Robust Multimodal Learning via Sample-Wise Jacobian Regularization | Multimodal fusion emerges as an appealing technique to improve model performances on many tasks. Nevertheless, the robustness of such fusion methods is rarely involved in the present literature. In this paper, we are the first to propose a training-free robust late-fusion method by exploiting conditional independence assumption and Jacobian regularization. Our key is to minimize the Frobenius norm of a Jacobian matrix, where the resulting optimization problem is relaxed to a tractable Sylvester equation. Furthermore, we provide a theoretical error bound of our method and some insights about the function of the extra modality. Several numerical experiments on AV-MNIST, RAVDESS, and VGGsound demonstrate the efficacy of our method under both adversarial attacks and random corruptions. | Reject | Three experts reviewed the paper and gave mixed reviews. Reviewer BBZL raised their score to 6 in the discussion phase. Reviewer dv5k was not fully convinced by the rebuttal and remained negative. Reviewer oUrr also remained negative. The reviewers were not excited by the proposed method in general and raised questions about both experiments and theoretical results. AC found clear merits in the paper, but the reviewers' comments suggested the work could be strengthened in both experiments and presentation. Hence, the decision is *not* to recommend acceptance at this time. The authors are encouraged to consider the reviewers' comments when revising the paper for submission elsewhere. | train | [
"KudKSLeeMyw",
"Dhc6YGiauY",
"seMiPgvFzu",
"diqPb2gYC12",
"5vLBaGy_LYG",
"QYGtw0hiLfe",
"szy1znMdvOm",
"NDkk7OAMzk1",
"LPNrwfXIFJz",
"-4GuJ81LSaS",
"eNjwwnLrxRR",
"HPDioUk70I6"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" * Reply to Weakness #2\n\nThanks for the comments. The authors would like to politely point out that the consideration of out-of-domain setting (or domain adaptation) is itself an open/hot topic in the research literature. We would consider adapting our method to suit this setting in the future, but it appears to... | [
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"Dhc6YGiauY",
"NDkk7OAMzk1",
"HPDioUk70I6",
"eNjwwnLrxRR",
"iclr_2022_zbZL1s-pBF",
"szy1znMdvOm",
"5vLBaGy_LYG",
"LPNrwfXIFJz",
"HPDioUk70I6",
"eNjwwnLrxRR",
"iclr_2022_zbZL1s-pBF",
"iclr_2022_zbZL1s-pBF"
] |
iclr_2022_nRCS3BfynGQ | Symmetry-driven graph neural networks | Exploiting symmetries and invariance in data is a powerful, yet not fully exploited, way to achieve better generalisation with more
efficiency. In this paper, we introduce two graph network architectures that are equivariant to several types of transformations affecting the node coordinates. First, we build equivariance to any transformation in the coordinate embeddings that preserves the distance between neighbouring nodes, allowing for equivariance to the Euclidean group. Then, we introduce angle attributes to build equivariance to any angle preserving transformation - thus, to the conformal group. Thanks to their equivariance properties, the proposed models can be vastly more data efficient with respect to classical graph architectures, intrinsically equipped with a better inductive bias and better at generalising. We demonstrate these capabilities on a synthetic dataset composed of $n$-dimensional geometric objects. Additionally, we provide examples of their limitations when (the right) symmetries are not present in the data. | Reject | This work proposes to extend the invariance/equivariance properties of GNNs by focusing on distance-preserving and angle-preserving transformations, given respectively by the Euclidean and Conformal group. Preliminary experiments are reported that demonstrate the advantage of such architectures.
Reviewers found this work generally interesting, tackling an important problem and proposing a valid solution. However, they also raised important concerns, namely the relatively minor novelty relative to recent models (such as EGNN), as well as the lack of convincing real-world experiments that would validate the modeling assumptions. Taking all these considerations into account, the AC recommends rejection at this time, and encourages the authors to address the points raised by reviewers in a revision. | train | [
"7SGRMe4THWK",
"MEK5p15eJVV",
"5FAizQ93kU4",
"VTvM5U6ySnx",
"R-jV_qnZOlv",
"VdNB-EYvYYS",
"k_lR4Ee6U9i",
"Zl7nNapZM-",
"w2rxBzHbcDE",
"E6655_cA2dx",
"Gj2Acv41GAs",
"UopJEuuP_pi",
"02gwI7FTyz",
"GtWFT9e2VuH",
"JoLkF-NBwLm"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the additional comments. With respect to the differences with respect to EGNN we agree on the similarities with our DGN (as we show in the appendix EGNN is a special instance of DGN). However, we would like to remark that in the paper we introduce also the AGN graph block which can deal ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"MEK5p15eJVV",
"E6655_cA2dx",
"E6655_cA2dx",
"Gj2Acv41GAs",
"VdNB-EYvYYS",
"w2rxBzHbcDE",
"Zl7nNapZM-",
"JoLkF-NBwLm",
"GtWFT9e2VuH",
"02gwI7FTyz",
"UopJEuuP_pi",
"iclr_2022_nRCS3BfynGQ",
"iclr_2022_nRCS3BfynGQ",
"iclr_2022_nRCS3BfynGQ",
"iclr_2022_nRCS3BfynGQ"
] |
iclr_2022_biyvmQe5jM | How to decay your learning rate | Complex learning rate schedules have become an integral part of deep learning. We find empirically that common fine-tuned schedules decay the learning rate after the weight norm bounces. This leads to the proposal of ABEL: an automatic scheduler which decays the learning rate by keeping track of the weight norm. ABEL's performance matches that of tuned schedules, is more robust with respect to its parameters and does not depend on the time budget. Through extensive experiments in vision, NLP, and RL, we show that if the weight norm does not bounce, we can simplify schedules even further with no loss in performance. In such cases, a complex schedule has similar performance to a constant learning rate with a decay at the end of training. | Reject | The authors provide an investigation into tuning learning rate schedules. The problem is certainly of great practical importance. After discussion, the reviewers felt the main idea of the paper is worth pursuing, but could use significant refinement. One reviewer suggests: "
"...better treatment of the background material, clearer identification on when the weight norm behaviour happens beside norm (possibly looking also for counter-examples!), rethinking section 6, and a more convincing set of experiments (for showing convincing evidence about e.g. 5.2). Regarding this last point, I want to clarify that in my review I mentioned [1] not for the grid search, but rather for the time-controlled experiments. If you go with random search for selecting the hyperparameters of the learning rate adaptation methods. I personally think that a recipe to make the comparison fair enough is to choose a prior distribution (e.g. uniform/log uniform) that covers reasonable values (e.g. as used for different datasets) with mean equal/close to the known well-performing ("optimal") value." Other reviewers were generally of a similar opinion. The authors are encouraged to continue with the work, taking reviewer comments into account for updated versions. | val | [
"DZzdUK2h3O0",
"t31EAG_PoPT",
"2aTqZMYSU5F",
"iJjS6dfiaPD",
"Tg1H4KkndcT",
"UkJeHxPZliP",
"g5Z_6ejhCZb",
"Vsf40uIPTzJ"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents a novel method for automatically decaying the learning rate when training deep neural networks. The core idea relies on the observation that the weight norms often times bounce up during training, and that decaying the learning rate after the bouncing is beneficial. The authors conducted extens... | [
6,
3,
-1,
-1,
-1,
-1,
3,
6
] | [
3,
4,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2022_biyvmQe5jM",
"iclr_2022_biyvmQe5jM",
"Vsf40uIPTzJ",
"t31EAG_PoPT",
"g5Z_6ejhCZb",
"DZzdUK2h3O0",
"iclr_2022_biyvmQe5jM",
"iclr_2022_biyvmQe5jM"
] |
iclr_2022_vyn49BUAkoD | Bayesian Active Learning with Fully Bayesian Gaussian Processes | The bias-variance trade-off is a well-known problem in machine learning that only gets more pronounced the less available data there is. When data is scarce, such as in metamodeling, active learning, and Bayesian optimization, neglecting this trade-off can cause inefficient and non-optimal querying, leading to unnecessary data labeling. In this paper, we focus on metamodeling with active learning and the canonical Gaussian Process (GP). We recognize that, for the GP, the bias-variance trade-off regulation is made by optimization of the two hyperparameters: the length scale and noise-term. Considering that the optimal mode of the joint posterior of the hyperparameters is equivalent to the optimal bias-variance trade-off, we approximate this joint posterior and utilize it to design two new acquisition functions. The first one is a mode-seeking Bayesian variant of Query-by-Committee (B-QBC), and the second is simultaneously mode-seeking and minimizing the predictive variance through a Query by Mixture Gaussian Processes (QB-MGP) formulation. Across seven simulators, we empirically show that B-QBC outperforms the benchmark functions, whereas QB-MGP is the most robust acquisition function and achieves the best accuracy with the fewest iterations. We generally show that incorporating the bias-variance trade-off in the acquisition functions mitigates unnecessary and expensive data labeling. | Reject | This paper studied Bayesian active regression with Gaussian processes, and proposed two intuitive algorithms inspired by the classical disagreement-based and uncertainty sampling criteria. The reviewers appreciate the motivation and overall idea of taking a fully Bayesian approach by utilizing the joint posterior of the hyperparameters for active learning. However, there are shared concerns among the reviewers in the clarify and consistency of several key technical components, including discussion around bias-variance tradeoff and its connection to the fully Bayesian approach, as well as in the experimental details, which make the current package insufficient for publication.
Reviewers provide very useful feedback (in particular with a very extensive review by Reviewer hDWW) for improving the current work. The authors acknowledge in their responses that these are valid concerns and they would address these issues in a further version of this work. | train | [
"RKRlsfREzii",
"8k9DTRwj6n-",
"pN9rV_NkW5Z",
"OSACI6dHVWV",
"6IO2v3UXZgH",
"jLAMGNe00oN",
"zSDKecx22Yo",
"YkxrVVHzBVF",
"Mjb6yavGDB",
"o1RCOkKscQ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"It seems that the authors introduces two novel acquisitions functions for Gaussian Process models:\nthe first one is a mode-seeking Bayesian variant of Query-by- Committee (B-QBC), and the second is simultaneously mode-seeking and minimizing the predictive variance through a Query by Mixture Gaussian Processes (QB... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
5
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"iclr_2022_vyn49BUAkoD",
"OSACI6dHVWV",
"zSDKecx22Yo",
"o1RCOkKscQ",
"RKRlsfREzii",
"Mjb6yavGDB",
"YkxrVVHzBVF",
"iclr_2022_vyn49BUAkoD",
"iclr_2022_vyn49BUAkoD",
"iclr_2022_vyn49BUAkoD"
] |
iclr_2022_vr4Wo33bd1 | Semi-supervised Long-tailed Recognition using Alternate Sampling | Main challenges in long-tailed recognition come from the imbalanced data distribution and sample scarcity in its tail classes. While techniques have been proposed to achieve a more balanced training loss and to improve tail classes data variations with synthesized samples, we resort to leverage readily available unlabeled data to boost recognition accuracy. The idea leads to a new recognition setting, namely semi-supervised long-tailed recognition. We argue this setting better resembles the real-world data collection and annotation process and hence can help close the gap to real-world scenarios. To address the semi-supervised long-tailed recognition problem, we present an alternate sampling framework combining the intuitions from successful methods in these two research areas. The classifier and feature embedding are learned separately and updated iteratively. The class-balanced sampling strategy has been implemented to train the classifier in a way not affected by the pseudo labels' quality on the unlabeled data. A consistency loss has been introduced to limit the impact from unlabeled data while leveraging them to update the feature embedding. We demonstrate significant accuracy improvements over other competitive methods on two datasets. | Reject | The paper addresses semi-supervised learning with unbalanced class distribution, a.k.a long-tail. The main idea is to alternate learning of the representation and the classifier.
Reviewers pointed out that several papers already addressed this learning setup, often under the name "imbalanced semi-supervised learning". No rebuttal was submitted.
The paper should make direct comparisons to the recent papers listed by reviewers, both in terms of the technical approach and in terms of empirical experiments. It cannot be accepted to ICLR. | train | [
"-KpYXwzH6VU",
"qCms1Fi6sd",
"k5ejjpH76LM",
"cTUcjpt6vck",
"FbpkgBmfXY"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a new setting--semi-supervised long-tailed recognition. To harness the imbalanced unlabeled data, the authors combined the decoupling in long-tailed recognition and pseudo-labeling in semi-supervised learning, which formulates a three-stage method. Stage 1 generates pseudo-labels with a classif... | [
5,
5,
5,
1,
5
] | [
4,
5,
5,
5,
3
] | [
"iclr_2022_vr4Wo33bd1",
"iclr_2022_vr4Wo33bd1",
"iclr_2022_vr4Wo33bd1",
"iclr_2022_vr4Wo33bd1",
"iclr_2022_vr4Wo33bd1"
] |
iclr_2022_2M0WXSP6Qi | Information-theoretic stochastic contrastive conditional GAN: InfoSCC-GAN | Conditional generation is a subclass of generative problems when the output of generation is conditioned by a class attributes’ information. In this paper, we present a new stochastic contrastive conditional generative adversarial network (InfoSCC-GAN) with explorable latent space. The InfoSCC-GAN architecture is based on an unsupervised contrastive encoder built on the InfoNCE paradigm, attributes' classifier, and stochastic EigenGAN generator.
We propose two approaches for selecting the class attributes: external attributes from the dataset annotations and internal attributes from the clustered latent space of the encoder. We propose a novel training method based on a generator regularization using external or internal attributes every $n$-th iteration using the pre-trained contrastive encoder and pre-trained attributes’ classifier. The proposed InfoSCC-GAN is derived from an information-theoretic formulation of mutual information maximization between the input data and latent space representation for the encoder and the latent space and generated data for the decoder. Thus, we demonstrate a link between the training objective functions and the above information-theoretic formulation. The experimental results show that InfoSCC-GAN outperforms vanilla EigenGAN in image generation on several popular datasets, yet providing an interpretable latent space. In addition, we investigate the impact of regularization techniques and each part of the system by performing an ablation study. Finally, we demonstrate that thanks to the stochastic EigenGAN generator, the proposed framework enjoys a truly stochastic generation in contrast to vanilla deterministic GANs yet with the independent training of an encoder, a classifier, and a generator.
The code, supplementary materials, and demos are available at \url{https://anonymous.4open.science/r/InfoSCC-GAN-D113} | Reject | This paper presents a method for conditional generation with GANs.
The reviewers note the lack of novelty, or the lack of a theoretical or empirical motivation for the novel bits. They point out flaws in the correctness of the paper, and limited experimental evaluation.
The reviewers agree to reject the paper. Unfortunately, the authors did not answer the reviewers. I therefore recommend rejecting the paper for this conference, and I strongly suggest that the authors address the reviewers' concerns if they are to submit this paper again to a future venue. | train | [
"s8kZu66Sp7Q",
"tMgw00YWyjw",
"p2u4OQo75e"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a GAN for conditional generation. The authors combine an unsupervised contrastive encoder, stochastic EigenGAN generator, and a classifier. InfoSCC-GAN can perform image generation conditioned on external attributes by maximizing mutual information between input data and class attributes. In ad... | [
3,
5,
1
] | [
4,
4,
4
] | [
"iclr_2022_2M0WXSP6Qi",
"iclr_2022_2M0WXSP6Qi",
"iclr_2022_2M0WXSP6Qi"
] |
iclr_2022_ptZfV8tJbpe | Modeling label correlations implicitly through latent label encodings for multi-label text classification | Multi-label text classification (MLTC) aims to assign a set of labels to each given document. Unlike single-label text classification methods that often focus on document representation learning, MLTC faces a key challenge of modeling label correlations due to complex label dependencies. Previous state-of-the-art works model label correlations explicitly. It lacks flexibility and is prone to introduce inductive bias that may not always hold, such as label-correlation simplification, sequencing label sets, and label-correlation overload. To address this issue, this paper uses latent label representations to model label correlations implicitly. Specifically, the proposed method concatenates a set of latent labels (instead of actual labels) to the text tokens, inputs them to BERT, then maps the contextual encodings of these latent labels to actual labels cooperatively. The correlations between labels, and between labels and the text are modeled indirectly through these latent-label encodings and their correlations. Such latent and distributed correlation modeling can impose less a priori limits and provide more flexibility. The method is conceptually simple but quite effective. It improves the state-of-the-art results on two widely used benchmark datasets by a large margin. Further experiments demonstrate that its effectiveness lies in label-correlation utilization rather than document representation. Feature study reveals the importance of using latent label embeddings. It also reveals that contrary to the other token embeddings, the embeddings of these latent labels are sensitive to tasks; sometimes pretraining them can lead to significant performance loss rather than promotion. This result suggests that they are more related to task information (i.e., the actual labels) than the other tokens. | Reject | This paper proposes an approach for multi-label text classification. The method constitutes appending few "label" tokens to the beginning of the text input instead of the traditional single <CLS> token. The paper shows improvements over a competitive baseline on two datasets.
Reviewers agree that the novelty and contribution of the paper are marginal. The method of appending extra "fake" tokens has been used in other works as a "trick". It is also unclear how adding a few extra tokens allows the model to represent label dependencies better.
The authors did not respond to the reviews, so there was no further discussion. | train | [
"IzrISpYs7Hv",
"YiejFalUhtM",
"CqRru6rTCd0",
"iX8rgelBtu",
"WGfgoUSLms"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a novel multi-label text classification method named LLEM that jointly encodes a document and latent labels (with smaller number than actual labels), and tries to better model label correlations implicitly and impose less a priori limits compared with previous state-of-the-art works. The method... | [
6,
5,
3,
3,
3
] | [
4,
4,
4,
4,
4
] | [
"iclr_2022_ptZfV8tJbpe",
"iclr_2022_ptZfV8tJbpe",
"iclr_2022_ptZfV8tJbpe",
"iclr_2022_ptZfV8tJbpe",
"iclr_2022_ptZfV8tJbpe"
] |
iclr_2022_rX3rZYP8zZF | CareGraph: A Graph-based Recommender System for Diabetes Self-Care | In this work, we build a knowledge graph that captures key attributes of content and notifications in a digital health platform for diabetes management. We propose a Deep Neural Network-based recommender that uses the knowledge graph embeddings to recommend health nudges for maximizing engagement by combating the cold-start and sparsity problems. We use a leave-one-out approach to evaluate the model. We compare the proposed model performance with a text similarity and Deep-and-Cross Network-based approach as the baseline. The overall improvement in Click-Through-Rate prediction AUC for the Knowledge-Graph-based model was 11%. We also observe that our model improved the average AUC by 5% in cold-start situations. | Reject | The paper introduces the CareGraph, a knowledge graph based recommendation approach.
CareGraph is a deep neural network-based recommender that can be used on a mobile healthcare platform
for nudge recommendation. The main motivation is to use the knowledge graph to
mitigate cold-start problems when recommending nudge messages.
The paper's main strength is the topic of interest. Research on recommender systems in the healthcare context is of great interest.
However, the reviews raised concerns that outweigh the strengths.
The majority of reviewers agree that the work is not ready for publication.
The main concerns focus on a weak experimental section and a lack of technical details.
I recommend that the authors incorporate all the reviewers' comments and make a
stronger submission to a future conference! | train | [
"VzhjYcvoIIM",
"hlY4EHmNEor",
"duNAf9WVq-Y",
"LR73SmsghSr"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper investigates the nudge recommendation problem in a mobile healthcare domain, and introduces a knowledge graph based recommendation approach. The proposed method builds a nudge-attribute knowledge graph and uses TransE to infer nudge and attribute embeddings, which are used in a downstream step that leve... | [
3,
5,
3,
3
] | [
4,
5,
4,
4
] | [
"iclr_2022_rX3rZYP8zZF",
"iclr_2022_rX3rZYP8zZF",
"iclr_2022_rX3rZYP8zZF",
"iclr_2022_rX3rZYP8zZF"
] |
iclr_2022_bOcUqfdH3S8 | Provably Calibrated Regression Under Distribution Drift | Accurate uncertainty quantification is a key building block of trustworthy machine learning systems. Uncertainty is typically represented by probability distributions over the possible outcomes, and these probabilities should be calibrated, \textit{e.g}. the 90\% credible interval should contain the true outcome 90\% of the times. In the online prediction setup, existing conformal methods can provably achieve calibration assuming no distribution shift; however, the assumption is difficult to verify, and unlikely to hold in many applications such as time series prediction. Inspired by control theory, we propose a prediction algorithm that guarantees calibration even under distribution shift, and achieves strong performance on metrics such as sharpness and proper scores. We compare our method with baselines on 19 time-series and regression datasets, and our method achieves approximately 2x reduction in calibration error, comparable sharpness, and improved downstream decision utility. | Reject | The paper studies an important problem of quantifying uncertainty (as measure by calibration) of predictions made by an ML algorithm in the presence of distribution drift. However, all reviewers point out a slew of concerns that went un-rebutted by the authors. The reviewers concurred that the paper deserved to be rejected at the current stage, and I concur. I recommend that the authors take the critical and constructive feedback into account to improve the paper and perhaps resubmit to a different venue in 2022. | val | [
"RXzD3q9XmTn",
"Tnovew-hpbn",
"iy11MAPo0p",
"FFF1JCsornH"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper considers the problem of calibrating quantiles outputted by a regression model. The data is assumed to draw from a non-iid source (e.g. time series with distribution shift).\nThe paper proposes a method that adjusts the output quantiles to be better calibrated for a less-restricted definition of b-calib... | [
5,
5,
5,
5
] | [
3,
3,
4,
3
] | [
"iclr_2022_bOcUqfdH3S8",
"iclr_2022_bOcUqfdH3S8",
"iclr_2022_bOcUqfdH3S8",
"iclr_2022_bOcUqfdH3S8"
] |
iclr_2022_W08IqLMlMer | Offline Pre-trained Multi-Agent Decision Transformer | Offline reinforcement learning leverages static datasets to learn optimal policies with no necessity to access the environment. This is desirable for multi-agent systems due to the expensiveness of agents' online interactions and the demand for sample numbers. Yet, in multi-agent reinforcement learning (MARL), the paradigm of offline pre-training with online fine-tuning has never been reported, nor datasets or benchmarks for offline MARL research are available. In this paper, we intend to investigate whether offline training is able to learn policy representations that elevate performance on downstream MARL tasks. We introduce the first offline dataset based on StarCraftII with diverse quality levels and propose a multi-agent decision transformer (MADT) for effective offline learning. MADT integrates the powerful temporal representation learning ability of Transformer into both offline and online multi-agent learning, which promotes generalisation across agents and scenarios. The proposed method demonstrates superior performance than the state-of-the-art algorithms in offline MARL. Furthermore, when applied to online tasks, the pre-trained MADT largely improves sample efficiency, even in zero-shot task transfer. To our best knowledge, this is the first work to demonstrate the effectiveness of pre-trained models in terms of sample efficiency and generalisability enhancement in MARL. | Reject | The reviewers have raised relevant concerns that preclude acceptance and the authors have not provided a response. At this time, all reviewers concur that this paper should be rejected and I agree. | train | [
"L6wDHYA9-Kx",
"vlHIZnpXEMs",
"RN9aUTl-AT9",
"2TxL-2xd2u-"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a multi-agent decision transformer to improve the learning performance in the setting of offline pre-training with online fine-tuning. Experimental results show that it outperforms offline RL algorithms, BCQ, CQL and ICQ in multiple Starcraft II games. Novelty:\nThis paper is not novel as it se... | [
3,
3,
5,
3
] | [
4,
4,
3,
5
] | [
"iclr_2022_W08IqLMlMer",
"iclr_2022_W08IqLMlMer",
"iclr_2022_W08IqLMlMer",
"iclr_2022_W08IqLMlMer"
] |
iclr_2022_liV-Re74fK | Density Estimation for Conservative Q-Learning | Batch Reinforcement Learning algorithms aim at learning the best policy from a batch of data without interacting with the environment. Within this setting, one difficulty is to correctly assess the value of state-action pairs that are far from the dataset. Indeed, the lack of information may provoke an overestimation of the value function, leading to non-desirable behaviors. A compromise between enhancing the behaviour policy's performance and staying close to it must be found. To alleviate this issue, most existing approaches introduce a regularization term to favor state-action pairs from the dataset. In this paper, we refine this idea by estimating the density of these state-action pairs to distinguish neighbourhoods. The resulting regularization guides the policy toward meaningful unseen regions, improving the learning process. We hence introduce Density Conservative Q-Learning (D-CQL), a batch-RL algorithm with strong theoretical guarantees that carefully penalizes the value function based on the amount of information collected in the state-action space. The performance of our approach is outlined on many classical benchmark in batch-RL. | Reject | The authors introduce a modification to CQL to use a weighting based on density estimates. In an idealized setting, they show that the estimate Q-values bound the true Q-values. Finally, they evaluate their proposed approach on a few benchmark offline RL tasks.
Generally, all reviewers felt that the results were too incremental. The theoretical result follows with light modifications from the CQL paper and even then, the implications of the result are unclear. The experimental results showed small improvements or comparable performance while requiring training a density estimator and introducing an additional hyperparameter. Furthermore, the set of tasks evaluated was limited and no comparisons to other methods than CQL were shown.
While I appreciate the effort the authors took to investigate this improvement, at this time, the paper falls below the bar and I recommend rejection. | train | [
"xguQVa8DAEt",
"QYHmS1Wzdxq",
"HSjK5SsG6M",
"Manwb4kcqne",
"8xMaxYIlUtT",
"vp1nNsnX4xu",
"kjVI80X9hQG",
"oZP53YkEJ",
"fX1XLRIG5Vt"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for his detailed review. We performed a global answer to the reviewers main concerns. We answer the remaining open questions in the following paragraphs.\n\n**Difference with CQL:** The difference between CQL and D-CQL goes beyond from transforming $\\zeta_\\nu(a|s) = 1 - \\frac{\\hat{\\pi}_... | [
-1,
-1,
-1,
-1,
-1,
3,
5,
5,
3
] | [
-1,
-1,
-1,
-1,
-1,
4,
3,
2,
5
] | [
"fX1XLRIG5Vt",
"oZP53YkEJ",
"kjVI80X9hQG",
"vp1nNsnX4xu",
"iclr_2022_liV-Re74fK",
"iclr_2022_liV-Re74fK",
"iclr_2022_liV-Re74fK",
"iclr_2022_liV-Re74fK",
"iclr_2022_liV-Re74fK"
] |
iclr_2022_MQuxKr2F1Xw | Multi-Trigger-Key: Towards Multi-Task Privacy-Preserving In Deep Learning | Deep learning-based Multi-Task Classification (MTC) is widely used in applications like facial attribute and healthcare that warrant strong privacy guarantees. In this work, we aim to protect sensitive information in the inference phase of MTC and propose a novel Multi-Trigger-Key (MTK) framework to achieve the privacy-preserving objective. MTK associates each secured task in the multi-task dataset with a specifically designed trigger-key. The true information can be revealed by adding the trigger-key if the user is authorized. We obtain such an MTK model by training it with a newly generated training set. To address the information leakage malaise resulting from correlations among different tasks, we generalize the training process by incorporating an MTK decoupling process with a controllable trade-off between the protective efficacy and the model performance. Theoretical guarantees and experimental results demonstrate the effectiveness of the privacy protection without appreciable hindering on the model performance. | Reject | The paper discusses an approach for privacy preservation in the context of multi-task classification. All reviewers struggled to follow the paper and had fundamental questions about the motivation, methods and technical contributions. Unfortunately there was no feedback from the authors to help support the submission. | train | [
"vXu6_uPcruc",
"KS_nYQYbc2S",
"jAMGHTu6BTn",
"d3fh0HUy4Lu"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper provides a framework called Multi-Trigger-Key(MTK) to achieve privacy-preserving inference in multi-task setting. This paper is quite confusing in the way it describes its motivation and methods. Here are some of my concerns:\n- How are tasks separated into secure tasks and unprotected tasks? Moreover,... | [
3,
3,
3,
3
] | [
4,
3,
2,
3
] | [
"iclr_2022_MQuxKr2F1Xw",
"iclr_2022_MQuxKr2F1Xw",
"iclr_2022_MQuxKr2F1Xw",
"iclr_2022_MQuxKr2F1Xw"
] |
iclr_2022_1kqWZlj4QYJ | Learning Two-Step Hybrid Policy for Graph-Based Interpretable Reinforcement Learning | We present a two-step hybrid reinforcement learning (RL) policy that is designed to generate interpretable and robust hierarchical policies
on the RL problem with graph-based input. Unlike prior deep reinforcement learning policies parameterized by an end-to-end black-box graph neural network, our approach disentangles the decision-making process into two steps. The first step is a simplified classification problem that maps the graph input to an action group where all actions share a similar semantic meaning. The second step implements a sophisticated rule-miner that conducts explicit one-hop reasoning over the graph and identifies decisive edges in the graph input without the necessity of heavy domain knowledge. This two-step hybrid policy presents human-friendly interpretations and achieves better performance in terms of generalization and robustness. Extensive experimental studies on four levels of complex text-based games have demonstrated the superiority of the proposed method compared to the state-of-the-art. | Reject | The authors did not respond to the concerns raised by all the reviewers. As the recommendations were on the edge, this lack of engagement seems odd, and it left the reviewers with little material to discuss and revise their recommendation. We recommend the authors carefully consider the reviews if they plan to resubmit. | train | [
"4BXL9-2Nbzi",
"BlfI5o6CCEI",
"XlU9V7dP9rc",
"qtUYpANMOhK",
"y4mfk1blvXp"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We hope all is fine. Please reach out if not. I think we can still fit a very quick discussion if you submit a response today.\n\nThanks",
"This work targets to the problem of limited interpretability in RL. To solve the problem, the authors propose a two-step hybrid model, including action pruner and action se... | [
-1,
5,
5,
5,
3
] | [
-1,
4,
3,
4,
4
] | [
"iclr_2022_1kqWZlj4QYJ",
"iclr_2022_1kqWZlj4QYJ",
"iclr_2022_1kqWZlj4QYJ",
"iclr_2022_1kqWZlj4QYJ",
"iclr_2022_1kqWZlj4QYJ"
] |
iclr_2022_tHx6q2dM86s | HYPOCRITE: Homoglyph Adversarial Examples for Natural Language Web Services in the Physical World | Recently, as Artificial Intelligence (AI) develops, many companies in various industries are trying to use AI by grafting it into their domains.
Also, for these companies, various cloud companies (e.g., Amazon, Google, IBM, and Microsoft) are providing AI services in the form of Machine-Learning-as-a-Service (MLaaS).
However, although these AI services are very advanced and well-made, security vulnerabilities such as adversarial examples still exist, which can interfere with normal AI services.
This paper demonstrates a HYPOCRITE for hypocrisy that generates homoglyph adversarial examples for natural language web services in the physical world. This hypocrisy can disrupt normal AI services provided by the cloud companies.
The key idea of HYPOCRITE is to replace English characters with other international characters that look similar to them in order to give the dataset noise to the AI engines.
By using this key idea, parts of text can be appropriately replaced with subtext with malicious meaning through black-box attacks for natural language web services in order to cause misclassification.
In order to show attack potential by HYPOCRITE, this paper implemented a framework that makes homoglyph adversarial examples for natural language web services in the physical world and evaluated the performance under various conditions.
Through extensive experiments, it is shown that HYPOCRITE is more effective than other baselines in terms of both attack success rate and perturbed ratio. | Reject | The paper presents attacks against sentiment recognition NLP systems using specially crafted homoglyphs. The novelty of the proposed method is, however, marginal; contributions over some of the related work are unclear. Furthermore, the quality of writing is insufficient. The authors provided no response to reviewers' comments. | train | [
"00TySmBeIP",
"niMPwkaKd-W",
"oTlQdAVQZ7w"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes using homoglyphs to attack commercial NLP models for sentiment classification. Homoglyphs look like the characters in English language but they are encoded differently and, therefore, treated differently by the model. They experiment on various MLaaS models and show that their attack can effect... | [
3,
3,
3
] | [
4,
3,
5
] | [
"iclr_2022_tHx6q2dM86s",
"iclr_2022_tHx6q2dM86s",
"iclr_2022_tHx6q2dM86s"
] |
iclr_2022_zRb7IWkTZAU | Zero-Shot Reward Specification via Grounded Natural Language | Reward signals in reinforcement learning can be expensive signals in many tasks and often require access to direct state. The alternative to reward signals are usually demonstrations or goal images which can be labor intensive to collect. Goal text description is a low effort way of communicating the desired task. Goal text conditioned policies so far though have been trained with reward signals that have access to state or labelled expert demonstrations. We devise a model that leverages CLIP to ground objects in a scene described by the goal text paired with spatial relationship rules to provide an off-the-shelf reward signal on only raw pixels to learn a set of robotic manipulation tasks. We distill the policies learned with this reward signal on several tasks to produce one goal text conditioned policy. | Reject | This manuscript describes a method that turns sentences into reward functions by recognizing objects, parsing sentences into a simple formalism, and then grounding the parse in the recognized objects to form a reward for an agent.
1. The title and much of the manuscript are written in a way that reviewers found confusing. It would seem from the title and most of the text that the method integrates language models, CLIP specifically, into RL in a novel way to provide zero-shot rewards. But this is not the case. CLIP is used purely as an object detector. Yes, the method requires a good object detector and CLIP provides that, but any good object detector that can handle arbitrary phrases would have done.
2. The overall setup of the work: extract the state of the world and then parse sentences to formulate rewards by grounding parts of the parse into parts of the world state has been explored widely in robotics. Reviewers provided citations going back several years, but many others exist.
I would encourage the authors to rewrite the manuscript around their central contributions and downgrade their use of CLIP and language models in general to a minor technical footnote. Similarly refocusing related work on the robotics literature and demonstrating how this approach differs and improves on the state of the art there could result in a strong contribution. | train | [
"N5cR9zhNGW4",
"TJMGwUArmaX",
"r5bCBV0tbJ",
"3ZfbwulrXN",
"Sf9J7hCAU5h",
"OsW_FKM_UuA",
"XRmBKWKTXYY",
"ageyidlqlb1"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the constructive feedback and glad they also find our defined problem statement “essential” and found our work “creative” and “performing well in domains considered from images”. We address their concerns in detail below. \n\n>*“How the method compares to goal image specifications”*\n- O... | [
-1,
-1,
-1,
-1,
6,
3,
3,
5
] | [
-1,
-1,
-1,
-1,
2,
4,
4,
4
] | [
"ageyidlqlb1",
"Sf9J7hCAU5h",
"XRmBKWKTXYY",
"OsW_FKM_UuA",
"iclr_2022_zRb7IWkTZAU",
"iclr_2022_zRb7IWkTZAU",
"iclr_2022_zRb7IWkTZAU",
"iclr_2022_zRb7IWkTZAU"
] |
iclr_2022_7AzOUBeajwl | Text Style Transfer with Confounders | Existing methods for style transfer operate either with paired sentences or distributionally matched corpora which differ only in the desired style. In this paper, we relax this restriction and consider data sources with additional confounding differences, from which the desired style needs to be inferred. Specifically, we first learn an invariant style classifier that takes out nuisance variation, and then introduce an orthogonal classifier that highlights the confounding cues. The resulting pair of classifiers guide us to transfer text in the specified direction, creating sentences of the type not seen during training. Experiments show that using positive and negative review datasets from different categories, we can successfully transfer the sentiment without changing the category. | Reject | This paper studies text style transfer which aims to edit a given sentence to possess a desired style value (e.g., positive sentiment) while keeping all other styles and content unchanged. The paper specifically focuses on a challenging setting where besides the target style (e.g., sentiment) to transfer, there exists confounding attributes (e.g., product category) that correlate with the target style, making it hard to change only the target style while preserving the other. The proposed approach is to learn an invariant/unbiased style classifier using Invariance Risk Minimization (IRM), together with an orthogonal classifier for monitoring style-independent changes (e.g., product category), to supervise the generator training. The main concerns are on the experiments -- it's suggested to include experiments on other styles besides sentiment; human evaluation and/or other metrics are needed for more convincing comparison; it's also encouraged to experiment with large language models (e.g., GPT-2, BART) besides the small LSTM/CNN networks as in the present work. | train | [
"SCSTLpscrkQ",
"bGqWUtIYGgf",
"qaGENmzI86_",
"KKbjD_55Y7",
"sI-cyUEC_l8",
"2qZRJfD9Aj1",
"RxddBwQsMK",
"xIF8InepgfR",
"T_fBdfPXXC",
"vEOd7O4nclI",
"ZToqdSpoVUT",
"UJ5M6sSC4WG"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for this clear answer. I understand you argumentation but I am still embarrassed by the lack of range of experiences: several multi-aspect datasets exist and would enable you to validate your framework on a wider range than sentiment.\nI get the argument on simple LSTM... But once again, I am convinced ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"RxddBwQsMK",
"qaGENmzI86_",
"sI-cyUEC_l8",
"T_fBdfPXXC",
"2qZRJfD9Aj1",
"UJ5M6sSC4WG",
"ZToqdSpoVUT",
"vEOd7O4nclI",
"iclr_2022_7AzOUBeajwl",
"iclr_2022_7AzOUBeajwl",
"iclr_2022_7AzOUBeajwl",
"iclr_2022_7AzOUBeajwl"
] |
iclr_2022_AB2r0YKBSpD | Data Scaling Laws in NMT: The Effect of Noise and Architecture | In this work, we empirically study the data scaling properties of neural machine translation (NMT). We first establish that the test loss of encoder-decoder transformer models scales as a power law in the number of training samples, with a dependence on the model size. We then systematically vary various aspects of the training setup to understand how they impact the data scaling laws. In particular, we change the (1) Architecture and task setup, to a Transformer-LSTM Hybrid as well as a Decoder-only transformer with language modeling loss (2) Noise level in the training distribution, starting with noisy data with filtering applied as well as clean data corrupted with synthetic iid noise. In all the above cases, we find that the data scaling exponents are minimally impacted, suggesting that marginally worse architectures or training data quality can be compensated for by adding more data. Lastly, we find that changing the training distribution to use back-translated data instead of parallel data, can impact the scaling exponent. | Reject | This paper analyzes the data scaling laws in NMT tasks with different network architectures and data qualities. The main purpose of this paper is to investigate how such different experimental setup affects the scaling law. The authors found that those difference does not have strong impact on the scaling exponent, and a small difference of model architecture and data noise can be compensated by larger data size.
This paper gives a nice justification of the data scaling law from several different aspects, which is instructive to some extent. On the other hand, the paper has some weaknesses, as listed in the following: (1) The scaling law itself has been analyzed by many papers, and its novelty is rather limited. I acknowledge that this paper investigates different aspects of the data scaling law and the size of the experiments is larger than in existing work. However, the result is rather unsurprising. (2) The experiments are conducted mostly on one language pair (English-to-German), so it is still unclear whether the findings are universal to other language pairs. As the authors responded, exhaustive experiments over all language pairs are unrealistic, but some more investigation of more general data sets could be conducted to strengthen the paper.
This paper is around the borderline. Some reviewers were rather positive about this paper. However, they also pointed out the concerns I listed above and do not show strong support for the paper.
In summary, although this paper shows some instructive findings, it is still a bit below the threshold of acceptance. | train | [
"RxeaCbMu2S_",
"IjdUohGWNe",
"JdRMwv8tWs5",
"SZvc_-1Z9km",
"GGQk2W6pnc8",
"N7QIzBTpJkL",
"mmJMj_z2Dd",
"Q5ExSrU4cRu",
"y2sx3E23GM",
"0-KKAAN7SPX",
"9fDBuwmQqdJ",
"l33mrMSVZkX"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The 26B sentence pairs refers to the 1.4TB 'Raw' data at the following link https://s3.amazonaws.com/web-language-models/paracrawl/release8/en-de.classified.gz . To obtain this, follow https://paracrawl.eu/v8 -> \"German\" -> click on the arrow mark next to 'TXT'. This gives a dropdown menu with for the raw data.... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
4
] | [
"IjdUohGWNe",
"N7QIzBTpJkL",
"l33mrMSVZkX",
"9fDBuwmQqdJ",
"0-KKAAN7SPX",
"y2sx3E23GM",
"Q5ExSrU4cRu",
"iclr_2022_AB2r0YKBSpD",
"iclr_2022_AB2r0YKBSpD",
"iclr_2022_AB2r0YKBSpD",
"iclr_2022_AB2r0YKBSpD",
"iclr_2022_AB2r0YKBSpD"
] |
iclr_2022_VjoSeYLAiZN | A NEW BACKBONE FOR HYPERSPECTRAL IMAGE RECONSTRUCTION | As the inverse process of snapshot compressive imaging, the hyperspectral image (HSI) reconstruction takes the 2D measurement as input and posteriorly retrieves the captured 3D spatial-spectral signal. Built upon several assumptions, numerous sophisticated neural networks have come to the fore in this task. Despite their prosperity under experimental settings, it's still extremely challenging for existing networks to achieve high-fidelity reconstructive quality while maximizing the reconstructive efficiency (computational efficiency and power occupation), which prohibits their further deployment in practical applications. In this paper, we firstly conduct a retrospective analysis on aforementioned assumptions, through which we indicate the imminent aspiration for an authentically practical-oriented network in reconstructive community. By analysing the effectiveness and limitations of the widely-used reconstructive backbone U-Net, we propose a Simple Reconstruction Network, namely SRN, just based on some popular techniques, e.g., scale/spectral-invariant learning and identity connection. It turns out, under current conditions, such a pragmatic solution outperforms existing reconstructive methods by an obvious margin and maximize the reconstructive efficiency concretely. We hope the proposed SRN can further contribute to the cutting-edge reconstructive methods as a promising backbone, and also benefit the realistic tasks, i.e., real-time/high-resolution HSI reconstruction, solely as a baseline.
| Reject | The paper proposes a new neural network architecture for hyperspectral image reconstruction. The paper received borderline/negative reviews. Significant concerns were raised about the novelty and significance of the contribution. Unfortunately, the authors did not upload a rebuttal, preventing the reviewers from changing their opinion about the paper. There is therefore no reason to overturn their recommendation. | test | [
"Ml3LpCFrJI",
"L2_MYOyBOQ",
"rAPpv_F919h",
"t6PlxjfkjK3"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Different from the previous works about Unet structure for HSI reconstruction, this paper introduces a new backbone, which is efficient and lightweight. The experiments also demosntrate that the proposed model can significantly outperfom the previous works in quantitative evaluation results. From the above summary... | [
6,
5,
3,
5
] | [
3,
3,
4,
4
] | [
"iclr_2022_VjoSeYLAiZN",
"iclr_2022_VjoSeYLAiZN",
"iclr_2022_VjoSeYLAiZN",
"iclr_2022_VjoSeYLAiZN"
] |
iclr_2022_MmujBClawFo | Attention: Self-Expression Is All You Need | Transformer models have achieved significant improvements in performance for various learning tasks in natural language processing and computer vision. Much of their success is attributed to the use of attention layers that capture long-range interactions among data tokens (such as words and image patches) via attention coefficients that are global and adapted to the input data at test time. In this paper we study the principles behind attention and its connections with prior art. Specifically, we show that attention builds upon a long history of prior work on manifold learning and image processing, including methods such as kernel-based regression, non-local means, locally linear embedding, subspace clustering and sparse coding. Notably, we show that self-attention is closely related to the notion of self-expressiveness in subspace clustering, wherein data points to be clustered are expressed as linear combinations of other points with global coefficients that are adapted to the data and capture long-range interactions among data points. We also show that heuristics in sparse self-attention can be studied in a more principled manner using prior literature on sparse coding and sparse subspace clustering. We thus conclude that the key innovations of attention mechanisms relative to prior art are the use of many learnable parameters, and multiple heads and layers. | Reject | This paper points out connections between the self-attention module in transformers and some prior art, including kernel regression, the non-local mean algorithm, locally linear embeddings, and the self-expression algorithm for subspace clustering. Based on these observations, the authors argue that the innovation of self-attention is not modeling the long-range relation, which is also proposed in prior work, but the learnable parameters and the multi-head design. The authors also suggest several directions for future work, such as using self-attention for manifold clustering.
Reviewers pointed out several weaknesses with this paper: that some connections (e.g. connection to kernel regression) had been pointed out before, that the relation between self-attention and locally linear embedding and self-expression in subspace clustering is a bit nuanced, as pointed out by one of the reviewers, and that while some speculative future directions might be interesting, the paper falls short in actually trying some of them out empirically, or building a proof-of-concept.
In the discussion period, the authors pointed out that this is a position paper (which unfortunately was not expressed so assertively in their submission), which according to their view liberates them from digging deeper and test empirically some of these connections and speculative directions. According to the authors, a core contribution of their position paper is that "it expresses the opinion that the original attention paper failed to cite and acknowledge that attention mechanisms build upon a series of prior works in sparse coding, subspace clustering, and locally linear embedding."
There are no specific guidelines to review position papers at ICLR that I know of, but I will base my assessment on the assumption that a good position paper should:
- provide a good historical perspective of a subject
- connect previously unrelated lines of work in non-obvious ways
- inspire the research community to look at new directions.
While a good position paper can be extremely valuable and enlightening, I am not convinced that this particular paper achieves any of the goals above, and therefore it is my opinion that it does not deserve publication at ICLR.
As pointed out both by the authors and the reviewers, the connection between self-attention and kernel regression and non-local mean denoising is not new, and so it is not an original contribution of this paper. The relation between self-attention and locally linear embedding and self-expression in subspace clustering appears to be new, but this relation is a bit nuanced, as pointed out by one of the reviewers.
The tone of this position paper is that some of these connections were missing in the original attention paper -- the authors say "attention did not properly acknowledge prior art" in one of their responses (it is not clear if they are referring to Bahdanau et al.'s attention paper or to Vaswani et al.'s transformer paper). However, the historical perspective of how attention mechanisms came to be seems to be missing from this position paper -- attention was proposed by Bahdanau et al. for machine translation, inspired by the idea of word alignment that has been prevalent in machine translation for decades. Later, in the transformer paper, self-attention was suggested as an alternative to recurrent and convolutional models for machine translation (note that self-attention had been used before the transformer paper, see e.g. [1]). While a theoretical connection with kernel regression etc. exists, this was not related to the original motivation of these works. There are many ways of arriving at the same construction! And given the simplicity of attention mechanisms it doesn't surprise me that connections with other lines of research exist. Had they been noticed, they would probably be a parenthesis in the original papers, because attention is derived there in a much more direct way (this doesn't mean that the connections aren't interesting, but that they are not _essential_ to the construction).
In their response, the authors dismissed a constructive suggestion from one of the reviewers which in my opinion would have strengthened this paper -- the connection with graph neural networks. If the point of the paper is to point out past research that connects fundamentally to the idea of attention mechanisms, why leave this out?
In sum, in my view this paper lacks the rigor, the insight, and the historical perspective that should characterize a strong position paper, and as such I cannot recommend acceptance. I strongly suggest that the authors take into account some of the insightful suggestions given by the reviewers in future iterations of their work.
[1] Ankur Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. A decomposable attention model. In Empirical Methods in Natural Language Processing, 2016. | test | [
"Lq5KkfWvc8C",
"Ht7y8Orn2pr",
"swCV-kTWEPq"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This review paper tries to study the principles behind attention and its connections with prior art using the self-expression methods (e.g., kernel-based regression, non-local means, locally linear embedding, subspace clustering). This is a review paper to discusses the relationships between the attentions and th... | [
5,
5,
5
] | [
4,
4,
3
] | [
"iclr_2022_MmujBClawFo",
"iclr_2022_MmujBClawFo",
"iclr_2022_MmujBClawFo"
] |
iclr_2022_qyzTEWWM0Pp | Multiresolution Equivariant Graph Variational Autoencoder | In this paper, we propose Multiresolution Equivariant Graph Variational Autoencoders (MGVAE), the first hierarchical generative model to learn and generate graphs in a multiresolution and equivariant manner. At each resolution level, MGVAE employs higher order message passing to encode the graph while learning to partition it into mutually exclusive clusters and coarsening into a lower resolution that eventually creates a hierarchy of latent distributions. MGVAE then constructs a hierarchical generative model to variationally decode into a hierarchy of coarsened graphs. Importantly, our proposed framework is end-to-end permutation equivariant with respect to node ordering. MGVAE achieves competitive results with several generative tasks including general graph generation, molecular generation, unsupervised molecular representation learning to predict molecular properties, link prediction on citation graphs, and graph-based image generation. | Reject | The paper proposes multiresolution and equivariant generative models. Experimental results for several applications are shown.
Pros:
- A first hierarchical generative model with multiresolution and equivariance.
- Extensive experiments
Cons:
- Marginal novelty (multiresolution and permutation equivariance are each not novel on their own for graph neural networks).
- State-of-the-art methods are not compared as baselines.
- Some standard metrics are not evaluated, and the metrics used are questionable (some generated molecules might not be stable although the chemical validity is 100%).
- Time/space complexity evaluation is missing.
The authors did not address some of the serious concerns in the rebuttal. | train | [
"tpqwvHukQf",
"hC1NalfgLuo",
"f7rdrQtqgDg",
"p37QPlIuOJs",
"N_RSt0LRb1t",
"aRqCm9dOHfS",
"AN_nfTKIXhW",
"vLPFrffKIXA",
"xsA119KswLW",
"08PwvY6FLy3",
"K8lLdgUTTpn",
"COtsfdWa7L",
"X62DGJ9-VW"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors present a variational-autoencoder-based model for learning and generating graph structures. In particular, they propose a multiresolution graph network (MGN) that encodes a given graph in a hierarchical manner, i.e., at different levels of resolution. \nTraining the nodes of the coarsened graphs as lat... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3,
5,
5,
5
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4,
4
] | [
"iclr_2022_qyzTEWWM0Pp",
"K8lLdgUTTpn",
"COtsfdWa7L",
"tpqwvHukQf",
"aRqCm9dOHfS",
"X62DGJ9-VW",
"08PwvY6FLy3",
"xsA119KswLW",
"iclr_2022_qyzTEWWM0Pp",
"iclr_2022_qyzTEWWM0Pp",
"iclr_2022_qyzTEWWM0Pp",
"iclr_2022_qyzTEWWM0Pp",
"iclr_2022_qyzTEWWM0Pp"
] |
iclr_2022_LM17I_oVVPB | A Simple Reward-free Approach to Constrained Reinforcement Learning | In constrained reinforcement learning (RL), a learning agent seeks to not only optimize the overall reward but also satisfy the additional safety, diversity, or budget constraints. Consequently, existing constrained RL solutions require several new algorithmic ingredients that are notably different from standard RL. On the other hand, reward-free RL is independently developed in the unconstrained literature, which learns the transition dynamics without using the reward information, and thus naturally capable of addressing RL with multiple objectives under the common dynamics. This paper bridges reward-free RL and constrained RL. Particularly, we propose a simple meta-algorithm such that given any reward-free RL oracle, the approachability and constrained RL problems can be directly solved with negligible overheads in sample complexity. Utilizing the existing reward-free RL solvers, our framework provides sharp sample complexity results for constrained RL in the tabular MDP setting, matching the best existing results up to a factor of horizon dependence; our framework directly extends to a setting of tabular two-player Markov games, and gives a new result for constrained RL with linear function approximation. | Reject | The paper presents a general solution method for constrained RL problems using reward-free exploration. While the reviewers found this reduction interesting in general, they had concerns about the price of this reduction in general (such as the increased regret or for suboptimal dependence of the bounds on some problem parameters), which is to be paid in exchange for the simplicity and flexibility of the proposed approach. This, coupled with the limited technical novelty used in the derivations, made all reviewers think that this is a borderline paper, and I also agree with this assessment. The paper could benefit a lot from presenting more evidence of the benefits of their approach (either theoretically or empirically). Based on the above, unfortunately, I am not able to recommend acceptance at this point. | train | [
"taX9L6f1jdj",
"Y2Leen8eHq",
"fvwqbIS8up_",
"qY2wGYOQ5eL",
"qK8VGeyNxYk",
"5-zD5RbJYSo",
"3aI8Q2gFyMd",
"QI8_5UeuH9c"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the answers. I'm willing to increase my score to 6, but I still believe the paper stands in a borderline position.",
" I would like to thank the authors for the clarifications. However, I think that my original score is appropriate for this version of the paper.",
" We would like to thank Review... | [
-1,
-1,
-1,
-1,
-1,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"qY2wGYOQ5eL",
"fvwqbIS8up_",
"QI8_5UeuH9c",
"3aI8Q2gFyMd",
"5-zD5RbJYSo",
"iclr_2022_LM17I_oVVPB",
"iclr_2022_LM17I_oVVPB",
"iclr_2022_LM17I_oVVPB"
] |
iclr_2022_LtI14EpWKH | Tessellated 2D Convolution Networks: A Robust Defence against Adversarial Attacks | Data-driven (deep) learning approaches for image classification are prone to adversarial attacks. This means that an adversarial crafted image which is sufficiently close (visually indistinguishable) to its representative class can often be misclassified to be a member of a different class. A reason why deep neural approaches exhibits such vulnerability towards adversarial threats is mainly because the abstract representations learned in a data-driven manner often do not correlate well with human perceived features. To mitigate this problem, we propose the tessellated 2d convolution network, a novel divide-and-conquer based approach, which first independently learns the abstract representations of non-overlapping regions within an image, and then learns how to combine these representations to infer its class. It turns out that a non-uniform tiling of an image which ensures that the difference between the maximum and the minimum region sizes is not too large is the most robust way to construct such a tessellated 2d convolution network. This criterion can be achieved, among other schemes, by using a Mondrian tessellation of the input image. Our experiments demonstrate that our tessellated networks provides a more robust defence mechanism against gradient-based adversarial attacks in comparison to conventional deep neural models. | Reject | This paper proposes an image tesselation scheme to improve the robustness of image classifiers. The reviewers agree that the method is simple and intuitive, and view this as a positive attribute. At the same time, the reviewers want to see if the method works on higher resolution images. It was also not clear to reviewers how the attacks on the method were constructed, whether they were white box, and whether they were adaptive. Without a rebuttal, these questions remain unanswered. | train | [
"Inh6GdcVXWd",
"OjSpNvja8vH",
"tC1vkrY-fL",
"H0JrfuuklW"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents a tessellated convolution network that is more robust to adversarial attacks. In the paper, authors proposed three different ways of tiling. Experiments show this method is more robust to FGSM and PGD for classification tasks on fashion MNIST and CIFAR10.\n The idea of this paper is simple and ... | [
5,
3,
3,
5
] | [
3,
4,
4,
5
] | [
"iclr_2022_LtI14EpWKH",
"iclr_2022_LtI14EpWKH",
"iclr_2022_LtI14EpWKH",
"iclr_2022_LtI14EpWKH"
] |
iclr_2022_X3WxnuzAYyE | PKCAM: Previous Knowledge Channel Attention Module | Attention mechanisms have been explored with CNNs, both across the spatial and channel dimensions.
However, all the existing methods devote the attention modules to capturing local interactions from the current feature map only, disregarding the valuable previous knowledge that is acquired by the earlier layers.
This paper tackles the following question: Can one incorporate previous knowledge aggregation while learning channel attention more efficiently? To this end, we propose a Previous Knowledge Channel Attention Module (PKCAM) that captures channel-wise relations across different layers to model the global context.
Our proposed module PKCAM is easily integrated into any feed-forward CNN architectures and trained in an end-to-end fashion with a negligible footprint due to its lightweight property. We validate our novel architecture through extensive experiments on image classification and object detection tasks with different backbones.
Our experiments show consistent improvements in performance over their counterparts. We also conduct experiments that probe the robustness of the learned representations. | Reject | This paper computes channel attention by considering feature maps across different layers, and names it the previous knowledge channel attention module (PKCAM). The reviewers find the proposed idea too straightforward and naive. Lack of technical contribution is one of the major criticisms. There are also correctness concerns with the submission. The authors have not provided any rebuttal.
We recommend rejecting the paper. | train | [
"iAxDsbcJZXQ",
"Krjnv1xYj2X",
"JnE2ujxRtn",
"t243j0nI1G"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This work proposes a previous knowledge channel attention module (PKCAM) that captures channel relations across the different layers to help enhance feature representation. The module can be integrated into the current ResNet series and show reasonable performance improvement over the baseline network on some benc... | [
3,
3,
5,
3
] | [
4,
5,
4,
4
] | [
"iclr_2022_X3WxnuzAYyE",
"iclr_2022_X3WxnuzAYyE",
"iclr_2022_X3WxnuzAYyE",
"iclr_2022_X3WxnuzAYyE"
] |
iclr_2022_GVDwiINkMR | Picking Daisies in Private: Federated Learning from Small Datasets | Federated learning allows multiple parties to collaboratively train a joint model without sharing local data. This enables applications of machine learning in settings of inherently distributed, undisclosable data such as in the medical domain. In practice, joint training is usually achieved by aggregating local models, for which local training objectives have to be in expectation similar to the joint (global) objective. Often, however, local datasets are so small that local objectives differ greatly from the global objective, resulting in federated learning to fail. We propose a novel approach that intertwines model aggregations with permutations of local models. The permutations expose each local model to a daisy chain of local datasets resulting in more efficient training in data-sparse domains. This enables training on extremely small local datasets, such as patient data across hospitals, while retaining the training efficiency and privacy benefits of federated learning. | Reject | The reviewers were not convinced by the authors' responses to their concerns, and this paper generated little followup discussion. Some primary concerns include the privacy analysis, limited technical contribution and scope (e.g., only being applicable to iid data), and lacking comparison to suggested baselines. The authors are suggested to take the reviewer comments into account for further investigation. | train | [
"LD40Y0cEji",
"cA6t1za11vv",
"MsQoSggTmhH",
"BMmqgbDHal"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper presents a new training procedure for federated learning (FL) systems based on a daisy-chain network. Training the system has two phases, a daisy-chain phase in which models are transmitted from one client to another via a coordinator node, and the standard aggregation phase in which models are averaged ... | [
5,
3,
3,
5
] | [
4,
3,
3,
4
] | [
"iclr_2022_GVDwiINkMR",
"iclr_2022_GVDwiINkMR",
"iclr_2022_GVDwiINkMR",
"iclr_2022_GVDwiINkMR"
] |
iclr_2022_xWRX16GCugt | Sequoia: A Software Framework to Unify Continual Learning Research | The field of Continual Learning (CL) seeks to develop algorithms that accumulate knowledge and skills over time through interaction with non-stationary environments. In practice, a plethora of evaluation procedures (settings) and algorithmic solutions (methods) exist, each with their own potentially disjoint set of assumptions. This variety makes measuring progress in CL difficult. We propose a taxonomy of settings, where each setting is described as a set of assumptions. A tree-shaped hierarchy emerges from this view, where more general settings become the parents of those with more restrictive assumptions. This makes it possible to use inheritance to share and reuse research, as developing a method for a given setting also makes it directly applicable onto any of its children. We instantiate this idea as a publicly available software framework called Sequoia, which features a wide variety of settings from both the Continual Supervised Learning (CSL) and Continual Reinforcement Learning (CRL) domains. Sequoia also includes a growing suite of methods which are easy to extend and customize, in addition to more specialized methods from external libraries. We hope that this new paradigm and its first implementation can help unify and accelerate research in CL. You can help us grow the tree by visiting (this GitHub URL). | Reject | The manuscript introduces a taxonomy for organizing continual learning research settings and a software framework that realizes this taxonomy. Each continual learning setting is represented by as a set of shared assumptions (e.g., are task IDs observed or not) represented in a hierarchy, and the software is introduced with the hopes of unifying continual learning research.
The manuscript identifies a clear issue in the field: settings and methods for continual learning have proliferated so that there is little coherence in benchmarks, making progress difficult to judge. Reviewers generally agreed that the motivation of building software to help unify continual learning research was a positive.
However, reviewers also pointed to many concerns with the manuscript and software package (Sequoia) that comprises its main contribution. In particular, there is concern that the software is at an early stage of development and makes heavy use of existing libraries to function (e.g. Avalanche and Continuum). This makes it unclear what Sequoia offers over using its dependencies directly. As well, there is concern that multiple standard benchmark tasks and common methods are missing from the implementation — particularly for large scale experiments with, e.g. ImageNet-1k. In theory, the library allows extension and these might be implemented by others in the community. However, this would require that the original manuscript+software are strong enough to draw buy-in from other researchers.
In sum, the manuscript+software does not yet offer a convincing starting point for researchers looking to begin their continual learning research.
"qPZdMSiRaMq",
"GufV25E1dTy",
"I7UqQe35yGU",
"-QISKIbXYal",
"Un4Pe7yblf",
"VpaakCcBRF8",
"eXjzyBvxUq7",
"5dReghiwcq_",
"NnSrSJcvHP2",
"lcYlYv9tU1S",
"DvjgSVgmKBi",
"UwjAKsTbSl"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the clarification. I will give only a brief answer since I think we both made our points clear to each other.\n\n**Tree vs Lattice**: Of course I agree with you, there is a tradeoff between the representation of actual problem's constraints, programming language features, and usability for the end user... | [
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
1,
3,
5
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"eXjzyBvxUq7",
"I7UqQe35yGU",
"iclr_2022_xWRX16GCugt",
"iclr_2022_xWRX16GCugt",
"5dReghiwcq_",
"UwjAKsTbSl",
"DvjgSVgmKBi",
"-QISKIbXYal",
"lcYlYv9tU1S",
"iclr_2022_xWRX16GCugt",
"iclr_2022_xWRX16GCugt",
"iclr_2022_xWRX16GCugt"
] |
iclr_2022_an_ndI09oVZ | Deep banach space kernels | The recent success of deep learning has encouraged many researchers to explore the deep/concatenated variants of classical kernel methods. Some of these include MLMKL, DGP and DKL. Although these methods have proven to be quite useful in various real-world settings, they still suffer from the limitations of only utilizing kernels from Hilbert spaces. In this paper, we address these shortcomings by introducing a new class of concatenated kernel learning methods that use the kernels from the reproducing kernel Banach spaces (RKBSs). These spaces turn out to be among the most general spaces where a reproducing kernel exists. We propose a framework of construction for these Deep RKBS models and then provide a representer theorem for regularized learning problems. We also describe the relationship with its deep RKHS variant as well as standard Deep Gaussian Processes. In the end, we construct and implement a two-layer deep RKBS model and demonstrate it on a range of machine learning tasks. | Reject | The paper develops kernel functions in Banach spaces. However, the results seem to be preliminary and further development is needed before
the manuscript can be published. Reviewers point out several errors, and the authors have graciously agreed with the suggestion
that they will incorporate all the feedback in future submissions. | train | [
"aPFTobiH9Qh",
"M7PZ2k0ddP",
"XscGfGlCaxH"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proves a Representer Theorem for the composition of functions from Reproducing Kernel Banach Spaces (RKBSs). This submission looks more like a working draft rather than a conference paper. In particular,\n\n**There are many typos**, making the reading very difficult (I am highlighting a few of them, onl... | [
1,
3,
3
] | [
5,
4,
3
] | [
"iclr_2022_an_ndI09oVZ",
"iclr_2022_an_ndI09oVZ",
"iclr_2022_an_ndI09oVZ"
] |
iclr_2022_aM7l2S2s5pk | Offline-Online Reinforcement Learning: Extending Batch and Online RL | Batch RL has seen a surge in popularity and is applicable in many practical scenarios where past data is available. Unfortunately, the performance of batch RL agents is limited in both theory and practice without strong assumptions on the data-collection process, e.g. sufficient coverage or a good policy. To enable better performance, we investigate the offline-online setting: The agent has access to a batch of data to train on but is also allowed to learn during the evaluation phase in an online manner. This is an extension to batch RL, allowing the agent to adapt to new situations without having to precommit to a policy. In our experiments, we find that agents trained in an offline-online manner can outperform agents trained only offline or online, sometimes by a large margin, for different dataset sizes and data-collection policies. Furthermore, we investigate the use of optimism vs. pessimism for value functions in the offline-online setting due to their use in batch and online RL. | Reject | It seems that the reviewers reached a consensus that the paper is not ready for publication at ICLR. The reviewers raised concerns including “The empirical observations are not supported by theoretical analysis”, “The proposed algorithm is a simple modification to an existing algorithm”, concerns “with the novelty of the paper”, and “The message of the paper is not new.” Please see the reviews for more detailed discussions about the paper. | train | [
"THgQ0MGfe0",
"INdHtkIpXHJ",
"aId2qt_c6ed",
"7NwXGIAi63F"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"In this paper, the authors explore the offline-online setting for RL, where the agent has access to a batch of data to train on but is also allowed to learn during the evaluation phase in an online manner. Compared to the offline setting, the agent is now allowed to take online samples during the evaluation phase.... | [
5,
5,
3,
3
] | [
3,
3,
5,
3
] | [
"iclr_2022_aM7l2S2s5pk",
"iclr_2022_aM7l2S2s5pk",
"iclr_2022_aM7l2S2s5pk",
"iclr_2022_aM7l2S2s5pk"
] |
iclr_2022_IptBMO1AR5g | Regularizing Deep Neural Networks with Stochastic Estimators of Hessian Trace | In this paper we develop a novel regularization method for deep neural networks by penalizing the trace of the Hessian. This regularizer is motivated by a recent guarantee bound of the generalization error. The Hutchinson method is a classical unbiased estimator for the trace of a matrix, but it is very time-consuming on deep learning models. Hence a dropout scheme is proposed to efficiently implement the Hutchinson method. Then we discuss a connection to linear stability of a nonlinear dynamical system. Experiments demonstrate that our method outperforms existing regularizers such as Jacobian, confidence penalty, and label smoothing. Our regularization method is also orthogonal to data augmentation methods, achieving the best performance when our method is combined with data augmentation. | Reject | This paper regularizes deep neural networks via the Hessian trace. The algorithm is based on Hutchinson’s method, further accelerated via dropout. Connection to the linear stability of a dynamical system is discussed. The proposed regularization performs favorably in the experimental results.
The idea of the method is clear. The paper’s writing needs a lot of improvement because there are a number of grammatical errors. The major technical concerns include: a) the experimental results are still not convincing; b) the explanation of favoring instability in the dynamical system that resorts to overfitting prevention (reviewer GDik). I’ve read the rebuttal, but remain unconvinced. | train | [
"cBlIOts7FyZ",
"jiU_tQdL0rS",
"MWJP7p5CsO9",
"jhQzK5xU5x",
"pmsQvqCDMYM",
"WJbYFJBlnwd",
"1q7UKr16Gj",
"0b0U0zuG-u",
"R24eZCmqhA-",
"G0zUtAAM6Ua",
"b5EBP-PgpsB",
"qUhS-ehLPe",
"w9klzMtlzTq",
"g9GveEhBHDp"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Our newly tested result on CIFAR-10 is below.\n\n|Method|Test Accuracy|\n|-|-|\n|Baseline with Weight-Decay|94.00 ($\\pm$0.47)|\n|Jacobian|89.23 ($\\pm$1.02)|\n|DropBlock|89.23 ($\\pm$0.44)|\n|Sankar’s Method for Full Network|88.05 ($\\pm$0.22)|\n|Sankar’s Method for Middle Network|88.13 ($\\pm$0.12)|\n|Confidenc... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
3,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
3
] | [
"1q7UKr16Gj",
"pmsQvqCDMYM",
"R24eZCmqhA-",
"WJbYFJBlnwd",
"g9GveEhBHDp",
"w9klzMtlzTq",
"0b0U0zuG-u",
"qUhS-ehLPe",
"G0zUtAAM6Ua",
"b5EBP-PgpsB",
"iclr_2022_IptBMO1AR5g",
"iclr_2022_IptBMO1AR5g",
"iclr_2022_IptBMO1AR5g",
"iclr_2022_IptBMO1AR5g"
] |
iclr_2022_SVwbKmEg7M | Unsupervised Neural Machine Translation with Generative Language Models Only | We show how to derive state-of-the-art unsupervised neural machine translation systems from generatively pre-trained language models. Our method consists of three steps: \emph{few-shot amplification}, \emph{distillation}, and \emph{backtranslation}. We first use the zero-shot translation ability of large pretrained language models to generate translations for a small set of unlabeled sentences. We then amplify these zero-shot translations by using them as few-shot demonstrations for sampling a larger synthetic dataset. This dataset is then distilled by discarding the few-shot demonstrations and then fine-tuning. During backtranslation, we repeatedly generate translations for a set of inputs and then fine-tune a single language model on both directions of the translation task at once, ensuring cycle-consistency by swapping the roles of gold monotext and generated translations when fine-tuning. By using our method to leverage GPT-3's zero-shot translation capability, we achieve a new state-of-the-art in unsupervised translation on the WMT14 English-French benchmark, attaining a BLEU score of 42.1. | Reject | This paper proposes an alternative method to improve UNMT by using only a pre-trained generative language model to bootstrap the process. In all, the reviewers think the proposed method is reasonable.
However, the empirical part is not convincing. Most reviewers think that evaluating the method on one language pair (En-Fr) is not enough to show the effect of the proposed method. In addition, some reviewers question the clarity of this paper.
In all, I think the proposed method is meaningful. However, the current version is not ready to be published at ICLR. I hope the authors can improve their paper according to the reviews.
"lXAQsslW4sk",
"yrypSSCF0qD",
"GrmKZov3yp",
"01G9zS0Cv_1",
"1TptUe25wos",
"OhtAztCLsJ",
"XCc4qkWo-KU",
"ENNOynusDoP"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your review.\n\n> Using such a language model to boot-strap self-training (called distillation in this paper) followed by back-translation is interesting, but should be grounded in techniques that have access to the same kind of large and diverse dataset. One clear example is bi-text mining (e.g. se... | [
-1,
-1,
-1,
-1,
5,
5,
6,
3
] | [
-1,
-1,
-1,
-1,
3,
3,
4,
5
] | [
"1TptUe25wos",
"ENNOynusDoP",
"XCc4qkWo-KU",
"OhtAztCLsJ",
"iclr_2022_SVwbKmEg7M",
"iclr_2022_SVwbKmEg7M",
"iclr_2022_SVwbKmEg7M",
"iclr_2022_SVwbKmEg7M"
] |
iclr_2022_FpnQMmnsE8Y | Recurrent Parameter Generators | We present a generic method for recurrently using the same parameters for many different convolution layers to build a deep network. Specifically, for a network, we create a recurrent parameter generator (RPG), from which the parameters of each convolution layer are generated. Though using recurrent models to build a deep convolutional neural network (CNN) is not entirely new, our method achieves significant performance gain compared to the existing works. We demonstrate how to build a one-layer-size neural network to achieve similar performance compared to other traditional CNN models on various applications and datasets. We use the RPG to build a ResNet18 network with the number of weights equivalent to one convolutional layer of a conventional ResNet and show this model can achieve $67.2\%$ ImageNet top-1 accuracy. Additionally, such a method allows us to build an arbitrarily complex neural network with any amount of parameters. For example, we build a ResNet34 with model parameters reduced by more than $400$ times, which still achieves $41.6\%$ ImageNet top-1 accuracy. Furthermore, the RPG can be further pruned and quantized for better run-time performance in addition to the model size reduction. We provide a new perspective for model compression. Rather than shrinking parameters from a large model, RPG sets a certain parameter-size constraint and uses the gradient descent algorithm to automatically find the best model under the constraint. Extensive experiment results are provided to demonstrate the power of the proposed recurrent parameter generator.
| Reject | Meta Review for Recurrent Parameter Generators
This work investigates a method for reducing the parameters of a deep CNN by having a recurrent parameter generator (RPG) produce the weights, in effect achieving this compression via parameter sharing across layers (similar to earlier works, such as the 2016 Hypernetworks paper, as discussed between reviewer xUeP and the authors during the review period). But unlike previous work, this work conducts extensive empirical experiments on classification and even pose estimation tasks, and proposes additional techniques, such as the use of a pseudo-random seed to perform element-wise random sign reflection in the weight sharing. The novelty and experimental results are clearly displayed in this work and show a lot of promise, but after much discussion, I currently cannot recommend acceptance for ICLR 2022.
In my assessment, and also looking at reviewers and discussion, I believe this work is a great workshop paper at present, but there are a few items that would make it much stronger. There are outstanding issues in the paper that need to be improved. In particular, during discussions, reviewers noted that the paper has a problem with the design and presentation of the experiments. It somehow shifts the reader’s focus to the compression task (3 of the 4 reviewers raised concerns about the compression performance and questioned the baselines). In their rebuttal, the authors emphasized that their contribution is not limited to compression but is rather more fundamental, and the authors propose an approach for understanding the relationship between the model DoF and the network performance. But if that's the main narrative of the paper, rather than the compression aspects, the authors need to clearly articulate why decoupling the DoF from the underlying architecture is advantageous (and also make the narrative more clear in the writing). While there are novel innovations in the method proposed, the authors also need to explain clearly why their method works well, why the even weight assignment and random sign flipping are so effective?
There is discussion between the authors and reviewers about what constitutes vector quantization, and I believe the authors have clarified their position effectively (with regard to cgCS's review), and I believe this will be explained with great clarity in future revisions. But even with that disagreement out of the way, we still believe that this work needs improvement to meet the bar of ICLR 2022. Reviewers, including myself, do acknowledge the novelty and are excited about the method proposed, and we look forward to seeing an updated version of this work published or presented at a future journal or conference. Good luck!
"WzF5CyjGI67",
"cGnhFapPw5T",
"nQe9QwQ54H",
"BJLElRWLygz",
"cG2i1V0Df28",
"9UAvzFLBT6-",
"NYeCHiOKCZ5",
"PcKSG2ZSM9",
"IfNKvRJ_LDE",
"WKxoN1hy0CY",
"XerUWoASwh",
"YUyb7h-PJA",
"1wnWbl8bsUd",
"d2CH1HmzbZn",
"OrNQtglgpHb",
"919O9uHX2X8",
"hGYCB_bEzZ7",
"aWXyldRXj0U"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" \nDear authors, thank you for your sincere answer. Your honest response about the inference time is in line with my expectations. In other words, the proposed method does not help reduce computational complexity. All in all, this paper is innovative, although I pay more attention to practicality. If AC and other ... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
-1,
-1,
-1,
3,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
"cGnhFapPw5T",
"9UAvzFLBT6-",
"PcKSG2ZSM9",
"iclr_2022_FpnQMmnsE8Y",
"IfNKvRJ_LDE",
"hGYCB_bEzZ7",
"PcKSG2ZSM9",
"iclr_2022_FpnQMmnsE8Y",
"OrNQtglgpHb",
"YUyb7h-PJA",
"aWXyldRXj0U",
"XerUWoASwh",
"d2CH1HmzbZn",
"hGYCB_bEzZ7",
"BJLElRWLygz",
"iclr_2022_FpnQMmnsE8Y",
"iclr_2022_FpnQMmn... |
iclr_2022_keQjAwuC7j- | Two Birds, One Stone: Achieving both Differential Privacy and Certified Robustness for Pre-trained Classifiers via Input Perturbation | Recent studies have shown that pre-trained classifiers are increasingly powerful to improve the performance on different tasks, e.g, neural language processing, image classification. However, adversarial examples from attackers can trick pre-trained classifiers to misclassify. To solve this challenge, a reconstruction network is built before the public pre-trained classifiers to offer certified robustness and defend against adversarial examples through input perturbation. On the other hand, the reconstruction network requires training on the dataset, which incurs privacy leakage of training data through inference attacks. To prevent this leakage, differential privacy (DP) is applied to offer a provable privacy guarantee on training data through gradient perturbation. Most existing works employ certified robustness and DP independently and fail to exploit the fact that input perturbation designed to achieve certified robustness can achieve (partial) DP. In this paper, we propose perturbation transformation to show how the input perturbation designed for certified robustness can be transformed into gradient perturbation during training. We propose Multivariate Gaussian mechanism to analyze the privacy guarantee of this transformed gradient perturbation and precisely quantify the level of DP achieved by input perturbation. To satisfy the overall DP requirement, we add additional gradient perturbation during training and propose Mixed Multivariate Gaussian Analysis to analyze the privacy guarantee provided by the transformed gradient perturbation and additional gradient perturbation. Moreover, we prove that Mixed Multivariate Gaussian Analysis can work with moments accountant to provide a tight DP estimation. Extensive experiments on benchmark datasets show that our framework significantly outperforms state-of-the-art methods and achieves better accuracy and robustness under the same privacy guarantee. | Reject | This paper develops a technique to provide both privacy and robustness
at the same time using differential privacy.
Unfortunately the paper in its current form does not have meaningfully
interpretable security or privacy claims. The reviewers point at a number
of these flaws that the authors do not address to the satisfaction of
the reviewers, but there are a few others as well.
- What is actually private, at the end of this whole procedure? If the
actual "pretrained classifier" is not made private, then what's the
purpose of the entire privacy setup in this paper? Why does the denoiser
need to be private if the classifier isn't?
- The proof of Lemma 1 appears incorrect. The proof in Appendix E says that
Equation 10 is true, but this sweeps all of the remaining Taylor series
terms under the rug and doesn't deal with them. How are they handled?
- In Figure 4(a), what does it even mean to have a "FGSM privacy budget
epsilon"? Or a "MIM privacy budget epsilon"? A privacy budget is almost
always something defined with respect to the *training data privacy*,
how does this relate to the attack in this paper?
- How does this paper compare to prior *canonical* defenses, both on the
robustness and privacy side? In particular, comparisons to adversarial
training on the robustness side, and some recent DPSGD result on the
privacy side? | train | [
"iMl9aoO2KR",
"a6Es0cUgVY5",
"KWqlkdEZbDc",
"iqZyJiNqCLx",
"89UXw6dcTWr",
"Pxn02yTzuhF",
"lRS68dVcE4Z",
"5X8BBH8mOn"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for your review and comments. We have carefully read through these comments and have explanations as follows.\n\nFor weaknesses:\n\n1.1 Although x_(i)^non and z_(i)^non are two random vectors, the input perturbation, z_(i)^non – x_(i)^non, is deterministic for each batch given input perturbati... | [
-1,
-1,
-1,
-1,
5,
6,
3,
3
] | [
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"5X8BBH8mOn",
"lRS68dVcE4Z",
"Pxn02yTzuhF",
"89UXw6dcTWr",
"iclr_2022_keQjAwuC7j-",
"iclr_2022_keQjAwuC7j-",
"iclr_2022_keQjAwuC7j-",
"iclr_2022_keQjAwuC7j-"
] |
iclr_2022_CxebB5Psl1 | Graph Similarities and Dual Approach for Sequential Text-to-Image Retrieval | Sequential text-to-image retrieval, a.k.a. Story-to-images task, requires semantic alignment with a given story and maintaining global coherence in drawn image sequence simultaneously. Most of the previous works have only focused on modeling how to follow the content of a given story faithfully. This kind of overfitting tendency hinders matching structural similarity between images, causing an inconsistency in global visual information such as backgrounds. To handle this imbalanced problem, we propose a novel image sequence retrieval framework that utilizes scene graph similarities of the images and a dual learning scheme. Scene graph describes high-level information of visual groundings and adjacency relations of the key entities in a visual scene. In our proposed retriever, the graph encoding head learns to maximize graph embedding similarities among sampled images, giving a strong signal that forces the retriever to also consider morphological relevance with previously sampled images. We set a video captioning as a dual learning task that reconstructs the input story from the sampled image sequence. This inverse mapping gives informative feedback for our proposed retrieval system to maintain global contextual information of a given story. We also suggest a new contextual sentence encoding architecture to embed a sentence in consideration of the surrounding context. Through extensive experiments, Our proposed framework shows better qualitative and quantitative performance with Visual Storytelling benchmark compared to conventional story-to-image models. | Reject | Reviewers unanimously vote for rejection for several reasons. First, the draft is incomplete and difficult to read. Second, one of the proposed methods (contextual sentence encoder) appears the same as past work, while the other proposed method (graph encoding) is difficult to interpret from what is written. Third, the draft is missing comparisons with recent work, and some included comparisons may be unfair due to data conditions. No author response was provided. The reviewer consensus is that this draft is underdeveloped, and not yet ready for submission or publication. | test | [
"jgNa8_4-yvt",
"FKxe3gnf9Zg",
"bMBlXEWrt8v",
"9KOaY54ka3H"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I have reviewed all other reviews and keep my original score.",
"The authors propose a method for a sequence-to-sequence retrieval task. The idea of the task is given a set of sentences describing steps in a coherent story, the model needs to retrieve a single image per each sentence to illustrate that story. U... | [
-1,
1,
3,
5
] | [
-1,
5,
5,
3
] | [
"FKxe3gnf9Zg",
"iclr_2022_CxebB5Psl1",
"iclr_2022_CxebB5Psl1",
"iclr_2022_CxebB5Psl1"
] |
iclr_2022_WZR7ckBkzPY | Variational Wasserstein gradient flow | The gradient flow of a function over the space of probability densities with respect to the Wasserstein metric often exhibits nice properties and has been utilized in several machine learning applications. The standard approach to compute the Wasserstein gradient flow is the finite difference which discretizes the underlying space over a grid, and is not scalable. In this work, we propose a scalable proximal gradient type algorithm for Wasserstein gradient flow. The key of our method is a variational formulation of the objective function, which makes it possible to realize the JKO proximal map through a primal-dual optimization. This primal-dual problem can be efficiently solved by alternatively updating the parameters in the inner and outer loops. Our framework covers all the classical Wasserstein gradient flows including the heat equation and the porous medium equation. We demonstrate the performance and scalability of our algorithm with several numerical examples. | Reject | The paper proposes a min/max reformulation for JKO gradient flows appealing the variational formulation of f-divergences. This would alleviate the need of an explicit density. All reviewers pointed out the limited novelty in the work and the limited experimentation.
We encourage the authors to add a theoretical analysis to their work, to further strengthen the experimental section with high-dimensional experiments, and to resubmit the work at an upcoming venue.
"tn7cbxtkQR_",
"ypxjBhd3d75",
"3pt3mneiEV0",
"2Ow4C1h62m8",
"IfbHE9O91_W",
"7JcBbeRRQe2",
"zYGuvjjK9mb",
"Wl5ufYQpO6k"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your reply.\n\nI believe that the points raised in the comments need to be improved/fixed before publication. Hence, the paper is not ready yet and needs to be revised.\n\n",
" We would like to thank the reviewers for the comments and suggestions. We would modify the paper according to the comments.\... | [
-1,
-1,
-1,
-1,
5,
3,
3,
3
] | [
-1,
-1,
-1,
-1,
4,
4,
4,
5
] | [
"2Ow4C1h62m8",
"iclr_2022_WZR7ckBkzPY",
"Wl5ufYQpO6k",
"7JcBbeRRQe2",
"iclr_2022_WZR7ckBkzPY",
"iclr_2022_WZR7ckBkzPY",
"iclr_2022_WZR7ckBkzPY",
"iclr_2022_WZR7ckBkzPY"
] |
iclr_2022_FxBdFwFjXX | Multi-Task Distribution Learning | Multi-Task Learning describes training on multiple tasks simultaneously to leverage the shared information between tasks. Tasks are typically defined as alternative ways to label data. Given an image of a face, a model could either classify the presence of sunglasses, or the presence of facial hair. This example highlights how the same input image can be posed as two separate binary classification problems. We present Multi-Task Distribution Learning, highlighting the similarities between Multi-Task Learning and preparing for Distribution Shift. Even with rapid advances in large-scale models, a Multi-Task Learner that is trained with object detection will outperform zero-shot inference on object detection. Similarly, we show how training with a data distribution aids with performance on that data distribution. We begin our experiments with a pairing of distribution tasks. We then show that this scales to optimizing 10 distribution tasks simultaneously. We further perform a task grouping analysis to see which augmentations train well together and which do not. Multi-Task Distribution Learning highlights the similarities between Distribution Shift and Zero-Shot task inference. These experiments will continue to improve with advances in generative modeling that enables simulating more interesting distribution shifts outside of standard augmentations. In addition, we discuss how the WILDS benchmark of Domain Generalizations and Subpopulation Shifts will aid in future work. Utilizing the prior knowledge of data augmentation and understanding multi-task interference is a promising direction to understand the phenomenon of Distribution Shift. To facilitate reproduction, we are open-sourcing code, leaderboards, and experimental data upon publication. | Reject | Though some concepts discussed in the submission are interesting, there are many major concerns: there is a lack of literature review, comparison experiments with the state-of-the-art methods are missing, the technical novelty of the proposed method is very limited.
In the rebuttal, the authors agreed with reviewers' comments and did not provide responses to address reviewers' concerns.
Therefore, based on its current form, this submission does not meet the standard of publication at ICLR. | train | [
"KIT7okUBibu",
"T9u-Ps6XrN",
"CJ5EXEtFmMS",
"WR-C-DW9mee",
"5c0qWqCiS3e"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your time in this review! We agree that the paper needs more work.\n\nWe stand by claim that viewing learning with Data Augmentation in a similar light as Multi-Task learning with multiple loss functions is novel.\n\nHowever, we agree that we have not provided enough evidence or direction for this s... | [
-1,
-1,
1,
1,
3
] | [
-1,
-1,
4,
4,
4
] | [
"WR-C-DW9mee",
"CJ5EXEtFmMS",
"iclr_2022_FxBdFwFjXX",
"iclr_2022_FxBdFwFjXX",
"iclr_2022_FxBdFwFjXX"
] |
iclr_2022_e_FK_rDajEv | Learning Neural Causal Models with Active Interventions | Discovering causal structures from data is a challenging inference problem of fundamental importance in all areas of science. The appealing scaling properties of neural networks have recently led to a surge of interest in differentiable neural network-based methods for learning causal structures from data. So far, differentiable causal discovery has focused on static datasets of observational or interventional origin. In this work, we introduce an active intervention-targeting mechanism which enables quick identification of the underlying causal structure of the data-generating process. Our method significantly reduces the required number of interactions compared with random intervention targeting and is applicable for both discrete and continuous optimization formulations of learning the underlying directed acyclic graph (DAG) from data. We examine the proposed method across multiple frameworks in a wide range of settings and demonstrate superior performance on multiple benchmarks from simulated to real-world data. | Reject | This paper proposes an active intervention-targeting mechanism for causal structure discovery. After the discussion, there was a consensus among the reviewers that this paper needs another round of revision to address the lingering concerns. These concerns include providing a more fair experimental setup (e.g. by properly distinguishing and designing proper experiments for the observational, random intervention, and targeted intervention settings). Since the paper lacks theoretical guarantees (which is OK and not a requirement for acceptance), the merits rest on providing a thorough and fair experimental evaluation. | train | [
"p92I4j9A8zv",
"jv-mPV-DK87",
"2LCLBXMDKXt",
"yy23p--vKua",
"1sx6JQ2Opc0",
"9hBozcJvCEi",
"-tFqZW1P9re",
"LZCwkD2ZHRX",
"uUGnCiOd9dI",
"OR0GJc4_jIA"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer"
] | [
"Structure learning from observational data is a long-standing challenge that could benefit from creative uses of interventional data. The authors explore the use of active learning in a continuous optimization framework to better traverse and narrow down the space of potential graphs that describe the data. The co... | [
3,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
2,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2022_e_FK_rDajEv",
"9hBozcJvCEi",
"iclr_2022_e_FK_rDajEv",
"p92I4j9A8zv",
"yy23p--vKua",
"1sx6JQ2Opc0",
"uUGnCiOd9dI",
"2LCLBXMDKXt",
"OR0GJc4_jIA",
"iclr_2022_e_FK_rDajEv"
] |
iclr_2022_-0Cjhnl-dhK | Towards Uncertainties in Deep Learning that Are Accurate and Calibrated | Predictive uncertainties can be characterized by two properties---calibration and sharpness. This paper introduces algorithms that ensure the calibration of any model while maintaining sharpness. They apply in both classification and regression and guarantee the strong property of distribution calibration, while being simpler and more broadly applicable than previous methods (especially in the context of neural networks, which are often miscalibrated). Importantly, these algorithms achieve a long-standing statistical principle that forecasts should maximize sharpness subject to being fully calibrated. Using our algorithms, machine learning models can under some assumptions be calibrated without sacrificing accuracy: in a sense, calibration can be a free lunch. Empirically, we find that our methods improve predictive uncertainties on several tasks with minimal computational and implementation overhead. | Reject | The reviewers highlight that several of the significant claims of the paper are not backed up by experiments, and the experiments themselves lack sufficient detail; therefore, at this stage, I recommend rejection. I suggest the authors address the questions and comments they have received before considering whether they might resubmit or not.
"9OGByQChNBS",
"yOMqxuJl-Fz",
"hpXu9RMQr79",
"fIkyQWR5Yex"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes to re-calibrate the probabilistic predictions of machine learning models by learning a secondary function that maps from their output parameters to a new set of calibrated distribution parameters. The secondary function is learnt on a held out set. The paper argues for the use of flexible recal... | [
3,
5,
3,
3
] | [
2,
3,
4,
3
] | [
"iclr_2022_-0Cjhnl-dhK",
"iclr_2022_-0Cjhnl-dhK",
"iclr_2022_-0Cjhnl-dhK",
"iclr_2022_-0Cjhnl-dhK"
] |
iclr_2022_-29uFS4FiDZ | Word Sense Induction with Knowledge Distillation from BERT | Pre-trained contextual language models are ubiquitously employed for language understanding tasks, but are unsuitable for resource-constrained systems. Noncontextual word embeddings are an efficient alternative in these settings. Such methods typically use one vector to encode multiple different meanings of a word, and incur errors due to polysemy. This paper proposes a two-stage method to distill multiple word senses from a pre-trained language model (BERT) by using attention over the senses of a word in a context and transferring this sense information to fit multi-sense embeddings in a skip-gram-like framework. We demonstrate an effective approach to training the sense disambiguation mechanism in our model with a distribution over word senses extracted from the output layer embeddings of BERT. Experiments on the contextual word similarity and sense induction tasks show that this method is superior to or competitive with state-of-the-art multi-sense embeddings on multiple benchmark data sets, and experiments with an embedding-based topic model (ETM) demonstrates the benefits of using this multi-sense embedding in a downstream application.
 | Reject | This paper investigates a technique for projecting contextual embeddings into static embeddings. Neither is the technique very novel, nor are the empirical results very strong. While the reviewers did not engage in a discussion, the area chair does not see this paper reaching the quality bar of the conference. | test | [
"D7mzqWJD_iC",
"9vpH6LkonKN",
"_SawjXH19AC",
"UyYxHbEcFaH",
"7ZMfD56QdiY",
"hioh9MATWsZ",
"7os2cewbbx8",
"Puraq76gBnBr",
"hs3sS17kXKh",
"uF3zVon4Frj",
"_ZznDpSjZhv",
"I8zjatSbfBb",
"Vie8GbEKPv0"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The comparison of inference time between the sense embedding models and the BERT model.\n\nModel ------------------- Inference time / example\n\nMUSE ------------------ 0.06 ms\n\nBERTKDEmbed ----- 0.06 ms\n\nBERTSense ---------- 32.35 ms\n\nWe can see that there is a huge difference in the latency between transf... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
5,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
5,
3,
4
] | [
"I8zjatSbfBb",
"uF3zVon4Frj",
"Vie8GbEKPv0",
"_ZznDpSjZhv",
"hs3sS17kXKh",
"7os2cewbbx8",
"uF3zVon4Frj",
"iclr_2022_-29uFS4FiDZ",
"iclr_2022_-29uFS4FiDZ",
"iclr_2022_-29uFS4FiDZ",
"iclr_2022_-29uFS4FiDZ",
"iclr_2022_-29uFS4FiDZ",
"iclr_2022_-29uFS4FiDZ"
] |
iclr_2022_kWuBTQmkO8_ | MixRL: Data Mixing Augmentation for Regression using Reinforcement Learning | Data augmentation is becoming essential for improving regression accuracy in critical applications including manufacturing and finance. Existing techniques for data augmentation largely focus on classification tasks and do not readily apply to regression tasks. In particular, the recent Mixup techniques for classification rely on the key assumption that linearity holds among training examples, which is reasonable if the label space is discrete, but has limitations when the label space is continuous as in regression. We show that mixing examples that either have a large data or label distance may have an increasingly-negative effect on model performance. Hence, we use the stricter assumption that linearity only holds within certain data or label distances for regression where the degree may vary by each example. We then propose MixRL, a data augmentation meta learning framework for regression that learns for each example how many nearest neighbors it should be mixed with for the best model performance using a small validation set. MixRL achieves these objectives using Monte Carlo policy gradient reinforcement learning. Our experiments conducted both on synthetic and real datasets show that MixRL significantly outperforms state-of-the-art data augmentation baselines. MixRL can also be integrated with other classification Mixup techniques for better results. | Reject | This paper generalize the idea of Mixup-based data augmentation for regression. Compared to classification for which Mixup was used, the paper argues that in regression the linearity assumption only holds within specific data or label distances. The paper thus proposes MixRL to select suitable pairs using k-nearest neighbor in a batch for mixup. The selection policy is trained with meta-learning by minimizing the validation-set loss. The approach provides consistent but small improvement over mixup on several datasets. Reviewers have also suggested discussion and comparison with more baselines, such as respective method using other (lower-variant) gradient estimators (e.g., gumbel-softmax), and using local input/output kernels for data selection, etc. | train | [
"eOYqCYT8wxE",
"6h-9hheaTn-",
"7yZt3TBIzN",
"Xkf-HgVS6PZ",
"3VFnp-ddW8bb",
"ZJKnbVvmBZd",
"vPCJ6hYy_G5",
"w1aSa2xAjy9"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We appreciate your additional comments.\n\nComment 1\n\nWe clarify that using Gumbel-softmax is not enough to approximate the gradients.\n\nHere is the entire process described in four steps:\n1. (x, y, k) -> mixup value network -> probability -> sampling -> mixup sampled pairs with the training dataset (mixed da... | [
-1,
-1,
-1,
-1,
-1,
3,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"6h-9hheaTn-",
"3VFnp-ddW8bb",
"w1aSa2xAjy9",
"vPCJ6hYy_G5",
"ZJKnbVvmBZd",
"iclr_2022_kWuBTQmkO8_",
"iclr_2022_kWuBTQmkO8_",
"iclr_2022_kWuBTQmkO8_"
] |
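The MixRL record above hinges on restricting mixup to pairs within a small data or label distance. As a point of reference, here is a minimal sketch of distance-restricted mixup for regression, assuming a fixed neighbor count `k` rather than MixRL's learned per-example policy; the function name, `k`, and `alpha` are illustrative assumptions, not from the paper:

```python
import numpy as np

def knn_restricted_mixup(X, y, k=5, alpha=0.2, rng=None):
    """Mix each example only with one of its k nearest neighbors in input space.

    A fixed-k stand-in for MixRL, which instead *learns* k per example with RL.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = X.shape[0]
    # Pairwise Euclidean distances; argsort row-wise, skipping self at position 0.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    nn = np.argsort(d, axis=1)[:, 1:k + 1]           # (n, k) neighbor indices
    j = nn[np.arange(n), rng.integers(0, k, size=n)]  # one random neighbor each
    lam = rng.beta(alpha, alpha, size=n)              # standard mixup coefficients
    X_mix = lam[:, None] * X + (1 - lam[:, None]) * X[j]
    y_mix = lam * y + (1 - lam) * y[j]
    return X_mix, y_mix

# Toy usage: a noisy 1-D regression set.
X = np.linspace(0, 1, 64)[:, None]
y = np.sin(4 * X[:, 0]) + 0.1 * np.random.default_rng(0).normal(size=64)
X_aug, y_aug = knn_restricted_mixup(X, y, k=5)
```

The neighbor restriction is the point: under the paper's stricter linearity assumption, only nearby pairs yield label interpolations that remain plausible.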
iclr_2022_qiukmqxQF6 | LatTe Flows: Latent Temporal Flows for Multivariate Sequence Analysis | We introduce Latent Temporal Flows (\emph{LatTe-Flows}), a method for probabilistic multivariate time-series analysis tailored for high dimensional systems whose temporal dynamics are driven by variations in a lower-dimensional discriminative subspace. We perform indirect learning from hidden traits of observed sequences by assuming that the random vector representing the data is generated from an unobserved low-dimensional latent vector. \emph{LatTe-Flows} jointly learns auto-encoder mappings to a latent space and learns the temporal distribution of lower-dimensional embeddings of input sequences. Since encoder networks retain only the essential information to generate a latent manifold, the temporal distribution transitions can be more efficiently uncovered by time conditioned Normalizing Flows. The learned latent effects can then be directly transferred into the observed space through the decoder network. We demonstrate that the proposed method significantly outperforms the state-of-the-art on multi-step forecasting benchmarks, while enjoying reduced computational complexity on several real-world datasets. We apply {\emph{LatTe-Flows}} to a challenging sensor-signal forecasting task, using multivariate time-series measurements collected by wearable devices, an increasingly relevant health application.
| Reject | This paper proposes a new autoregressive flow model with autoencoders to learn latent embeddings from time series. The authors conducted extensive comparative experiments, and the experimental results are very encouraging. However, the proposed method, as a combination of the encoder/decoder structure and autoregressive flows on the latent space, does not seem novel enough. | test | [
"hqxI7QNdwxL",
"51k-3pW3gq",
"qpmZuCdNKGx",
"uGXX8yeRqzx",
"VnB8fl5GiV0",
"5PHqg6j-u7",
"gBo-CknEDXK"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your detailed response. I've read all the other reviewers' comments and it seems that some of us agree that the comparative experiments are extensive and the experimental results are very encouraging. However, we also seem to agree that the proposed method, which combines encoder/decoder structure w... | [
-1,
-1,
-1,
-1,
3,
5,
1
] | [
-1,
-1,
-1,
-1,
3,
4,
5
] | [
"uGXX8yeRqzx",
"gBo-CknEDXK",
"5PHqg6j-u7",
"VnB8fl5GiV0",
"iclr_2022_qiukmqxQF6",
"iclr_2022_qiukmqxQF6",
"iclr_2022_qiukmqxQF6"
] |
iclr_2022_FqMXxvHquTA | SegTime: Precise Time Series Segmentation without Sliding Window | Time series are common in a wide range of domains and tasks such as stock market partitioning, sleep stage labelling, and human activity recognition, where segmentation, i.e. splitting time series into segments that correspond to given categories, is often required. A common approach to segmentation is to sub-sample the time series using a sliding window with a certain length and overlapping stride, to create sub-sequences of fixed length, and then classify these sub-sequences into the given categories. This reduces time series segmentation to classification. However, this approach is guaranteed to find only approximate breakpoints: the precise breakpoints can appear within sub-sequences, and thus the accuracy of segmentation degrades when labels change fast. Also, it ignores possible long-term dependencies between sub-sequences. We propose a neural network approach, SegTime, that finds precise breakpoints, obviates sliding windows, handles long-term dependencies, and is insensitive to the label-changing frequency. SegTime does so thanks to its bi-pass architecture with several structures that can process information in a multi-scale fashion. We extensively evaluated the effectiveness of SegTime with very promising results. | Reject | This paper deals with segmentation of time series. The paper has received quite detailed reviews and the approach seems to have several interesting aspects (interesting architecture choice, stepwise classification approach, ability to capture long-range dependencies). However, there is a consensus that the paper would definitely benefit from a further iteration before publication in ICLR or in any other similar venue. The authors in their final response have already identified the improvement points raised by the reviewers. In addition to these, I believe it would be helpful to put the contributions better into perspective with the existing literature. I think all this would require a major rewrite, and I encourage the authors to make a fresh submission in a future venue. | train | [
"Y-h7PPKMOr",
"0eAAWp6mWlR",
"w-RBE6YZYtD",
"cLny6uNJyBF",
"enqCQC19vFN",
"Bkm-kuc6YvL",
"ASRe9VKqOhI",
"GhKljGJQQBx",
"-H5-GXcEN0n",
"sN6dykWPCwd",
"QVUkMWOqrMz",
"_NlMXa0c4s0",
"6elit_UwCnh",
"HzhWEIg5bwI",
"4gdfdz-4-kO"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We have responded comments 2, 3, 4 in the General Reponse since other reviewers also have similar comments. We copy the general reponse here:\n\nWe agree with the reviewers that the paper should be improved by providing more details for methods, evaluation, extra ablation study, bound difference, training and inf... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
3,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
5,
4
] | [
"w-RBE6YZYtD",
"-H5-GXcEN0n",
"Bkm-kuc6YvL",
"GhKljGJQQBx",
"HzhWEIg5bwI",
"4gdfdz-4-kO",
"GhKljGJQQBx",
"enqCQC19vFN",
"_NlMXa0c4s0",
"6elit_UwCnh",
"iclr_2022_FqMXxvHquTA",
"iclr_2022_FqMXxvHquTA",
"iclr_2022_FqMXxvHquTA",
"iclr_2022_FqMXxvHquTA",
"iclr_2022_FqMXxvHquTA"
] |
iclr_2022_gxRcqTbJpVW | Structured Pruning Meets Orthogonality | Several recent works empirically found that the finetuning learning rate is crucial to the final performance in structured neural network pruning. It is shown that the \emph{dynamical isometry} broken by pruning accounts for this phenomenon. How to develop a filter pruning method that maintains or recovers dynamical isometry \emph{and} is scalable to modern deep networks remains elusive. In this paper, we present \emph{orthogonality preserving pruning} (OPP), a regularization-based structured pruning method that maintains the dynamical isometry during pruning. Specifically, OPP regularizes the gram matrix of convolutional kernels to encourage kernel orthogonality among the important filters while driving the unimportant weights towards zero. We also propose to regularize batch-normalization parameters for better preserving dynamical isometry for the whole network. Empirically, OPP can compete with the \emph{ideal} dynamical isometry recovery method on linear networks. On non-linear networks (ResNet56/VGG19, CIFAR datasets), it outperforms the available solutions \emph{by a large margin}. Moreover, OPP can also work effectively with modern deep networks (ResNets) on ImageNet, delivering encouraging performance in comparison to many recent filter pruning methods. To our best knowledge, this is the \emph{first} method that effectively maintains dynamical isometry during pruning for \emph{large-scale} deep neural networks. | Reject | The paper proposes a pruning approach that regularizes the gram matrix of convolutional kernels to encourage kernel orthogonality among the important filters while driving the unimportant weights towards zero. While the reviewers found the proposed method well-motivated and intuitive, they believe that the proposed claims are of limited novelty and are not supported well by the experiments. Analyzing and explaining the effect of different parts of the proposed method, i.e., orthogonalization and regularization of batch normalization parameters, on the accuracy of the pruned models would significantly improve the manuscript. | train | [
"tAiCb5J2yQ0",
"K7gjETrMNOM",
"YCMphI4jadB",
"7swOsWTrxuV",
"2qkUfSNuT7_",
"NUBa_zVxIIu",
"Z4vswkAZjRW",
"FNf7VGHJmvF",
"K-CGW9WDc4",
"rgqxCUavMqC",
"kkgYu3GMZFU",
"VjEA4T_n_-",
"YorqrDhbGQj",
"aLqIrvF-Nhx",
"haOc2vHzaj",
"zPoiTRmG5py",
"XCGvSsUyKL"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer 8kct,\n\nSorry to bother you if any. We greatly thank you for the reviewing process so far! Even though you do not recommend acceptance for this paper, we *really* hope to have some feedback from you still, since *your opinions are critical* for us to improve the paper. If possible, please take a lo... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
6,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
5,
4,
5
] | [
"zPoiTRmG5py",
"YCMphI4jadB",
"XCGvSsUyKL",
"rgqxCUavMqC",
"XCGvSsUyKL",
"zPoiTRmG5py",
"zPoiTRmG5py",
"kkgYu3GMZFU",
"iclr_2022_gxRcqTbJpVW",
"aLqIrvF-Nhx",
"YorqrDhbGQj",
"XCGvSsUyKL",
"haOc2vHzaj",
"K-CGW9WDc4",
"iclr_2022_gxRcqTbJpVW",
"iclr_2022_gxRcqTbJpVW",
"iclr_2022_gxRcqTbJ... |
iclr_2022_xspalMXAB0M | A Boosting Approach to Reinforcement Learning | We study efficient algorithms for reinforcement learning in Markov decision processes, whose complexity is independent of the number of states. This formulation succinctly captures large scale problems, but is also known to be computationally hard in its general form.
Previous approaches attempt to circumvent the computational hardness by assuming structure in either the transition function or the value function, or by relaxing the solution guarantee to a local optimality condition.
We consider the methodology of boosting, borrowed from supervised learning, for converting weak learners into an effective policy. The notion of weak learning we study is that of sample-based approximate optimization of linear functions over policies. Under this assumption of weak learnability, we give an efficient algorithm that is capable of improving the accuracy of such weak learning methods iteratively. We prove sample complexity and running time bounds on our method that are polynomial in the natural parameters of the problem: approximation guarantee, discount factor, distribution mismatch, and number of actions. In particular, our bound does not explicitly depend on the number of states.
A technical difficulty in applying previous boosting results is that the value function over policy space is not convex. We show how to use a non-convex variant of the Frank-Wolfe method, coupled with recent advances in gradient boosting that allow incorporating a weak learner with a multiplicative approximation guarantee, to overcome the non-convexity and attain global optimality guarantees. | Reject | The paper proposes a boosting algorithm for RL based on online boosting. The main advantage of the result is that the sample complexity does not explicitly depend on the number of states. Post rebuttal, some of the reviewers have changed their opinion on the paper. However, overall the reviewers still seem to be on the fence about this paper. It seems the paper combines the techniques from Hazan & Singh '21 with a Frank-Wolfe algorithm to deal with non-convex sets, but the reviewers do not view this as a significant new contribution.
I see the paper as being interesting but do agree with some of the comments of the reviewers and am leaning to a reject. | train | [
"aDI9m0Di7ap",
"lbme5TvtJDh",
"BG3X4UrXC1T",
"Y7vrsjZu7hV",
"iM3sz9sUXEX",
"_Gxh7ZMtJW1",
"xcCc9WQ9i53",
"Oq7jQ7oUsh",
"4wKjyq8M74I",
"dTWwnCWgey9",
"WNEKfuqEOR"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you to the authors for taking the time to respond to the review, and thank you for clarifying some things I had confusion about. In light of their updates I have improved my score from 3 to 5. I am still very concerned about clarity in the paper overall and I think the paper would be made much stronger with... | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
5,
3,
6,
6
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
3
] | [
"iM3sz9sUXEX",
"iclr_2022_xspalMXAB0M",
"WNEKfuqEOR",
"4wKjyq8M74I",
"lbme5TvtJDh",
"dTWwnCWgey9",
"Oq7jQ7oUsh",
"iclr_2022_xspalMXAB0M",
"iclr_2022_xspalMXAB0M",
"iclr_2022_xspalMXAB0M",
"iclr_2022_xspalMXAB0M"
] |
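For readers unfamiliar with the method named in the boosting record above, the standard Frank-Wolfe iteration over a convex set $\mathcal{K}$ may be a useful reference point (the paper itself uses a non-convex variant with a weak learner that only multiplicatively approximates the linear subproblem; the notation below is the textbook one, not necessarily the paper's):

```latex
v_t \in \arg\min_{v \in \mathcal{K}} \; \langle \nabla f(x_t),\, v \rangle,
\qquad
x_{t+1} = (1 - \eta_t)\, x_t + \eta_t\, v_t, \qquad \eta_t \in [0, 1].
```

The convex-combination update is what makes the scheme compatible with boosting: each call to the linear minimization oracle can be served by a weak learner, and the iterates remain mixtures of the policies it returns.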
iclr_2022_-3Qj7Jl6UP5 | The magnitude vector of images | The magnitude of a finite metric space is a recently-introduced invariant quantity. Despite beneficial theoretical and practical properties, such as a general utility for outlier detection, and a close connection to Laplace radial basis kernels, magnitude has received little attention from the machine learning community so far. In this work, we investigate the properties of magnitude on individual images, with each image forming its own metric space. We show that the known properties of outlier detection translate to edge detection in images and we give supporting theoretical justifications. In addition, we provide a proof of concept of its utility by using a novel magnitude layer to defend against adversarial attacks. Since naive magnitude calculations may be computationally prohibitive, we introduce an algorithm that leverages the regular structure of images to dramatically reduce the computational cost. | Reject | This paper discusses a relatively new concept called "magnitude" for finite metric spaces and investigates its potential applications in machine learning, in particular for computer vision.
Reviewers generally agree that this is an interesting concept and appreciate the algorithm for reducing its computational cost.
However, there are concerns (1) that the concept is not well-motivated theoretically for machine learning problems,
and (2) that the experimental results, for edge detection and adversarial robustness, are not convincing. More rigorous empirical work should be carried out. | train | [
"3d-vQ94t9mc",
"qPcjEyVlJmR",
"p4EPN1RxZk",
"hPYWOj9ZOe3",
"1pCoiEdslWz"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper discuss a concept of magnitude vector of images. The authors provided a fast algorithm for computing the magnitude vector, and discuss the potential applications in edge detection and adversarial robustness. Strength:\nThe paper discuss a concept that is relatively new in the machine learning community.... | [
3,
3,
-1,
3,
3
] | [
4,
3,
-1,
3,
4
] | [
"iclr_2022_-3Qj7Jl6UP5",
"iclr_2022_-3Qj7Jl6UP5",
"iclr_2022_-3Qj7Jl6UP5",
"iclr_2022_-3Qj7Jl6UP5",
"iclr_2022_-3Qj7Jl6UP5"
] |
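As background for the record above: for a finite metric space $(X, d)$ with points $x_1, \dots, x_n$, magnitude is standardly defined (following Leinster) through the similarity matrix, and the per-point weights are what a "magnitude vector" collects:

```latex
\zeta_{ij} = e^{-d(x_i, x_j)}, \qquad
w = \zeta^{-1}\mathbf{1} \quad (\text{when } \zeta \text{ is invertible}), \qquad
\operatorname{Mag}(X) = \sum_{i=1}^{n} w_i .
```

Points far from the rest of the space receive weights close to 1, which is roughly the outlier sensitivity the paper translates into edge detection; inverting $\zeta$ naively costs $O(n^3)$, which is the cost the paper's algorithm exploits the regular structure of images to avoid.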
iclr_2022_7zFokR7k_86 | Learning Symbolic Rules for Reasoning in Quasi-Natural Language | Symbolic reasoning, rule-based symbol manipulation, is a hallmark of human intelligence. However, rule-based systems have had limited success competing with learning-based systems outside formalized domains such as automated theorem proving. We hypothesize that this is due to the manual construction of rules in past attempts. In this work, we ask how we can build a rule-based system that can reason with natural language input but without the manual construction of rules. We propose MetaQNL, a "Quasi-Natural" language that can express both formal logic and natural language sentences, and MetaInduce, a learning algorithm that induces MetaQNL rules from training data consisting of questions and answers, with or without intermediate reasoning steps. Our approach achieves state-of-the-art accuracy on multiple reasoning benchmarks; it learns compact models with much less data and produces not only answers but also checkable proofs. | Reject | The paper describes a system for learning rules in a quasi-NL format: roughly Horn clauses where a predicate p(X1,...,Xk) is replaced by a natural language pattern interleaving ground tokens and variables. The method is to propose ground sentences - using one of several task-specific approaches - use anti-unification of pairs to variabilize, and then find a minimal theory from these proposed pairs by reduction to maxsat.
Pros:
- QNL is a neat idea, and makes symbolic rule-learning possible for some NLP tasks
- The use of maxsat is novel in rule-learning AFAIK
Cons:
- Unification is a highly simplified model of the NL task of cross-document co-reference
- It's unclear if the maxsat process will work in the presence of noise, or how well it scales
- The datasets use clean text generated from templates or synthetic grammars
- Experimentally, the generality of the system is not well demonstrated, because there are differences in how it is applied: e.g. a subset of short examples for SCAN, input engineering ($TRUE, $FALSE) for RuleTaker, plus the "heuristics for filtering invalid rules generated by anti-unification"
- It's not clear if this work really speaks to the main "point" of the SCAN and RuleTaker datasets. These are both the kind of tasks that symbolic systems would be expected to do well, and are used as ANN benchmarks because ANNs perform in unexpected ways: worse than one would expect for SCAN, and better for RuleTaker. They are important for understanding ANNs but I'm not certain what the research benefit is of using them for symbolic methods as a benchmark. | train | [
"egVauJudI_u",
"jPYzzXRYu2-",
"hxVy-Gx4lKH",
"vsnyod7rm9k",
"1wbgtKL98Y",
"ZTI8pJ16ZrS",
"vcVd4GH8PIV",
"tVlOX2FrD9n",
"pkn_-ZLEsIJ",
"oq-JrRE0K1T",
"mHVfo7vIYja",
"ujaJ1Pec3O7",
"X1FAZGYwgWC",
"OPKO1skpy5U",
"JoRDRqUmoAq",
"7s9vVGYza3d"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes an algorithm that learns rules from natural language data, and a symbolic system for manipulating these rules, where existing provers can be applied. The objective is to maximize the number of examples in a test set that are consistent with the proposed mode while minimizing the number of rules... | [
5,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8
] | [
3,
-1,
-1,
1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3
] | [
"iclr_2022_7zFokR7k_86",
"oq-JrRE0K1T",
"mHVfo7vIYja",
"iclr_2022_7zFokR7k_86",
"tVlOX2FrD9n",
"vcVd4GH8PIV",
"pkn_-ZLEsIJ",
"7s9vVGYza3d",
"JoRDRqUmoAq",
"egVauJudI_u",
"vsnyod7rm9k",
"X1FAZGYwgWC",
"OPKO1skpy5U",
"iclr_2022_7zFokR7k_86",
"iclr_2022_7zFokR7k_86",
"iclr_2022_7zFokR7k_8... |
iclr_2022_jZQOWas0Lo3 | Cycle monotonicity of adversarial attacks for optimal domain adaptation | We reveal an intriguing connection between adversarial attacks and cycle monotone maps, also known as optimal transport maps. Based on this finding, we developed a novel method named source fiction for semi-supervised optimal transport-based domain adaptation. In our algorithm, instead of mapping from target to the source domain, optimal transport maps target samples to the set of adversarial examples. The trick is that these adversarial examples are labeled target samples perturbed to look like source samples for the source domain classifier. Due to the cycle monotonicity of adversarial attacks, optimal transport can naturally approximate this transformation. We conduct experiments on various datasets and show that our method can notably improve the performance of optimal transport methods in semi-supervised domain adaptation. | Reject | In this paper, the authors propose a novel approach for semi-supervised domain adaptation based on the cyclic monotonicity property of the optimal transport map. The main idea is to adapt the labeled source samples (perturbed w.r.t. a source classifier) toward the target samples while preserving the known labels via cyclical monotonicity. Then these perturbed samples can be used to perform classical OT domain adaptation. This pre-processing of the data has been shown in the numerical experiments to lead to better performance on average.
The proposed method was found intriguing by all reviewers, but the writing of the paper was found clearly lacking, and several suggestions were proposed by the reviewers. The authors' choice to call the perturbed samples adversaries, for instance, made the paper harder to understand ("anti-adversarial" is also not a good choice of words). Another concern was that, despite encouraging numerical results, baselines were lacking: semi-supervised Domain Adaptation methods discussed by the reviewers were not compared against (with or without OT).
The authors provided a short but clear response that was appreciated by the reviewers. But the clarifications promised by the authors were not made in the PDF during the editing period, which means that the paper clearly needs a new round of reviews. For this reason, the consensus during the discussions was that this paper should be rejected. The AC believes that this is an interesting research direction that should be investigated but that the paper needs some more work before reaching the threshold for acceptance in selective ML venues. The authors are strongly encouraged to take into account the comments from the reviewers before resubmitting their work. | test | [
"O9u5AyTCiQM",
"c1gR0Tgzxqi",
"hhh5AfjlG4o",
"rRW9Hl0g4q5",
"DidGwcIVOWJ",
"DVSEXm-OlTy",
"e1G3T3Jo8eQ",
"GDh89EhfXEK",
"-hDavJn30a6",
"oNyZ8yh-ZzP",
"auTt92gf8yH",
"gCl3_W_ib3v",
"eT2glnTY7yk"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your answers. \nI have read your answer and the other reviews. The author did not submit any revision of the paper.\nI maintain my opinion that this work is promising but is rather unfinished and thus maintain my score. I encourage the author to resubmit their work. ",
" Thank you for your concise... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"e1G3T3Jo8eQ",
"-hDavJn30a6",
"DVSEXm-OlTy",
"DidGwcIVOWJ",
"gCl3_W_ib3v",
"eT2glnTY7yk",
"auTt92gf8yH",
"iclr_2022_jZQOWas0Lo3",
"oNyZ8yh-ZzP",
"iclr_2022_jZQOWas0Lo3",
"iclr_2022_jZQOWas0Lo3",
"iclr_2022_jZQOWas0Lo3",
"iclr_2022_jZQOWas0Lo3"
] |
iclr_2022_1R_PRbQK2eu | Dual Training of Energy-Based Models with Overparametrized Shallow Neural Networks | Energy-based models (EBMs) are generative models that are usually trained via maximum likelihood estimation. This approach becomes challenging in generic situations where the trained energy is nonconvex, due to the need to sample the Gibbs distribution associated with this energy. Using general Fenchel duality results, we derive variational principles dual to maximum likelihood EBMs with shallow overparametrized neural network energies, both in the active (aka feature-learning) and lazy regimes. In the active regime, this dual formulation leads to a training algorithm in which one updates concurrently the particles in the sample space and the neurons in the parameter space of the energy at a faster rate. We also consider a variant of this algorithm in which the particles are sometimes restarted at random samples drawn from the data set, and show that performing these restarts at every iteration step corresponds to score matching training. Using intermediate parameter setups in our dual algorithm thereby gives a way to interpolate between maximum likelihood and score matching training. These results are illustrated in simple numerical experiments. | Reject | In this paper, the authors generalized the Fenchel duality formulation of the maximum likelihood for F1-EBM, which leads to a min-max optimization formulation. Meanwhile, the optimization reveals a new connection between primal-dual MLE and score matching. These contributions are significant and make the paper interesting to the community.
However, there are several issues that need to be addressed.
- *Experiments*: most of the reviewers are concerned about the empirical study, which is conducted only on synthetic data. The paper would be much stronger if comparisons on real-world datasets, e.g., MNIST and CIFAR10, were conducted.
- *Clarification of paper*: I totally understand that, since this is a theory-oriented paper, it must be notation- and derivation-heavy. However, the paper would be much easier for readers if more discussion were added, e.g., on the implementation of the proposed algorithms, and more explanation were given of the comparison between the primal-dual algorithm and score matching and of the experimental results, for a broader audience.
In fact, I personally like the paper very much, which provides a promising, solid algorithm for EBM estimation, and the connection to score matching is also novel and different from the current understanding. However, unfortunately, the authors gave up the rebuttal and did not successfully address the concerns from the reviewers. I strongly encourage the authors to revise the draft according to the reviewers' suggestions and resubmit to another venue. | train | [
"TUSzf3BnA7D",
"kJTiU6-GFk9",
"ssdg9rnIMle",
"7mL3xbPMVXy",
"-3LOmmntLzB"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would like to thank the reviewers for their work. Their comments will be useful for us to improve our paper.",
"This paper derives a Fenchel duality formulation of the maximum likelihood loss of F1-EBMs, which turns the optimization into a min-max problem on probability measures over the sample space. A dual... | [
-1,
6,
3,
5,
5
] | [
-1,
3,
3,
3,
4
] | [
"iclr_2022_1R_PRbQK2eu",
"iclr_2022_1R_PRbQK2eu",
"iclr_2022_1R_PRbQK2eu",
"iclr_2022_1R_PRbQK2eu",
"iclr_2022_1R_PRbQK2eu"
] |
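For reference alongside the record above: the score-matching objective that the restart-every-step variant is said to recover is, in Hyvärinen's integration-by-parts form (standard notation, assuming mild regularity conditions):

```latex
J(\theta) = \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[
\tfrac{1}{2} \,\bigl\lVert \nabla_x \log p_\theta(x) \bigr\rVert^2
+ \Delta_x \log p_\theta(x) \right] + \text{const},
```

where $\Delta_x$ is the Laplacian. Unlike maximum likelihood, this objective needs no samples from the model's Gibbs distribution, which is what makes the interpolation between the two training modes described in the abstract meaningful.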
iclr_2022_2aC0_RxkBL_ | Where is the bottleneck in long-tailed classification? | A commonly held belief in deep-learning based long-tailed classification is that the representations learned from long-tailed data are "good enough" and the performance bottleneck is the classification head atop the representation learner. We design experiments to investigate this folk wisdom, and find that representations learned from long-tailed data distributions substantially differ from the representations learned from "normal" data distributions. We show that the long-tailed representations are volatile and brittle with respect to the true data distribution. Compared to the representations learned from the true, balanced distributions, long-tailed representations fail to localize tail classes and display vastly worse inter-class separation and intra-class compactness when unseen samples from the true data distribution are embedded into the feature space. We provide an explanation for why data augmentation helps long-tailed classification despite leaving the dataset imbalance unchanged: it promotes inter-class separation, intra-class compactness, and improves localization of tail classes w.r.t. the true data distribution. | Reject | This paper investigates the role of representation learning when the distribution over the feature space has a long tail. The main motivation is to determine how much of the overall learning, in this case, is bottlenecked specifically by representation learning. The main findings are that vanilla learning gives brittle long-tailed representations, harming overall performance. The paper suggests a form of data augmentation to remedy this. Reviewers acknowledge that this investigation is worthwhile. However, many concerns were raised as to whether experiments support the drawn conclusions. A more principled approach to the data augmentation methodology is also needed. The authors address some of these, providing further experiments, but these were not enough to sway reviewers. Since results are fundamentally empirical in nature, this shortcoming indicates that the paper is not ready to share with the community just yet. Stronger experiments with clearer evidence are needed to fully support the thesis of the work. | val | [
"0Vzcz1TcgIm",
"5gPqLqHczA",
"esyfqNXFprn",
"_mYxegrgPc-",
"6LGNfTrb7h",
"-oWQzglOEuq",
"Jk3FNRzY1l",
"jmg-PcyNAMI"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the great feedback and enthusiasm! We shall add these papers to the literature review section to cover more perspectives on representation learning in the long-tailed setting. It is indeed true that representation learning is becoming more common, however, it is also true that the dominant paradigm ... | [
-1,
-1,
-1,
-1,
5,
3,
8,
3
] | [
-1,
-1,
-1,
-1,
5,
4,
5,
5
] | [
"jmg-PcyNAMI",
"Jk3FNRzY1l",
"6LGNfTrb7h",
"-oWQzglOEuq",
"iclr_2022_2aC0_RxkBL_",
"iclr_2022_2aC0_RxkBL_",
"iclr_2022_2aC0_RxkBL_",
"iclr_2022_2aC0_RxkBL_"
] |
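The long-tailed record above repeatedly measures inter-class separation and intra-class compactness of embeddings. A minimal sketch of one common, centroid-based way to compute these two quantities from a feature matrix follows; the exact definitions used in the paper may differ:

```python
import numpy as np

def separation_and_compactness(Z, labels):
    """Centroid-based embedding diagnostics.

    Z:      (n, d) feature matrix (e.g., penultimate-layer activations).
    labels: (n,) integer class labels.
    Returns (mean inter-centroid distance, mean within-class distance-to-centroid).
    """
    classes = np.unique(labels)
    centroids = np.stack([Z[labels == c].mean(axis=0) for c in classes])
    # Intra-class compactness: average distance of points to their own centroid.
    compact = np.mean([
        np.linalg.norm(Z[labels == c] - centroids[i], axis=1).mean()
        for i, c in enumerate(classes)
    ])
    # Inter-class separation: average pairwise distance between centroids.
    diffs = np.linalg.norm(centroids[:, None] - centroids[None, :], axis=-1)
    iu = np.triu_indices(len(classes), k=1)
    return diffs[iu].mean(), compact

# Toy usage with two Gaussian blobs.
rng = np.random.default_rng(0)
Z = np.concatenate([rng.normal(0, 1, (50, 8)), rng.normal(3, 1, (50, 8))])
y = np.array([0] * 50 + [1] * 50)
print(separation_and_compactness(Z, y))
```

The paper's claim amounts to these two numbers moving in the wrong directions when embedding balanced test data through a long-tail-trained encoder.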
iclr_2022_El9kZ2caYVy | Noise-Contrastive Variational Information Bottleneck Networks | While deep neural networks for classification have shown impressive predictive performance, e.g. in image classification, they generally tend to be overconfident. We start from the observation that popular methods for reducing overconfidence by regularizing the distribution of outputs or intermediate variables achieve better calibration by sacrificing the separability of correct and incorrect predictions, another important facet of uncertainty estimation. To circumvent this, we propose a novel method that builds upon the distributional alignment of the variational information bottleneck and encourages assigning lower confidence to samples from the latent prior. Our experiments show that this simultaneously improves prediction accuracy and calibration compared to a multitude of output regularization methods without impacting the uncertainty-based separability in multiple classification settings, including under distributional shift. | Reject | The paper proposes a classification method that improves model calibration using variational information bottlenecks and a noise-contrastive loss.
Unfortunately, the authors were not able to participate in the discussion of the paper with the reviewers. Given this, the reviewers raised several unaddressed concerns: First, it was argued that the different components of the proposed method required additional justification, in particular with regard to novelty. Second, reviewers argued that the paper required additional empirical validation, for example by testing if it works well with different convolutional methods such as ResNets.
Given these concerns, a consensus was reached that this paper should be rejected, which is also my recommendation. | train | [
"wu9dqvVOL_Y",
"8RcsIfWUp9H",
"OL7ClwPTBis",
"xqmLLZ452J-"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This work suggests a *noise-contrastive loss* for variational information bottleneck networks to resolve the poor performance at separating correct and incorrect predictions in regularization methods. Standard regularization methods suffer *separability problems*: models indiscriminately regularize confidence to i... | [
5,
6,
5,
3
] | [
3,
4,
3,
3
] | [
"iclr_2022_El9kZ2caYVy",
"iclr_2022_El9kZ2caYVy",
"iclr_2022_El9kZ2caYVy",
"iclr_2022_El9kZ2caYVy"
] |
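For context on the record above, the variational information bottleneck objective being extended is usually written as follows (after Alemi et al.; the notation here is the standard one and may differ from the paper's):

```latex
\mathcal{L}_{\mathrm{VIB}}
= \mathbb{E}_{p(x, y)}\, \mathbb{E}_{q_\phi(z \mid x)}\!\left[ -\log q_\theta(y \mid z) \right]
+ \beta\, \mathbb{E}_{p(x)}\, D_{\mathrm{KL}}\!\bigl( q_\phi(z \mid x) \,\big\|\, r(z) \bigr).
```

The KL term is the "distributional alignment" toward the latent prior $r(z)$ that the abstract builds on; the proposed noise-contrastive addition then asks the classifier to assign low confidence to latents sampled from $r(z)$ itself.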
iclr_2022_TN-W4p7H2pK | Conditional Generative Quantile Networks via Optimal Transport and Convex Potentials | Quantile regression has a natural extension to generative modelling by leveraging a stronger, pointwise notion of convergence rather than convergence in distribution. While the pinball quantile loss works in the scalar case, it does not have a provable extension to the vector case. In this work, we consider a quantile approach to generative modelling using optimal transport with provable guarantees. We suggest and prove that by optimizing smooth functions with respect to the dual of the correlation maximization problem, the optimum is convex almost surely, and hence construct a Brenier map as our generative quantile network. Furthermore, we introduce conditional generative modelling with a Kantorovich dual objective by constructing an affine latent model with respect to the covariates. Through extensive experiments on synthetic and real datasets for conditional generative and probabilistic forecasting tasks, we demonstrate the efficacy and versatility of our theoretically motivated model as a distribution estimator and conditioner. | Reject | This paper proposes a conditional quantile generative model using optimal transport. Although the problem addressed in this paper is interesting and important, the proposed convex potential quantile (CPQ) approach is closely related to a recent work (Carlier et al., 2017). Due to the lack of clear explanations of the contributions compared to the existing work, none of the reviewers suggested acceptance of this paper. | train | [
"CaVSCOjTq0i",
"o1EsL7-haL",
"1lwisCZBP8W",
"thI4O_dMDGL"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Quantile regression is frequently used as an alternative to conventional regression. An important advantage of quantile regression is that provides enough flexibility to capture the whole conditional distribution, rather than the conditional mean, of the response variable for given predictor variables. Standard ap... | [
3,
3,
5,
3
] | [
4,
4,
4,
2
] | [
"iclr_2022_TN-W4p7H2pK",
"iclr_2022_TN-W4p7H2pK",
"iclr_2022_TN-W4p7H2pK",
"iclr_2022_TN-W4p7H2pK"
] |
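The scalar loss contrasted in the record above is worth stating. For a quantile level $\tau \in (0, 1)$, the pinball (quantile) loss is:

```latex
\ell_\tau(y, \hat{y}) = \max\bigl( \tau\,(y - \hat{y}),\; (\tau - 1)\,(y - \hat{y}) \bigr).
```

Its minimizer is the $\tau$-quantile in one dimension, but since $\mathbb{R}^d$ has no canonical ordering there is no agreed multivariate quantile for it to target, which is the gap the optimal-transport construction in the paper is meant to fill.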
iclr_2022_kiNEOCSEzt | Estimating and Penalizing Induced Preference Shifts in Recommender Systems | The actions that a recommender system (RS) takes -- the content it exposes users to -- influence the preferences users have over what content they want. Therefore, when an RS designer chooses which system to deploy, they are implicitly \emph{choosing how to shift} or influence user preferences. Moreover, if the RS is trained via long-horizon optimization (e.g. reinforcement learning), it will have incentives to manipulate preferences, i.e., to shift them so they are easier to satisfy, and thus conducive to higher reward. While some work has argued for making systems myopic to avoid this issue, myopic systems can still influence preferences in undesirable ways. In this work, we argue that we need to enable system designers to \textit{estimate} the shifts an RS \emph{would} induce; \textit{evaluate}, before deployment, whether the shifts are undesirable; and even \textit{actively optimize} to avoid such shifts. These steps involve two challenging ingredients: \emph{estimation} requires the ability to anticipate how hypothetical policies would influence user preferences if deployed -- we do this by training a user predictive model that implicitly contains their preference dynamics from historical user interaction data; \emph{evaluation} and \emph{optimization} additionally require metrics to assess whether such influences are manipulative or otherwise unwanted -- we introduce the notion of “safe shifts” that define a trust region within which behavior is believed to be safe. We show that recommender systems that optimize for staying in the trust region can avoid manipulative behaviors (e.g., changing preferences in ways that make users more predictable), while still generating engagement. | Reject | This paper studies the influence of recommender systems on users' preferences. The authors propose a method for estimating preference shifts, evaluating their desirability, and avoiding such shifts (when needed).
After the initial review and discussion period, a fourth reviewer with significant recsys experience and a very good knowledge of this sub-area was invited to provide an additional review of the paper. This is reviewer vNt7. Their review was positive overall but did highlight some limitations and potential ways to improve the paper's grounding in the recsys literature.
Overall, the main strengths of this paper were that it studies an interesting and practically motivated question. The reviewers also found the proposed solution reasonable.
The main limitations are twofold. One, the results use a single set of simulation assumptions. Showing similar results under different simulation assumptions would be helpful to better understand the robustness and potential limitations of the approach. Two, there is a certain disconnect with the simulation literature. See comments from reviewers vNt7 and kWQ2 (although I found your reply to Virtual-Taobao convincing).
Overall and given the final reviewer recommendations (three marginally above and one marginally below), this is a very borderline paper. However, the consensus view of the committee is that it would benefit from additional work before publication.
I am sorry that I cannot recommend acceptance at this stage. I do believe that some of the suggestions from the reviewers highlighted above (more diverse simulation, better grounding in current recsys simulation literature and in the field) will be useful in preparing the next version of this work. | train | [
"77Ftg_5L4Bn",
"lzVDZJUuCKU",
"EdxJ5_cc1gq",
"AWEbkfUFpLU",
"WHUBC5TxFcq",
"AhHfC_JqMm",
"uUWPHQLCk6b",
"HqqDbx5GCMY",
"_qvROPaIlo",
"pDbnZNAIOq6",
"7wDX5MuCxR0",
"dH4mEydv_80"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"In this paper, the authors argue that it is important to 1) estimate the impact of recommendation systems of user preferences, 2) evaluate if the shifts would be undesirable, and 3) optimize to avoid undesirable shifts. The authors propose a method to do this and rely on simulations to evaluate it. I do not wish ... | [
6,
-1,
5,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
4,
-1,
4,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2022_kiNEOCSEzt",
"uUWPHQLCk6b",
"iclr_2022_kiNEOCSEzt",
"7wDX5MuCxR0",
"iclr_2022_kiNEOCSEzt",
"iclr_2022_kiNEOCSEzt",
"dH4mEydv_80",
"WHUBC5TxFcq",
"WHUBC5TxFcq",
"EdxJ5_cc1gq",
"EdxJ5_cc1gq",
"iclr_2022_kiNEOCSEzt"
] |
iclr_2022_ODnCiZujily | DeepSplit: Scalable Verification of Deep Neural Networks via Operator Splitting | Analyzing the worst-case performance of deep neural networks against input perturbations amounts to solving a large-scale non-convex optimization problem, for which several past works have proposed convex relaxations as a promising alternative. However, even for reasonably-sized neural networks, these relaxations are not tractable, and so must be replaced by even weaker relaxations in practice. In this work, we propose a novel operator splitting method that can directly solve a convex relaxation of the problem to high accuracy, by splitting it into smaller sub-problems that often have analytical solutions. The method is modular, scales to very large problem instances, and comprises operations that are amenable to fast parallelization with GPU acceleration. We demonstrate our method in obtaining tighter bounds on the worst-case performance of large convolutional networks in image classification and reinforcement learning settings. | Reject | The authors propose a novel operator splitting method for solving convex relaxations of neural network verification problems, and develop and validate an optimized implementation of the same on large-scale networks, focusing on the problem of verifying robustness to norm-bounded adversarial perturbations.
The reviewers agree that the paper contains interesting ideas that are worthy of further development and that these ideas may prove useful eventually in pushing the envelope of what is possible in neural network verification. However, in its current form, the paper misses some key experimental evidence to rigorously evaluate the value of the contributions made:
1) Comparison against SOTA incomplete verifiers: The authors do not provide detailed and rigorous comparisons against well-known baselines (for example, the incomplete verifiers from Fast-and-complete (Xu et al., 2021) and Beta-CROWN (Wang et al., 2021))
2) Incorporating tighter relaxations: It would be valuable for the community to understand whether the proposed algorithm is compatible with tighter relaxations like those of (Tjandraatmadja et al., 2020). Even if they are not, it would be interesting to understand the comparison against standard solvers for these tighter relaxations compared against the advanced solver developed by the authors applied to the weaker relaxation.
3) Showing performance in the context of complete verification: While this is not a requirement, it would be great to see how the method performs in conjunction with a branch-and-bound search, as this sometimes reveals surprising tradeoffs or weaknesses of incomplete verifiers (as observed in the results of Beta-CROWN (Wang et al., 2021)).
I encourage the authors to strengthen the paper by adding these experiments and to resubmit to a future venue. | train | [
"ajfHxtszKdH",
"7V7irfhCd3K",
"sG7Csr0YnP",
"pSu4CQy2X3Z",
"A9-y47uO0MJ",
"lImN6sOerTq",
"tNNX50LI2fC",
"PqE__tNnZsV",
"iZPb21QBn8J",
"vOYP_K0beJm",
"DZEJrmcFuWd",
"4rk6FL4RTw5",
"C8TUrdt2Kt",
"KMivMNnCSw1"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I would like to thank the authors for their response. After going through their response and the other reviews, I am retaining my score of 3. \n\nWhile the authors were able to address some of my concerns (1 and 5), my other concerns have not been satisfactorily resolved (2,3 and 4). To summarise:\n\nMain claim: ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
8,
5,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
5,
4,
5
] | [
"lImN6sOerTq",
"PqE__tNnZsV",
"pSu4CQy2X3Z",
"KMivMNnCSw1",
"DZEJrmcFuWd",
"A9-y47uO0MJ",
"C8TUrdt2Kt",
"4rk6FL4RTw5",
"vOYP_K0beJm",
"iclr_2022_ODnCiZujily",
"iclr_2022_ODnCiZujily",
"iclr_2022_ODnCiZujily",
"iclr_2022_ODnCiZujily",
"iclr_2022_ODnCiZujily"
] |
iclr_2022_C4o-EEUx-6 | Flashlight: Enabling Innovation in Tools for Machine Learning | As the computational requirements for machine learning systems and the size and complexity of machine learning frameworks increases, essential framework innovation has become challenging. While computational needs have driven recent compiler, networking, and hardware advancements, utilization of those advancements by machine learning tools is occurring at a slower pace. This is in part due to the difficulties involved in prototyping new computational paradigms with existing frameworks. Large frameworks prioritize machine learning researchers and practitioners as end users and pay comparatively little attention to systems researchers who can push frameworks forward --- we argue that both are equally-important stakeholders. We introduce Flashlight, an open source library built to spur innovation in machine learning tools and systems by prioritizing open, modular, customizable internals and state-of-the-art, research-ready models and training setups across a variety of domains. Flashlight enables systems researchers to rapidly prototype and experiment with novel ideas in machine learning computation and has low overhead, competing with and often outperforming other popular machine learning frameworks. We see Flashlight as a tool enabling research that can benefit widely-used libraries downstream and bring machine learning and systems researchers closer together. | Reject | This paper describes Flashlight, a tool for ML researchers with specific design considerations for conducting systems research. The need for such a tool is significant, and recent advances in this topic have been relatively slow, so this research is timely and important.
Reviewers are positive about the importance of the problem and the nice design of Flashlight. It seems the tool has been used by researchers with positive feedback. At the time of the original submission, reviewers expressed some concerns about the novelty and the weak arguments for convincingly showing the advantages over other similar tools.
The authors provided nice replies including specific case studies, but with the short time period to reassess the proposed changes and additions, some reviewers remained hesitant, and thus this paper cannot be accepted at this time. I strongly encourage the authors to incorporate all of the proposed revisions and resubmit to a future venue. | train | [
"YG9BXdta7q9",
"gHlXAZDsYv_",
"66fmLPAf_1I",
"XiTsQcrXn7O",
"_nj4yFVl1F8",
"vYSbaAyUMMp",
"MTnRIh3zOr4",
"vd1fC5pLEEG",
"ZHHPEkgDLog",
"P5ZwM9emtep"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" ## Additional Details on Case Studies\n\nWe are providing additional technical details about the [aforementioned case studies](https://openreview.net/forum?id=C4o-EEUx-6¬eId=MTnRIh3zOr4) per a reviewer's suggestion and interest. These may be relevant to other reviewers per prior feedback.\n\n### More Details: ... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
5,
3,
5
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
3,
4,
2
] | [
"iclr_2022_C4o-EEUx-6",
"iclr_2022_C4o-EEUx-6",
"gHlXAZDsYv_",
"P5ZwM9emtep",
"ZHHPEkgDLog",
"vd1fC5pLEEG",
"iclr_2022_C4o-EEUx-6",
"iclr_2022_C4o-EEUx-6",
"iclr_2022_C4o-EEUx-6",
"iclr_2022_C4o-EEUx-6"
] |
iclr_2022_14kbUbOaZUc | Metric Learning on Temporal Graphs via Few-Shot Examples | Graph metric learning methods aim to learn the distance metric over graphs such that similar graphs are closer and dissimilar graphs are farther apart. This is of critical importance in many graph classification applications such as drug discovery and epidemics categorization. In many real-world applications, the graphs are typically evolving over time; labeling graph data is usually expensive and also requires background knowledge. However, state-of-the-art graph metric learning techniques consider the input graph as static, and largely ignore the intrinsic dynamics of temporal graphs; furthermore, most of these techniques require abundant labeled examples for training in the representation learning process. To address the two aforementioned problems, we wish to learn a distance metric over only a few temporal graphs; this metric should not only help accurately categorize seen temporal graphs but also adapt smoothly to unseen temporal graphs. In this paper, we first propose the streaming-snapshot model to describe temporal graphs on different time scales. Then we propose the MetaTag framework: 1) to learn the metric over a limited number of streaming-snapshot modeled temporal graphs, and 2) to adapt the learned metric to unseen temporal graphs via a few examples. Finally, we demonstrate the performance of MetaTag in comparison with state-of-the-art algorithms for temporal graph classification problems. | Reject | The paper proposes a new method for representation learning on time-varying graphs which uses a streaming-snapshot model to describe graphs on different time scales and meta-learning for adaptation to unseen graphs. Reviewers highlighted as strengths that the paper proposes an interesting approach for modeling temporal dynamics in graphs which is of interest to the ICLR community. However, reviewers also raised concerns regarding the novelty of contributions, the empirical evaluation (also with regard to related work), as well as the clarity of presentation. In addition, there was no author response. All reviewers and the AC therefore agree that the paper is not yet ready for publication at ICLR at this point. | train | [
"RHdfey63AO",
"1xTIjToPhAS",
"R81eL7ylOkE",
"NcjA0kmjtdA"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper introduces a methodology of graph learning for dynamic graphs, where the dynamics are encoded in the representation to obtain improved results on graph classification tasks. This framework includes a temporal graph encoder that uses attention mechanisms to generate representations, as well as a meta-lea... | [
5,
3,
3,
5
] | [
3,
4,
3,
3
] | [
"iclr_2022_14kbUbOaZUc",
"iclr_2022_14kbUbOaZUc",
"iclr_2022_14kbUbOaZUc",
"iclr_2022_14kbUbOaZUc"
] |
iclr_2022_H6mR1eaBP1l | Training sequence labeling models using prior knowledge | Sequence labeling tasks (part-of-speech tagging, named entity recognition) are among the most common in NLP. At different times, the following architectures were used to solve them: CRF, BiLSTM, BERT (in chronological order). The combined model BiLSTM/BERT + CRF, where the CRF is the topmost layer, however, performs better than BiLSTM/BERT alone.
It is common that only a small amount of labeled data is available for the task. Hence it is difficult to train a model with good generalizing capability, so one has to resort to semi-supervised learning approaches. One of them is called pseudo-labeling, the gist of which is augmenting the training samples with unlabeled data; however, it cannot be used alongside the CRF layer, as this layer models the probability distribution of the entire sequence, not of individual tokens.
In this paper, we propose an alternative to the CRF layer, the Prior Knowledge Layer (PKL), that allows one to obtain probability distributions for each token and also takes into account prior knowledge concerning the structure of label sequences. | Reject | This paper presents an approach for using prior knowledge to constrain transitions between consecutive time steps, aiming to replace conditional random fields for sequence tagging in sequence labeling. However, the paper seems incomplete, with no experimental results and analysis to validate the proposed ideas. | val | [
"Tpp9MBlId-n",
"GkbFXZ74_9c",
"gJZG2O1eACp",
"3mUNvbjRU_3"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors try to propose a `Prior Knowledge Layer' as an alternative to the CRF layer for the standard sequence labeling framework in NLP. This is not a paper, instead technically more like a toy report. The architecture of the entire manuscript is way incomplete, without enough technique detail, not to mention ... | [
1,
1,
1,
1
] | [
5,
4,
5,
5
] | [
"iclr_2022_H6mR1eaBP1l",
"iclr_2022_H6mR1eaBP1l",
"iclr_2022_H6mR1eaBP1l",
"iclr_2022_H6mR1eaBP1l"
] |
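The PKL abstract above gives no equations, but the core idea of constraining label sequences with prior knowledge while keeping per-token distributions (so pseudo-labeling remains possible) can be illustrated with a transition mask. This is a hedged sketch of one plausible reading, assuming BIO-style tags and a hand-written validity matrix; it is not the paper's actual layer:

```python
import torch

tags = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC"]

def make_transition_mask(tags):
    """1 where tag j may follow tag i under BIO rules, 0 otherwise."""
    n = len(tags)
    mask = torch.ones(n, n)
    for i, prev in enumerate(tags):
        for j, cur in enumerate(tags):
            # I-X may only follow B-X or I-X of the same entity type.
            if cur.startswith("I-") and prev[2:] != cur[2:]:
                mask[i, j] = 0.0
    return mask

def masked_token_probs(logits, mask):
    """Suppress transitions forbidden by prior knowledge, token by token.

    logits: (T, n_tags) per-token scores from BiLSTM/BERT. Unlike a CRF,
    each output row is still a per-token distribution.
    """
    out = [logits[0]]
    for t in range(1, logits.shape[0]):
        prev = out[-1].argmax()              # greedy previous-tag choice
        blocked = mask[prev] == 0
        out.append(logits[t].masked_fill(blocked, float("-inf")))
    return torch.stack(out).softmax(dim=-1)

probs = masked_token_probs(torch.randn(6, len(tags)), make_transition_mask(tags))
```

The point of such a layer, per the abstract's motivation, is that structural constraints survive while the per-token distributions needed for pseudo-labeling are retained.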
iclr_2022_eOdSD0B5TE | On the Implicit Biases of Architecture & Gradient Descent | Do neural networks generalise because of bias in the functions returned by gradient descent, or bias already present in the network architecture? $\textit{Why not both?}$ This paper finds that while typical networks that fit the training data already generalise fairly well, gradient descent can further improve generalisation by selecting networks with a large margin. This conclusion is based on a careful study of the behaviour of infinite width networks trained by Bayesian inference and finite width networks trained by gradient descent. To measure the implicit bias of architecture, new technical tools are developed to both $\textit{analytically bound}$ and $\textit{consistently estimate}$ the average test error of the neural network--Gaussian process (NNGP) posterior. This error is found to be already better than chance, corroborating the findings of Valle-Pérez et al. (2019) and underscoring the importance of architecture. Going beyond this result, this paper finds that test performance can be substantially improved by selecting a function with much larger margin than is typical under the NNGP posterior. This highlights a curious fact: $\textit{minimum a posteriori}$ functions can generalise best, and gradient descent can select for those functions. In summary, new technical tools suggest a nuanced portrait of generalisation involving both the implicit biases of architecture and gradient descent. | Reject | This is an interesting paper aiming to further advance the knowledge of implicit bias in deep networks. Unfortunately, the reviewers had many concerns about technical details and presentation. One concern was about section 5, on margins and implicit bias. Oddly, this section 5 does not cite the extensive literature on margin maximization, implicit bias, and implicit regularization in deep learning (despite a mention of Soudry et al earlier on), whereas the choice of paper title and also this section title would suggest an advance over this work, or at least reference to this work (which goes far beyond that one paper); instead, that section left me a bit confused about the suggested bias and its implications on generalization. As such, I suggest the authors spend more time on their submission, aiming to further separate their work from prior work, and address the comments of reviewers. | train | [
"seicv4mzYjy",
"5cBaAd1Fcy6",
"0lvO4fHsjvT",
"DUn8aea4n4M",
"OnGm4pQOBfR",
"FWduua5Q1j4",
"AqbgDY_SBy5",
"Fw3cGYwfFrm",
"moPv9S1o_sx",
"GzoWuo9Ikpb",
"jhhBbaY1nFQ",
"kSsAKQCZeUb",
"v3fjj3jkwWL",
"BCO8-IYEplO",
"E8jAxSVYNVz",
"wFvQg8Pn6bQ",
"3i4D7WcTsZq",
"0pacoWrNQqG",
"S2XdThR02... | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_re... | [
" Dear Reviewer WQvV,\n\nThank you for updating your review. We are grateful for the additional feedback.\n\nWe are posting some replies here for completeness:\n- We agree with you that the language surrounding the \"implicit bias of gradient descent\" might be misleading to some, as it might lead some readers to e... | [
-1,
-1,
-1,
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
-1,
-1,
-1,
3,
-1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"AqbgDY_SBy5",
"Fw3cGYwfFrm",
"OnGm4pQOBfR",
"iclr_2022_eOdSD0B5TE",
"moPv9S1o_sx",
"iclr_2022_eOdSD0B5TE",
"E8jAxSVYNVz",
"wFvQg8Pn6bQ",
"GzoWuo9Ikpb",
"jhhBbaY1nFQ",
"kSsAKQCZeUb",
"BCO8-IYEplO",
"iclr_2022_eOdSD0B5TE",
"0pacoWrNQqG",
"FWduua5Q1j4",
"S2XdThR02sC",
"DUn8aea4n4M",
... |
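Since the record above turns on properties of the NNGP posterior, the standard Gaussian-process regression formulas may help orient the reader. For a kernel $K$ (here the NNGP kernel induced by the architecture), training inputs $X$ with targets $y$, observation noise $\sigma^2$, and a test point $x_*$:

```latex
\mu(x_*) = K(x_*, X)\,\bigl[K(X, X) + \sigma^2 I\bigr]^{-1} y, \qquad
\sigma^2(x_*) = K(x_*, x_*) - K(x_*, X)\,\bigl[K(X, X) + \sigma^2 I\bigr]^{-1} K(X, x_*).
```

The paper's stated contribution is to bound and estimate the average test error of functions drawn from this posterior, isolating how much generalisation the architecture supplies before gradient descent's margin-seeking bias is added on top.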
iclr_2022_UgNQM-LcVpN | A Modulation Layer to Increase Neural Network Robustness Against Data Quality Issues | Data quality is a common problem in machine learning, especially in high-stakes settings such as healthcare. Missing data affects accuracy, calibration, and feature attribution in complex patterns. Developers often train models on carefully curated datasets to minimize missing data bias; however, this reduces the usability of such models in production environments, such as real-time healthcare records. Making machine learning models robust to missing data is therefore crucial for practical application. While some classifiers naturally handle missing data, others, such as deep neural networks, are not designed for unknown values. We propose a novel neural network modification to mitigate the impacts of missing data. The approach is inspired by neuromodulation that is performed by biological neural networks. Our proposal replaces the fixed weights of a fully-connected layer with a function of an additional input (reliability score) at each input, mimicking the ability of the cortex to up- and down-weight inputs based on the presence of other data. The modulation function is jointly learned with the main task using a multi-layer perceptron. We tested our modulating fully connected layer on multiple classification, regression, and imputation problems, and it either improved performance or matched the performance of conventional neural network architectures that concatenate reliability to the inputs. Models with modulating layers were more robust against degradation of data quality when additional missingness was introduced at evaluation time. These results suggest that explicitly accounting for reduced information quality with a modulating fully connected layer can enable the deployment of artificial intelligence systems in real-time settings.
| Reject | The paper proposes a modulation layer to address the problem of missing data.
The results do not show that the approach outperforms existing state-of-the-art approaches.
The results do not demonstrate that the proposed modulation layer is an improvement over an attention layer.
Many smaller errors (spelling, etc.) were found in the manuscript.
Experimental details are insufficient to make the results reproducible.
The authors have not provided a response to the reviewers. | train | [
"qNlaQi193wV",
"84SYmJ-GBeM",
"6Mz_0LqZeBy",
"VU70Z6SWPZo"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This submission contributes an approach to handle missing values in Neural networks by modulating inside the architecture the missing values by factors which decrease the role of the feature in the architecture. The approach is benchmarked empirically, but does not appear to outperform consistently other approach... | [
3,
3,
3,
3
] | [
5,
4,
3,
5
] | [
"iclr_2022_UgNQM-LcVpN",
"iclr_2022_UgNQM-LcVpN",
"iclr_2022_UgNQM-LcVpN",
"iclr_2022_UgNQM-LcVpN"
] |
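To make the modulating fully-connected layer described in the record above concrete, here is a minimal PyTorch sketch. It is not the authors' implementation: the gate MLP's size, the sigmoid output, and the multiplicative gating form are illustrative assumptions.

```python
# A minimal sketch (not the authors' code) of a "modulating" fully connected
# layer: a small MLP consumes a per-feature reliability score and produces a
# multiplicative gate that up- or down-weights each input before the linear map.
import torch
import torch.nn as nn

class ModulatingLinear(nn.Module):
    def __init__(self, in_features: int, out_features: int, hidden: int = 16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        # The gate architecture below is an assumption, not the paper's design.
        self.gate = nn.Sequential(
            nn.Linear(in_features, hidden), nn.ReLU(),
            nn.Linear(hidden, in_features), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor, reliability: torch.Tensor) -> torch.Tensor:
        # Down-weight unreliable inputs before applying the shared weights.
        return self.base(x * self.gate(reliability))

layer = ModulatingLinear(8, 4)
x = torch.randn(2, 8)
r = torch.bernoulli(torch.full((2, 8), 0.8))  # 1 = observed, 0 = missing
print(layer(x, r).shape)  # torch.Size([2, 4])
```

Since the gate is learned jointly with the main task, the same layer can learn different compensation patterns for different missingness structures.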
iclr_2022_V7eSbSAz-O8 | Benchmarking Machine Learning Robustness in Covid-19 Spike Sequence Classification | The rapid spread of the COVID-19 pandemic has resulted in an unprecedented amount of sequence data of the SARS-CoV-2 viral genome --- millions of sequences and counting. This amount of data, while being orders of magnitude beyond the capacity of traditional approaches to understanding the diversity, dynamics and evolution of viruses, is nonetheless a rich resource for machine learning (ML) and deep learning (DL) approaches as alternatives for extracting such important information from these data. It is hence of utmost importance to design a framework for testing and benchmarking the robustness of these ML and DL approaches.
This paper is the first (to our knowledge) to explore such a framework. In this paper, we introduce several ways to perturb SARS-CoV-2 spike protein sequences in ways that mimic the error profiles of common sequencing platforms such as Illumina and PacBio. We show, from experiments on a wide array of ML approaches ranging from naive Bayes to logistic regression, that DL approaches are more robust (and accurate) under such adversarial attacks on the input sequences, while $k$-mer based feature vector representations are more robust than the baseline one-hot embedding. Our benchmarking framework may help developers of further ML and DL techniques to properly assess their approaches towards understanding the behaviour of the SARS-CoV-2 virus, or towards avoiding possible future pandemics. | Reject | This paper presents a framework to test the accuracy and robustness of different machine learning algorithms in classifying COVID-19 spike sequences. After reading the paper and taking into consideration the reviewing process, here are my comments:
- The work is aligned with efforts to understand the COVID-19 pandemic.
- Many concepts are not novel.
- Sequencing errors are not modeled in a realistic way.
- The benchmark is very limited and no nonlinear machine learning approaches are presented.
- Many typos are present.
From the above, the paper is not suitable for acceptance at ICLR. | train | [
"8lktPhG7UMP",
"uKV-d3g-q6D",
"ecTbnF3Qf5o",
"7ee9-fuhPhF",
"F21HWw6SRUN",
"e-CIarTqfa6",
"SA1YDqXcUwj",
"ivzCuHH1t5h",
"DavCyx93Jr"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for responding to my comments and questions. However, I still believe there would be major changes required to bring the paper to the appropriate level. So, I maintain my ranking.",
" Thank you for providing responses to my comments. Based on all the reviews and the rebuttal, the paper in its current ... | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
1,
1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5
] | [
"7ee9-fuhPhF",
"ecTbnF3Qf5o",
"DavCyx93Jr",
"F21HWw6SRUN",
"ivzCuHH1t5h",
"SA1YDqXcUwj",
"iclr_2022_V7eSbSAz-O8",
"iclr_2022_V7eSbSAz-O8",
"iclr_2022_V7eSbSAz-O8"
] |
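The $k$-mer feature vector representation credited with robustness in the record above can be sketched in a few lines. The amino-acid alphabet, the choice of k, and the toy sequence below are illustrative assumptions, not the paper's exact settings.

```python
# A small sketch of k-mer count featurization for protein sequences: count each
# length-k substring and lay the counts out in a fixed alphabet order.
from collections import Counter
from itertools import product

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def kmer_vector(seq: str, k: int = 3) -> list:
    """Return a fixed-length count vector over all |alphabet|^k possible k-mers."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    return [counts[''.join(km)] for km in product(AMINO_ACIDS, repeat=k)]

vec = kmer_vector("MFVFLVLLPLVSSQ", k=2)   # toy spike-like fragment
print(len(vec), sum(vec))                   # 400 possible 2-mers, 13 observed
```

Because a single substitution only perturbs the k counts that overlap it, such vectors degrade gracefully under the sequencing-error-style perturbations the benchmark applies.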
iclr_2022_qZNw8Ao_BIC | Understanding and Improving Robustness of Vision Transformers through Patch-based Negative Augmentation | We investigate the robustness of vision transformers (ViTs) through the lens of their special patch-based architectural structure, i.e., they process an image as a sequence of image patches. We find that ViTs are surprisingly insensitive to patch-based transformations, even when the transformation largely destroys the original semantics and makes the image unrecognizable by humans. This indicates that ViTs heavily use features that survived such transformations but are generally not indicative of the semantic class to humans. Further investigations show that these features are useful but non-robust, as ViTs trained on them can achieve high in-distribution accuracy, but break down under distribution shifts. From this understanding, we ask: can training the model to rely less on these features improve ViT robustness and out-of-distribution performance? We use the images transformed with our patch-based operations as negatively augmented views and offer losses to regularize the training away from using non-robust features. This is a complementary view to existing research that mostly focuses on augmenting inputs with semantic-preserving transformations to enforce models' invariance. We show that patch-based negative augmentation consistently improves the robustness of ViTs across a wide set of ImageNet-based robustness benchmarks. Furthermore, we find that our patch-based negative augmentation is complementary to traditional (positive) data augmentation, and together they boost performance further. | Reject | The reviews are split. The most significant concern seems to be the narrow focus of the paper: insensitivity of a very specific architecture, ViT, to some patch-based transformations of the image. The paper aims to "understand and improve" the behavior of ViTs in this respect, but as the reviewers point out, the understanding (what exactly is the mechanism for this insensitivity) is lacking. Furthermore, there is a good reason to believe that other transformer architectures might not have a similar behavior. Ultimately both the lack of depth and the lack of breadth of the investigation suggest that the impact may be limited. I think this is not a good fit for ICLR. | train | [
"NZORGQDgWqZ",
"QUGxoCkPCM",
"o5UUIfDkOf",
"Xtx6bY8Zbz_",
"lAG773EwlLt",
"n6XCeuGbVQz",
"ZMbZdphDx8d",
"MDi8vooY0r",
"Qw7Oe1zZV_Y",
"xyOx4T53AMj",
"C9ABl-lo1tT"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear AC, dear reviewers,\n\nThe authors have addressed my concerns in a satisfactory manner:\n\n1. They have provided a study on standard CNNs greater vulnerability to patch-based image perturbation in comparison to ViTs. This makes the interpretation of ViTs being insensitive to patch-based losses less subjectiv... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
8,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
4
] | [
"iclr_2022_qZNw8Ao_BIC",
"MDi8vooY0r",
"C9ABl-lo1tT",
"Qw7Oe1zZV_Y",
"Qw7Oe1zZV_Y",
"xyOx4T53AMj",
"MDi8vooY0r",
"iclr_2022_qZNw8Ao_BIC",
"iclr_2022_qZNw8Ao_BIC",
"iclr_2022_qZNw8Ao_BIC",
"iclr_2022_qZNw8Ao_BIC"
] |
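One plausible patch-based operation of the kind the record above describes is a random shuffle of non-overlapping patches, which destroys global semantics while preserving local statistics. The sketch below follows that idea; the exact transformation set used in the paper may differ.

```python
# A sketch of a patch-shuffle "negative" augmentation: split an image into
# non-overlapping patches, randomly permute them, and reassemble the image.
import numpy as np

def patch_shuffle(img: np.ndarray, patch: int, rng: np.random.Generator) -> np.ndarray:
    """img: (H, W, C) array with H and W divisible by `patch`."""
    h, w, c = img.shape
    grid = img.reshape(h // patch, patch, w // patch, patch, c)
    grid = grid.transpose(0, 2, 1, 3, 4).reshape(-1, patch, patch, c)
    grid = grid[rng.permutation(len(grid))]      # destroy the global layout
    grid = grid.reshape(h // patch, w // patch, patch, patch, c)
    return grid.transpose(0, 2, 1, 3, 4).reshape(h, w, c)

rng = np.random.default_rng(0)
out = patch_shuffle(np.arange(16, dtype=float).reshape(4, 4, 1), patch=2, rng=rng)
print(out.shape)  # (4, 4, 1)
```

Images transformed this way can then serve as negatively augmented views whose predictions the training loss pushes away from the original label.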
iclr_2022_mhv2gWm3sf | $f$-Divergence Thermodynamic Variational Objective: a Deformed Geometry Perspective | In this paper, we propose an $f$-divergence Thermodynamic Variational Objective ($f$-TVO). $f$-TVO generalizes the Thermodynamic Variational Objective (TVO) by replacing the Kullback–Leibler (KL) divergence with an arbitrary differentiable $f$-divergence. In particular, $f$-TVO approximates the dual function of the model evidence, $f^*(p(x))$, rather than the log model evidence $\log p(x)$ in TVO. $f$-TVO is derived from a deformed $\chi$-geometry perspective. By defining the $\chi$-exponential family, we are able to integrate the $f$-TVO along the $\chi$-path, which is the deformed geodesic between the variational posterior and the true posterior distribution. The optimization scheme of $f$-TVO includes the reparameterization trick and Monte Carlo approximation. Experiments on VAEs and Bayesian neural networks show that the proposed $f$-TVO performs better than the corresponding baseline $f$-divergence variational inference. | Reject | The four reviewers believed the paper was below the threshold for acceptance to ICLR. They raised concerns with the experimental evaluation and thought that the paper could benefit from another edit to help with clarity. | train | [
"_4X1CjCyp6G",
"hzivIK3QmE7",
"1b_4Trb0TkW",
"n-axOHbjzBh",
"lpg4ZUGV3wR",
"DibGZ44fZD",
"40hEkrDtAGv",
"C8BnuLZMoUw"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a novel f-divergence Thermodynamic Variational Objective (f-TVO) framework for VI, that extends the TVO towards, a more general, family of f-divergences. The authors propose to use a $\\chi$-deformed exponential distribution, which casts the f-TVO objective as integral along the $\\chi$-path bet... | [
5,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
3,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"iclr_2022_mhv2gWm3sf",
"C8BnuLZMoUw",
"40hEkrDtAGv",
"DibGZ44fZD",
"_4X1CjCyp6G",
"iclr_2022_mhv2gWm3sf",
"iclr_2022_mhv2gWm3sf",
"iclr_2022_mhv2gWm3sf"
] |
iclr_2022_y1faDxZ_-0a | SSFL: Tackling Label Deficiency in Federated Learning via Personalized Self-Supervision | Federated Learning (FL) is transforming the ML training ecosystem from a centralized over-the-cloud setting to distributed training over edge devices in order to strengthen data privacy, reduce data migration costs, and break regulatory restrictions. An essential, but rarely studied, challenge in FL is label deficiency at the edge. This problem is even more pronounced in FL, compared to centralized training, due to the fact that FL users are often reluctant to label their private data and edge devices do not provide an ideal interface to assist with annotation. Addressing label deficiency is also further complicated in FL, due to the heterogeneous nature of the data at edge devices and the need for developing personalized models for each user. We propose a self-supervised and personalized federated learning framework, named SSFL, and a series of algorithms under this framework which work towards addressing these challenges. First, under the SSFL framework, we analyze the compatibility of various centralized self-supervised learning methods in FL setting and demonstrate that SimSiam networks performs the best with the standard FedAvg algorithm. Moreover, to address the data heterogeneity at the edge devices in this framework, we have innovated a series of algorithms that broaden existing supervised personalization algorithms into the setting of self-supervised learning including perFedAvg, Ditto, and local fine-tuning, among others. We further propose a novel personalized federated self-supervised learning algorithm, Per-SSFL, which balances personalization and consensus by carefully regulating the distance between the local and global representations of data. To provide a comprehensive comparative analysis of all proposed algorithms, we also develop a distributed training system and related evaluation protocol for SSFL. Using this training system, we conduct experiments on a synthetic non-I.I.D. dataset based on CIFAR-10, and an intrinsically non-I.I.D. dataset GLD-23K. Our findings show that the gap of evaluation accuracy between supervised learning and unsupervised learning in FL is both small and reasonable. The performance comparison indicates that representation regularization-based personalization method is able to outperform other variants. Ablation studies on SSFL are also conducted to understand the role of batch size, non-I.I.D.ness, and the evaluation protocol. | Reject | This manuscript proposes an extension of semi-supervised learning to the federated setting. The contributions include a thorough evaluation of performance and some method extensions.
There are four reviewers. One reviewer points out a name leakage issue in the code that was missed and suggests desk rejection. The area chair has chosen not to desk-reject the paper. Three other reviewers agree that the manuscript addresses an interesting and timely issue -- indeed, label acquisition is a significant issue in federated learning. Three reviewers agree to reject the paper -- raising concerns about novelty compared to existing methods, some details of the evaluation, and some lack of clarity. The authors provide a good rebuttal addressing many of these issues. However, the reviewers are unconvinced that the method is sufficiently novel after reviews and discussion. Authors are encouraged to address the highlighted concerns for a future submission of this work. | train | [
"r49D1VmN_Od",
"Q9SLFxWrGfK",
"U0IZ83aclx7",
"KWQETYyde1",
"Y2VoUpLkhV1",
"Zj87BXq__QR",
"EWBjWta06kW",
"Km-JKYd-l5-"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" * Insufficient quantitative assessment of SimSiam approach against other self-supervised approaches.\n\nResponse: \"Insufficiency\" is such a general vocabulary. Can you provide specific suggestions? In which aspects, how should it be evaluated?\n\n* It is unclear the feature representation can be used in practic... | [
-1,
-1,
-1,
-1,
5,
3,
1,
3
] | [
-1,
-1,
-1,
-1,
3,
4,
5,
3
] | [
"Km-JKYd-l5-",
"EWBjWta06kW",
"Zj87BXq__QR",
"Y2VoUpLkhV1",
"iclr_2022_y1faDxZ_-0a",
"iclr_2022_y1faDxZ_-0a",
"iclr_2022_y1faDxZ_-0a",
"iclr_2022_y1faDxZ_-0a"
] |
iclr_2022_DHLngM1mR3W | AAVAE: Augmentation-Augmented Variational Autoencoders | Recent methods for self-supervised learning can be grouped into two paradigms: contrastive and non-contrastive approaches. Their success can largely be attributed to data augmentation pipelines which generate multiple views of a single input that preserve the underlying semantics. In this work, we introduce augmentation-augmented variational autoencoders (AAVAE), yet another alternative to self-supervised learning, based on autoencoding. We derive AAVAE starting from the conventional variational autoencoder (VAE), by replacing the KL divergence regularization, which is agnostic to the input domain, with data augmentations that explicitly encourage the internal representations to encode domain-specific invariances and equivariances. We empirically evaluate the proposed AAVAE on image classification, similar to how recent contrastive and non-contrastive learning algorithms have been evaluated. Our experiments confirm the effectiveness of data augmentation as a replacement for KL divergence regularization. The AAVAE outperforms the VAE by 30% on CIFAR-10, 40% on STL-10 and 45% on Imagenet. On CIFAR-10 and STL-10, the results for AAVAE are largely comparable to the state-of-the-art algorithms for self-supervised learning. | Reject | This paper presents an augmentation-based training of autoencoders with stochastic latent space. The proposed method is examined on the representation learning task on several image datasets. While the reviewers found the submission interesting, simple, and easy to implement, they also raised serious concerns around the novelty of the proposed method and the impact of removing the KL term (which removes the generative interpretability of the model). Unfortunately, the experiments do not provide a convincing utility of the model compared to more popular representation learning methods (i.e., contrastive and non-contrastive methods). Given these concerns, the paper is not ready for presentation at ICLR. | train | [
"-RDRGykui41",
"T9uhiMrviN",
"8q563Oxt9Mu",
"AVm1ktOxxD",
"vAd4PpzYzh9",
"abNXI4xqsTS",
"7MW1au4CCrD",
"O2kxyd46iL",
"oh6LZyZVbF0",
"luUwKH8Apev",
"W7vAAxZlwFh"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their time to respond back to my questions and for changing the paper where necessary. \n\nHowever, I remain somewhat unsatisfied with one of the central claims of that paper: adds structured domain specific information, rather than helping with overfitting or performing better function ap... | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
3,
5,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5,
3,
4
] | [
"AVm1ktOxxD",
"7MW1au4CCrD",
"oh6LZyZVbF0",
"W7vAAxZlwFh",
"O2kxyd46iL",
"luUwKH8Apev",
"iclr_2022_DHLngM1mR3W",
"iclr_2022_DHLngM1mR3W",
"iclr_2022_DHLngM1mR3W",
"iclr_2022_DHLngM1mR3W",
"iclr_2022_DHLngM1mR3W"
] |
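The AAVAE training signal described in the record above (reconstruct the clean input from an augmented view, with no KL term) can be written out in a few lines. The sketch below is conceptual: the deterministic encoder, tiny networks, and Gaussian-noise augmentation are simplifying stand-ins, not the paper's models or augmentation pipeline.

```python
# A conceptual sketch of autoencoding with augmentation replacing the KL term:
# encode an augmented view of x and reconstruct the *original* x.
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Flatten(), nn.Linear(784, 64))   # placeholder encoder
dec = nn.Sequential(nn.Linear(64, 784))                 # placeholder decoder

def augment(x: torch.Tensor) -> torch.Tensor:
    # Stand-in for domain-specific augmentations (crops, flips, color jitter, ...).
    return x + 0.1 * torch.randn_like(x)

x = torch.rand(16, 1, 28, 28)
z = enc(augment(x))                        # latent code of the augmented view
recon = dec(z).view_as(x)
loss = nn.functional.mse_loss(recon, x)    # reconstruct the clean input
loss.backward()
```

The regularization thus comes entirely from the requirement that all augmented views map to codes that decode back to the same clean image.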
iclr_2022_SjGRJ4vSZlP | Near-Optimal Algorithms for Autonomous Exploration and Multi-Goal Stochastic Shortest Path | We revisit the incremental autonomous exploration problem proposed by Lim and Auer (2012). In this setting, the agent aims to learn a set of near-optimal goal-conditioned policies to reach the $L$-controllable states: states that are incrementally reachable from an initial state $s_0$ within $L$ steps in expectation. We introduce three new algorithms with stronger sample complexity bounds than existing ones. Furthermore, we also prove the first lower bound for the autonomous exploration problem. In particular, the lower bound implies that one of our proposed algorithms, Value-Aware Autonomous Exploration, is nearly minimax-optimal when the number of $L$-controllable states grows polynomially with respect to $L$. Key in our algorithm design is a connection between autonomous exploration and multi-goal stochastic shortest path, a new problem that naturally generalizes the classical stochastic shortest path problem. This new problem and its connection to autonomous exploration can be of independent interest. | Reject | The paper studies an interesting problem, but as pointed out by reviewers, the presentation of the problem statement and contributions need to be improved. | val | [
"9gnRujGqxWr",
"CNIAuyolDyF",
"F6RMPAzBehU",
"k2m3fO0gGD",
"GfaCtqB30A0",
"kC819ghxGz",
"H3WN2KyYu_x",
"AD8B_OBsATq",
"k4CIaT1vnL4",
"lS937lOV39w",
"gouOTYkTP3b",
"vZvcF4Yi5l9",
"U2h-Kn4I1V"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies the autonomous exploration problem and multi-goal SSP problem, and proposes three algorithms with improved cumulative costs on 4 learning objectives. The authors also construct a hard instance and show the lower bound of the cumulative cost. Their bounds are optimal in terms of $L, A, \\epsilon$... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
3
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"iclr_2022_SjGRJ4vSZlP",
"lS937lOV39w",
"k4CIaT1vnL4",
"kC819ghxGz",
"U2h-Kn4I1V",
"gouOTYkTP3b",
"U2h-Kn4I1V",
"gouOTYkTP3b",
"vZvcF4Yi5l9",
"9gnRujGqxWr",
"iclr_2022_SjGRJ4vSZlP",
"iclr_2022_SjGRJ4vSZlP",
"iclr_2022_SjGRJ4vSZlP"
] |
iclr_2022_bidTZROu2y | Physics Informed Machine Learning of SPH: Machine Learning Lagrangian Turbulence | Smoothed particle hydrodynamics (SPH) is a mesh-free Lagrangian method for obtaining approximate numerical solutions of the equations of fluid dynamics, which has been widely applied to weakly- and strongly compressible turbulence in astrophysics and engineering applications. We present a learnable hierarchy of parameterized and "physics-explainable" SPH informed fluid simulators using both physics-based parameters and Neural Networks as universal function approximators. Our learning algorithm develops a mixed-mode approach, mixing forward and reverse mode automatic differentiation with forward and adjoint based sensitivity analyses to efficiently perform gradient-based optimization. We show that our physics informed learning method is capable of: (a) solving inverse problems over the physically interpretable parameter space, as well as over the space of Neural Network parameters; (b) learning Lagrangian statistics of turbulence; (c) combining Lagrangian trajectory based, probabilistic, and Eulerian field based loss functions; and (d) extrapolating beyond training sets into more complex regimes of interest. Furthermore, our hierarchy of models gradually introduces more physical structure, which we show improves interpretability, generalizability (over larger ranges of time scales and Reynolds numbers), and preservation of physical symmetries, while requiring less training data. | Reject | The paper explores and discusses the effects of incorporating prior domain knowledge for modeling fluid dynamics with neural networks, with a focus on smoothed particle hydrodynamics. Reviewers agree that the contributions are modest, and that they are not well presented with respect to issues of efficiency, scalability, robustness, etc. More work needs to be done to make it useful to the community. Many directions for improving the paper were given in the reviewer comments. | train | [
"I6lE47rrQ7K",
"ZYE3s_5L5C5",
"-z9LugCg6Mh",
"fDzP31R5S9i",
"FraZIy7cFkJ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper introduces a hierarchy of parameterized models based on one of the Lagrangian simulation methods, Smoothed particle hydrodynamics (SPH). \nSeveral models with different levels of physics information embedded are proposed. \nWhen training these models, sensitivity analysis and automatic differentiation a... | [
5,
3,
3,
6,
3
] | [
3,
3,
4,
3,
4
] | [
"iclr_2022_bidTZROu2y",
"iclr_2022_bidTZROu2y",
"iclr_2022_bidTZROu2y",
"iclr_2022_bidTZROu2y",
"iclr_2022_bidTZROu2y"
] |
iclr_2022_BZbUtxOy3R | Character Generation through Self-Supervised Vectorization | The prevalent approach in self-supervised image generation is to operate on pixel level representations. While this approach can produce high quality images, it cannot benefit from the simplicity and innate quality of vectorization. Here we present a drawing agent that operates on stroke-level representation of images. At each time step, the agent first assesses the current canvas and decides whether to stop or keep drawing. When a `draw’ decision is made, the agent outputs a program indicating the stroke to be drawn. As a result, it produces a final raster image by drawing the strokes on a canvas, using a minimal number of strokes and dynamically deciding when to stop. We train our agent through reinforcement learning on MNIST and Omniglot datasets for unconditional generation and parsing (reconstruction) tasks. We utilize our parsing agent for exemplar generation and type conditioned concept generation in Omniglot challenge without any further training. We present successful results on all three generation tasks and the parsing task. Crucially, we do not need any stroke-level or vector supervision; we only use raster images for training. Code will be made available upon acceptance.
| Reject | The paper studies the problem of character generation using reinforcement learning for generation/parsing. All the reviewers recommended rejection due to insufficient experimental investigation to support the ideas. The authors did not provide a rebuttal; hence, the reviewers' opinions remain the same. The AC agrees with the reviewers and believes that the paper is not yet ready for publication. | train | [
"wkr6ikN7MxJ",
"Ed5IhLuxBjn",
"-NFJeFLnZ1"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"In this paper, the authors present a method for character generation using the self-supervised technology. Different from existing approaches, it can leverage the benefits from higher level abstration (due to the used strokes) and get rid of stroke supervision. In this way, high-quality images are generated while ... | [
5,
5,
3
] | [
3,
4,
4
] | [
"iclr_2022_BZbUtxOy3R",
"iclr_2022_BZbUtxOy3R",
"iclr_2022_BZbUtxOy3R"
] |
iclr_2022_CO0ZuH5vaMu | Using Document Similarity Methods to create Parallel Datasets for Code Translation | Translating source code from one programming language to another is a critical, time-consuming task in modernizing legacy applications and codebases. Recent work in this space has drawn inspiration from the software naturalness hypothesis by applying natural language processing techniques towards automating the code translation task. However, due to the paucity of parallel data in this domain, supervised techniques have only been applied to a limited set of popular programming languages. To bypass this limitation, unsupervised neural machine translation techniques have been proposed to learn code translation using only monolingual corpora. In this work, we propose to use document similarity methods to create noisy parallel datasets of code, thus enabling supervised techniques to be applied for automated code translation without having to rely on the availability or expensive curation of parallel code datasets. We explore the noise tolerance of models trained on such automatically-created datasets and show that these models perform comparably to models trained on ground truth for reasonable levels of noise. Finally, we exhibit the practical utility of the proposed method by creating parallel datasets for languages beyond the ones explored in prior work, thus expanding the set of programming languages for automated code translation. | Reject | Motivated by addressing the problem of lacking parallel training data for supervised code translation, this paper proposed to construct noisy parallel source code datasets using a document similarity based approach, and empirically evaluated its effectiveness for code translation tasks.
The paper is in general well-written and easy to follow, and the method is simple and the empirical results look positive. A major concern raised by reviewers is that while the proposed method is simple and may be easy to use, the overall technical novelty/contribution is limited; e.g., the paper generally lacks a more thorough discussion of how to deal with the critical noise issue in a more robust or sophisticated way. In addition, there were also other concerns about experimental issues, such as datasets, metrics, ablation analysis, usability, etc.
Overall, the paper presents some preliminary positive results for an interesting research problem, but the overall technical novelty and contributions are incremental and the paper is not strong enough for the acceptance by this conference. Nonetheless, this work could be potentially valuable for the niche area of code translation research, and authors are encouraged to continue to improve this research with more thorough investigation for a future venue. | train | [
"L_WhFbYLJq6",
"DhdcRZcmgsa",
"Gos8Asx0Kh",
"EoT14rIN1WK",
"ywkzB0UqchM",
"bXjoHuM6Qq1",
"kJrFH4foW1l",
"9DU0yZ8E2uy",
"DJ2IQx6jgs",
"9T7r7mqqEB"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We sincerely thank the reviewer for taking the time out to review our paper. Here, we aim to address the concerns that you've raised in your review:\n\n1. `In its current form, the problem statement and approach seem very trivial`\n 1. We would like to very strongly argue that the problem statement is not triv... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
4,
4
] | [
"9T7r7mqqEB",
"kJrFH4foW1l",
"kJrFH4foW1l",
"9DU0yZ8E2uy",
"DJ2IQx6jgs",
"DJ2IQx6jgs",
"iclr_2022_CO0ZuH5vaMu",
"iclr_2022_CO0ZuH5vaMu",
"iclr_2022_CO0ZuH5vaMu",
"iclr_2022_CO0ZuH5vaMu"
] |
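The general recipe in the record above (treat functions in two languages as documents, embed them, and pair each source function with its most similar target) can be illustrated with off-the-shelf tools. The snippet below is a hedged sketch using TF-IDF and cosine similarity; the paper may use different document-similarity methods, and the toy functions are made up.

```python
# A sketch of building a noisy parallel dataset by document similarity:
# embed code snippets with TF-IDF and greedily match across languages.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

python_fns = ["def add(a, b): return a + b",
              "def greet(name): print(name)"]
java_fns = ["int add(int a, int b) { return a + b; }",
            "void greet(String name) { System.out.println(name); }"]

vec = TfidfVectorizer(token_pattern=r"\w+")
mat = vec.fit_transform(python_fns + java_fns)
sim = cosine_similarity(mat[: len(python_fns)], mat[len(python_fns):])

for i, fn in enumerate(python_fns):
    j = sim[i].argmax()                     # noisy "translation" pair
    print(f"{fn!r} -> {java_fns[j]!r} (sim={sim[i, j]:.2f})")
```

Pairs produced this way are noisy by construction, which is why the paper's study of the noise tolerance of models trained on such data matters.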
iclr_2022_5_zwnS5oJDp | Bayesian Learning with Information Gain Provably Bounds Risk for a Robust Adversarial Defense | In this paper, we present a novel method to learn a Bayesian neural network robust against adversarial attacks. Previous algorithms have shown that an adversarially trained Bayesian Neural Network (BNN) provides improved robustness against attacks. However, the learning approach for approximating the multi-modal Bayesian posterior leads to mode collapse, with consequential sub-par robustness and underperformance of an adversarially trained BNN. Instead, we propose approximating the multi-modal posterior of a BNN to prevent mode collapse and encourage diversity over learned posterior distributions of models to develop a novel adversarial training method for BNNs. Importantly, we conceptualize and formulate information gain (IG) in the adversarial Bayesian learning context and prove that training a BNN with IG bounds the difference between the conventional empirical risk and the risk obtained from adversarial training---our intuition is that the information gain from benign and adversarial examples should be the same for a robust BNN. Extensive experimental results demonstrate that our proposed algorithm achieves state-of-the-art performance under strong adversarial attacks. | Reject | In this paper, the authors leverage information gain in conjunction with Bayesian Neural Networks in order to improve the robustness of Bayesian Neural Networks. However, as pointed out by reviewers, there are several mistakes in their derivations and evaluations. Moreover, the authors failed to correctly refer to the existing work proposing similar methods. | test | [
"9mQJg1DbQp9",
"b8Pnpi5AXyY",
"o9Zh07hPqt",
"XcFTwmTvC61",
"FipHToU6lDC",
"vFqV7NKiagU",
"-vZKVqZQGXO",
"n7JgFPkc5kA",
"tuoBa3tmR8f",
"LeQSq9HYytw",
"wMFQE9_sp24",
"9lZGgyRwp7e"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your responses to me. I believe that this paper needs a major improvement. Therefore, I keep my score unchanged.",
" > __C5__. The authors are expected to make more comprehensive analysis, e.g., more experiments on ImageNet, more defense/attacks methods. \n\n__Response__: Thank you for your feedback.... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
5
] | [
"FipHToU6lDC",
"9lZGgyRwp7e",
"wMFQE9_sp24",
"9lZGgyRwp7e",
"tuoBa3tmR8f",
"LeQSq9HYytw",
"LeQSq9HYytw",
"LeQSq9HYytw",
"iclr_2022_5_zwnS5oJDp",
"iclr_2022_5_zwnS5oJDp",
"iclr_2022_5_zwnS5oJDp",
"iclr_2022_5_zwnS5oJDp"
] |
iclr_2022_1XdUvpaTNlM | BWCP: Probabilistic Learning-to-Prune Channels for ConvNets via Batch Whitening | This work presents a probabilistic channel pruning method to accelerate Convolutional Neural Networks (CNNs). Previous pruning methods often zero out unimportant channels in training in a deterministic manner, which reduces CNN's learning capacity and results in suboptimal performance. To address this problem, we develop a probability-based pruning algorithm, called batch whitening channel pruning (BWCP), which can stochastically discard unimportant channels by modeling the probability of a channel being activated. BWCP has several merits. (1) It simultaneously trains and prunes CNNs from scratch in a probabilistic way, exploring larger network space than deterministic methods. (2) BWCP is empowered by the proposed batch whitening tool, which is able to empirically and theoretically increase the activation probability of useful channels while reducing the probability of unimportant channels without adding any extra parameters and computational cost in inference. (3) Extensive experiments on CIFAR-10, CIFAR-100, and ImageNet with various network architectures show that BWCP outperforms its counterparts by achieving better accuracy given limited computational budgets. For example, ResNet50 pruned by BWCP has only 0.58% Top-1 accuracy drop on ImageNet, while reducing 42.9% FLOPs of the plain ResNet50. | Reject | The reviewers consider the authors' approach to pruning of convolutional networks reasonable; but neither sufficiently novel nor sufficiently well explored for inclusion in the conference. In particular, the reviewers would like to see a more explicit discussion of the effect on training time of the authors' method, and more discussion and comparison against previous probabilistic pruning methods. | train | [
"lyXekC0Qaf",
"Cn-W9JBlxDm",
"ukUyGzLD36",
"VBT4AKs4oS2"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents a probabilistic channel pruning method (BWCP) for accelerating CNNs. The key newly proposed technique is “batch whitening”. They evaluate their method on CIFAR-10/100 and ImageNet compared to other recent filter pruning methods. Strengths:\n\n1.\tThey propose a probabilistic pruning method, qui... | [
3,
5,
5,
6
] | [
5,
4,
4,
5
] | [
"iclr_2022_1XdUvpaTNlM",
"iclr_2022_1XdUvpaTNlM",
"iclr_2022_1XdUvpaTNlM",
"iclr_2022_1XdUvpaTNlM"
] |
iclr_2022_MvtLspSX324 | Go with the Flow: the distribution of information processing in multi-path networks | The architectures of convolutional neural networks (CNNs) have a great impact on the predictive performance and efficiency of the model.
Yet, the development of these architectures is still driven by trial and error, making the design of novel models a costly endeavor.
To move towards a more guided process, the impact of design decisions on information processing must be understood better.
This work contributes by analyzing the processing of the information in neural architectures with parallel pathways.
Using logistic regression probes and similarity indices, we characterize the role of different pathways in the network during the inference process.
In detail, we find that similar sized pathways advance the solution quality at a similar pace, with high redundancy.
On the other hand, shorter pathways dominate longer ones by carrying (and improving) most of the main signal, while longer pathways do not advance the solution quality directly.
Additionally, we explore the situation in which networks start to ``skip'' layers and how the skipping of layers is expressed. | Reject | The paper analyzes the flow of information in convolutional neural networks with parallel pathways by using logistic regression probes and similarity indices.
Following the analysis, the authors concluded that:
- pathways of similar size have similar contributions to learning and have high redundancy
- shorter pathways directly improve solution quality to a greater extent than longer pathways
- pathways of different lengths also lead to greater variety among features in the 'downstream' layers of the network
The novelty in this type of analysis and its thoroughness was appreciated by the reviewers. The insight about the benefits of pathways of different length is also valuable; although there is a sense in the community that long pathways without skip connections bring diminishing returns, as pointed by reviewer ac9X, there is still some benefit in quantifying it through such an analysis and establishing the correct mix of long/short pathways. [note: This paper is not there yet, but has the potential.]
On the other hand, the experiments were performed on a single network, a network selected such that this instrumentation is possible, so there is the issue of whether these conclusions generalize, as pointed out by reviewer Hchc.
There were other comments, in terms of structure, errors and typos, as pointed out by reviewer PRea.
The authors have not responded to the comments, nor updated their manuscript.
In its current form, the paper is not ready for acceptance. | train | [
"vbhwQ0aIJ7i",
"LI37povzGv",
"Wy4uTzCMWrc"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This is an experimental paper that seeks out to investigate information processing in multi-path networks i.e. networks such as ResNet, EfficientNet, Inception-style. The goal was to investigate how different pathways process information in such networks in order to better understand learned representations to inf... | [
3,
3,
5
] | [
3,
5,
3
] | [
"iclr_2022_MvtLspSX324",
"iclr_2022_MvtLspSX324",
"iclr_2022_MvtLspSX324"
] |
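The logistic regression probes used in the analysis above have a standard form: freeze the network, collect activations at a layer of interest, and fit a linear classifier on them to measure how "solved" the task already is at that depth. The sketch below uses synthetic activations as a stand-in for real ones.

```python
# A sketch of logistic regression probing on frozen intermediate activations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=1000)
# Stand-in for activations collected at one layer of one pathway; the weak
# label-dependent shift mimics partially class-informative features.
acts = rng.normal(size=(1000, 64)) + 0.1 * labels[:, None]

probe = LogisticRegression(max_iter=1000).fit(acts[:800], labels[:800])
print("probe accuracy at this layer:", probe.score(acts[800:], labels[800:]))
```

Tracking this probe accuracy layer by layer (and pathway by pathway) is what lets the paper compare how quickly different pathways advance the solution.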
iclr_2022_DtfrnB1fiX | Squeezing SGD Parallelization Performance in Distributed Training Using Delayed Averaging | State-of-the-art deep learning algorithms rely on distributed training to tackle the increasing model size and training data. Mini-batch Stochastic Gradient Descent (SGD) requires workers to halt forward/backward propagation and wait for gradients to be synchronized among all workers before the next batch of tasks. The synchronous execution model exposes the overhead of gradient communication among a large number of workers in a distributed training system.
To this end, we propose a new SGD algorithm with delayed averaging, namely DaSGD, which can fully parallelize SGD and forward/backward propagation to hide 100\% of gradient communication. By adjusting the gradient update scheme, this algorithm uses hardware resources more efficiently and reduces the reliance on high-throughput interconnects. The theoretical analysis and experimental results conducted in this paper both show that its convergence rate of $O(1/\sqrt{K})$ stays the same as that of mini-batch SGD. An analytical model shows that it enables linear performance scalability with the cluster size. | Reject | This paper proposes a variant of stochastic gradient descent that parallelizes the algorithm for distributed training via delayed gradient averaging. While the algorithm (DaSGD) proposed is sensible and seems to work, it also seems to miss a lot of related work. As pointed out by one of the reviewers, the class of asynchronous decentralized methods already seems to cover the space of DaSGD, and it's not clear how DaSGD differs from the existing methods in this space. As a result of this lack of comparison to related work, the reviewers recommended that the paper not be accepted at this time, and this evaluation was not challenged by an author response. I agree with this consensus. | train | [
"RzPuPGvlgQ3",
"fWjuw1AA4PH",
"M-yTqXFxoPc",
"D72mYagpLlk"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposed DaSGD, an algorithm for large-scale large-batch training of deep neural networks. The algorithm combines Local SGD with delayed averaging steps to hide the communication overhead. However, workers still synchronize their forward/backward passes in each iteration. A convergence rate of O(1/sqrt(... | [
3,
3,
3,
3
] | [
5,
3,
3,
4
] | [
"iclr_2022_DtfrnB1fiX",
"iclr_2022_DtfrnB1fiX",
"iclr_2022_DtfrnB1fiX",
"iclr_2022_DtfrnB1fiX"
] |
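The delayed-averaging idea in the record above can be sketched schematically: a worker steps on its local gradient immediately, and once the globally averaged gradient from d steps ago arrives, it corrects the earlier update by the difference between the average and its own contribution. This is one plausible reading of the scheme, simulated without real communication; it is not the paper's code.

```python
# A schematic sketch of SGD with delayed averaging from one worker's view.
import numpy as np

def dasgd(grads_per_worker: np.ndarray, lr: float = 0.1, delay: int = 2) -> np.ndarray:
    """grads_per_worker: array of shape (steps, workers, dim)."""
    steps, workers, dim = grads_per_worker.shape
    w = np.zeros(dim)
    for t in range(steps):
        local = grads_per_worker[t, 0]        # this worker's own gradient
        w -= lr * local                        # step immediately, no waiting
        if t >= delay:                         # delayed all-reduce result arrives
            avg = grads_per_worker[t - delay].mean(axis=0)
            # Correct step t-delay so its net effect becomes -lr * avg.
            w -= lr * (avg - grads_per_worker[t - delay, 0])
    return w

g = np.random.default_rng(0).normal(size=(10, 4, 3))
print(dasgd(g))
```

Because the averaging happens in the background, gradient communication overlaps entirely with the forward/backward passes of later batches.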
iclr_2022_QkfMWTl520U | When do Convolutional Neural Networks Stop Learning? | Convolutional Neural Networks (CNNs) are among the most essential architectures and have shown impressive performance in computer vision tasks such as image classification, detection, and segmentation. In the training phase of a CNN, an arbitrary number of epochs is used to train the network. In a single epoch, the entire training dataset---divided into batches---is fed to the network. However, the optimal number of epochs required to train a neural network is not well established. In practice, validation data is used to identify the generalization gap. To avoid overfitting, it is recommended to stop training when the generalization gap increases. However, this is a trial-and-error-based approach. This raises a critical question: is it possible to estimate when neural networks stop learning based only on the training data? In this research work, we introduce the stability property of data in layers and, based on this property, predict the near-optimal epoch number of a CNN. We do not use any validation data to predict the near-optimal epoch number. We test our hypothesis on six different CNN models and on three different datasets (CIFAR-10, CIFAR-100, SVHN). We save, on average, 58.49\% of the computational time needed to train a CNN model. Our code is available at https://github.com/PaperUnderReviewDeepLearning/Optimization. | Reject | This work presents a simple method for early stopping that is based on layer statistics. The reviewers have all commented on the work's relative lack of novelty, poor writing, and insufficiently general empirical evidence for the method working. There are very few baselines being compared and little in terms of ablation studies. All the reviewers have provided extensive constructive comments about how this work can be improved, and while there was no rebuttal or discussion, I feel that there is sufficient feedback in the process for the authors to improve this work further.
In conclusion: while the idea of using stability of layer statistics has merit, at this point this work is not ready to be published at ICLR. | train | [
"85KLrfBVqcx",
"YSLwiuZDtR",
"GOzn6MIRA6n",
"p5j6cuqg5dW"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a data stability measure for early stopping in training various CNN architectures (RestNet18, VGG16), and a shallow CNN. Experiments demonstrate that comparing the means of layer weights across epochs to a certain number of decimal places allows for a significant (~30%-~70%) computational time s... | [
3,
1,
5,
5
] | [
4,
5,
5,
4
] | [
"iclr_2022_QkfMWTl520U",
"iclr_2022_QkfMWTl520U",
"iclr_2022_QkfMWTl520U",
"iclr_2022_QkfMWTl520U"
] |
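The review excerpt above already hints at how simple the stopping rule is: compare the rounded means of layer weights between consecutive epochs. Here is a minimal sketch of that rule; the rounding precision and the all-layers criterion are assumptions drawn from the review's description, not verified details of the paper.

```python
# A sketch of stability-based early stopping: declare training done when the
# mean of every layer's weights stops changing at a chosen decimal precision.
import numpy as np

def all_layers_stable(prev_layers, curr_layers, decimals: int = 3) -> bool:
    return all(
        round(float(np.mean(p)), decimals) == round(float(np.mean(c)), decimals)
        for p, c in zip(prev_layers, curr_layers)
    )

rng = np.random.default_rng(0)
layers_epoch_1 = [rng.normal(size=(32, 32)), rng.normal(size=(32, 10))]
layers_epoch_2 = [w + 1e-6 * rng.normal(size=w.shape) for w in layers_epoch_1]
print(all_layers_stable(layers_epoch_1, layers_epoch_2))  # True: means agree
```

Crucially, this check only touches the weights themselves, so no validation split is consumed just to decide when to stop.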
iclr_2022_FNSR8Okx8a | Beyond Prioritized Replay: Sampling States in Model-Based Reinforcement Learning via Simulated Priorities | Prioritized Experience Replay (ER) has been empirically shown to improve sample efficiency across many domains and has attracted great attention; however, there is little theoretical understanding of why such prioritized sampling helps and of its limitations. In this work, we take a deep look at prioritized ER. In a supervised learning setting, we show the equivalence between the error-based prioritized sampling method for mean squared error and uniform sampling for cubic power loss. We then provide theoretical insight into why it improves the convergence rate over uniform sampling during early learning. Based on this insight, we further point out two limitations of the prioritized ER method: 1) outdated priorities and 2) insufficient coverage of the sample space. To mitigate these limitations, we propose our model-based stochastic gradient Langevin dynamics sampling method. We show that our method does provide states distributed close to an ideal prioritized sampling distribution estimated by the brute-force method, which does not suffer from the two limitations. We conduct experiments on both discrete and continuous control problems to show our approach's efficacy and examine the practical implications of our method in an autonomous driving application. | Reject | This work provides a theoretical analysis of Prioritized Experience Replay (PER) in a supervised learning setting, points out limitations of PER, and proposes a model-based approach to address these shortcomings for continuous control problems.
Strengths:
-----------
The overall problem was motivated well
Reviewers agree that this proposed algorithm has promise
Overall the paper is well written
a diverse set of experiments is provided
Weaknesses:
---------------
reviewers point out some clarity issues
The theoretical analysis is performed in a supervised learning setting, and it is unclear how the resulting analysis transfers to the RL setting
There are some concerns (theoretical/technical) with respect to the proposed algorithm.
The analysis of the experiments is lacking in depth. For instance, there is no analysis of why the proposed algorithm outperforms very related baselines. Furthermore, it is unclear why, for the autonomous driving experiment, the algorithms achieve the same return but the proposed method leads to fewer crashes.
Rebuttal:
----------
The authors have addressed many of the clarity issues. However, I agree with the reviewers theoretical concerns and deeper analysis requests were not addressed in a significant manner.
Summary:
------------
Overall this manuscript investigates an important problem and provides a promising algorithm. However, some theoretical/technical concerns remain and a deeper analysis of results is required. Hence my recommendation is that in its current form the manuscript is not quite ready yet for publication. | train | [
"Q8VLkyOC_kT",
"-q2NbZVZJrC",
"_0gY9oQfe5W",
"I5xIxz5Lz5",
"c4Vu1qAlPSz",
"XlSuijfUlb",
"4J-U-FnBz84",
"Mit48twF5ZY",
"J0MIsCQZDIX",
"KsdG_q9JjXS",
"IOGVDK41MV",
"gyCJ9klaX1",
"v2E5P2r4x9",
"AnmjgsMoq6T"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I have read the authors' responses.\n\nI think there may be a lack of hindsight both in the paper and the authors' response (no offense meant, I strongly encourage strengthening the analysis in future work). I also feel the authors dodge a number of important questions. A few examples below. \nBoth DQN or DDPG a... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
5,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
5
] | [
"4J-U-FnBz84",
"_0gY9oQfe5W",
"Mit48twF5ZY",
"c4Vu1qAlPSz",
"J0MIsCQZDIX",
"iclr_2022_FNSR8Okx8a",
"AnmjgsMoq6T",
"v2E5P2r4x9",
"gyCJ9klaX1",
"IOGVDK41MV",
"iclr_2022_FNSR8Okx8a",
"iclr_2022_FNSR8Okx8a",
"iclr_2022_FNSR8Okx8a",
"iclr_2022_FNSR8Okx8a"
] |
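The supervised-learning equivalence claimed in the record above can be checked numerically: sampling points with probability proportional to |error| under squared loss gives, in expectation, the same gradient (up to a constant) as uniform sampling under a cubic power loss. The snippet below is an illustrative check of that identity on per-sample errors, not the paper's code.

```python
# Numerical check: error-prioritized sampling of the squared-loss gradient
# matches the uniform-sampling gradient of a cubic power loss up to a constant.
import numpy as np

rng = np.random.default_rng(0)
delta = rng.normal(size=5)                           # per-sample errors f_i - y_i

p = np.abs(delta) / np.abs(delta).sum()              # error-based priorities
grad_prioritized = (p * 2 * delta).sum()             # E_p[ d/df (f - y)^2 ]
grad_cubic_uniform = np.mean(np.abs(delta) * delta)  # E_unif[ d/df |f - y|^3 / 3 ]

# Rescaling the uniform cubic gradient by 2 / mean(|delta|) recovers the
# prioritized squared-loss gradient exactly.
print(grad_prioritized, grad_cubic_uniform * 2 / np.abs(delta).mean())
```

The two printed values coincide, which is the algebraic core of the paper's early-learning argument for why prioritization accelerates convergence.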
iclr_2022_7sz69eztw9 | Context-invariant, multi-variate time series representations | Modern time series corpora, in particular those coming from sensor-based data, exhibit characteristics that have so far not been adequately addressed in the literature on representation learning for time series. In particular, such corpora often allow one to distinguish between \emph{exogenous} signals that describe a context which influences a given appliance and \emph{endogenous} signals that describe the internal state of the appliance. We propose a temporal convolution network based embedding that improves on the state-of-the-art by adapting recent advances in contrastive learning to the time series domain and by adopting a multi-resolution approach. Employing techniques borrowed from domain-adversarial learning, we achieve an invariance of the embeddings with respect to the context provided by the exogenous signal. To show the effectiveness of our approach, we contribute new data sets to the research community and use both new as well as existing data sets to empirically verify that we can separate normal from abnormal internal appliance behaviour independent of the external signals in data sets from IoT and DevOps. | Reject | This is a time series representation learning paper.
The reviewers appreciated aspects of the paper, but all agreed that, primarily, the experiments are lacking and, to a lesser degree, that the presentation is unclear and needs further proofreading.
So definitely this work has merits. It is also much appreciated that the authors throughout the discussion have been engaged in adding results and further clarification. This can be used for an updated version for the next conference. | train | [
"2tBrlT77Umo",
"AqEkLa-tmXc",
"IZBwNCJTNee",
"Tl7pqxhNWAK",
"azTm45BWIIx",
"2Hi7NMGdg7t",
"YN1ulhrJwy",
"3Ga67ano9be",
"8lQF6Eso3Mz",
"SmSPRQQDijk",
"QDohAB8z4QU",
"qkgOn0xibjk"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The authors propose a lot of improvement on their paper. However, it corresponds to so many changes that it modifies the original contribution: I still think that this work is promising but the new version of this article has to be submitted in another conference.",
" As promised, we are providing more results ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"AqEkLa-tmXc",
"2Hi7NMGdg7t",
"8lQF6Eso3Mz",
"QDohAB8z4QU",
"QDohAB8z4QU",
"QDohAB8z4QU",
"qkgOn0xibjk",
"qkgOn0xibjk",
"SmSPRQQDijk",
"iclr_2022_7sz69eztw9",
"iclr_2022_7sz69eztw9",
"iclr_2022_7sz69eztw9"
] |
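The domain-adversarial technique the record above borrows is typically implemented with a gradient reversal layer: a context classifier is trained on the embedding, but the encoder receives its gradient with flipped sign, pushing the embedding toward context invariance. Below is a generic sketch of that trick; it is standard machinery, not the authors' code, and the linear context head is a placeholder.

```python
# A sketch of the gradient reversal layer used for domain-adversarial invariance.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam: float):
        ctx.lam = lam
        return x.view_as(x)            # identity in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None   # flipped gradient to the encoder

embedding = torch.randn(4, 16, requires_grad=True)   # stand-in encoder output
context_head = torch.nn.Linear(16, 3)                # predicts exogenous context
logits = context_head(GradReverse.apply(embedding, 1.0))
logits.sum().backward()                              # encoder sees reversed grads
print(embedding.grad[0, :3])
```

Training the context head to succeed while the (reversed) encoder gradient works against it is what strips exogenous-context information from the representation.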
iclr_2022_-ybZRQktdgc | LRN: Limitless Routing Networks for Effective Multi-task Learning | Multi-task learning (MTL) is a field concerned with learning multiple tasks simultaneously, typically through shared model parameters. The shared representation enables generalized parameters that are task invariant and assists in learning tasks with sparse data. However, the presence of unforeseen task interference can cause one task to improve to the detriment of another. A recent paradigm constructed to tackle these types of problems is the routing network, which builds neural network architectures from a set of modules conditioned on the input instance, task, and previous output of other modules. This approach has many constraints, so we propose the Limitless Routing Network (LRN), which removes these constraints through the usage of a transformer-based router and a reevaluation of the state and action space. We also provide a simple solution to the module collapse problem and demonstrate superior accuracy on several MTL benchmarks compared to the original routing network. | Reject | This paper proposed a transformer-based routing network which removes the constraints in the original routing network, such as the depth of a network. Multi-task learning (MTL) based on routing has been an interesting topic in the deep learning research community. Our reviewers have serious concerns about the experiments. The presented empirical results do not seem to be able to sufficiently support the claims in this paper. A comparison with SOTA MTL methods is needed to make the proposed method convincing. | train | [
"3kYgbOPtyJ_",
"t1bCeXBqcni",
"JCFMRzbN8i",
"g5rBKyVAeN6"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposed a transformer based routing module to improve the routing network for multitask learning. The main contributions of this paper can be summarized as follows.\n1. The paper introduced transformer based routing to solve the input length limitation in the previous routing network.\n2. The transform... | [
3,
3,
3,
3
] | [
4,
4,
5,
4
] | [
"iclr_2022_-ybZRQktdgc",
"iclr_2022_-ybZRQktdgc",
"iclr_2022_-ybZRQktdgc",
"iclr_2022_-ybZRQktdgc"
] |
iclr_2022_EFSctTwY4xn | Towards Generalizable Personalized Federated Learning with Adaptive Local Adaptation | Personalized federated learning aims to find a shared global model that can be adapted to meet personal needs on each individual device. Starting from such a shared initial model, devices should be able to easily adapt to their local dataset to obtain personalized models. However, we find that existing works cannot generalize well in non-IID scenarios with different heterogeneity degrees of the underlying data distribution among devices. Thus, it is challenging for these methods to train a suitable global model that effectively induces high-quality personalized models without changing learning objectives. In this paper, we point out that this issue can be addressed by balancing the information flow from the initial model and the training dataset to the local adaptation. We then prove a theorem referred to as the {\em adaptive trade-off theorem}, showing that adaptive local adaptation is equivalent to optimizing such information flow based on information theory. With these theoretical insights, we propose a new framework called {\em adaptive federated meta-learning} (AFML), designed to achieve generalizable personalized federated learning that maintains solid performance under non-IID data scenarios with different degrees of diversity among devices. We test AFML on an extensive set of these non-IID data scenarios, with both the CIFAR-100 and Shakespeare datasets. Experimental results demonstrate that AFML can maintain the highest personalized accuracy compared to alternative leading frameworks, yet with a minimal number of communication rounds and local updates needed. | Reject | This paper approaches personalized federated learning from the perspective of meta-learning and uses the mutual information framework developed in a recent work to regularize local model training. All the reviewers consider the writing very poor and hard to understand, and the contributions not sufficient for acceptance. | train | [
"eMXRg9YQoYc",
"sWaYOSptltI",
"MK6UxYLx8pL"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper poses an \"optimality vs adaptability\" tradeoff for federated learning, where optimality corresponds to strong performance on a global dataset and adaptability corresponds to a model's ability to adapt to perform well on local datasets. To deal with this tradeoff, this paper seems to draw motivation fr... | [
3,
6,
5
] | [
4,
4,
3
] | [
"iclr_2022_EFSctTwY4xn",
"iclr_2022_EFSctTwY4xn",
"iclr_2022_EFSctTwY4xn"
] |
iclr_2022_2I1wy0y6xo | Stability analysis of SGD through the normalized loss function | We prove new generalization bounds for stochastic gradient descent for both the convex and non-convex cases. Our analysis is based on the stability framework. We analyze stability with respect to the normalized version of the loss function used for training. This leads to investigating a form of angle-wise stability instead of Euclidean stability in weights. For neural networks, the measure of distance we consider is invariant to rescaling the weights of each layer. Furthermore, we exploit the notion of on-average stability in order to obtain a data-dependent quantity in the bound. This data-dependent quantity is seen to be more favorable when training with larger learning rates in our numerical experiments. This might help to shed some light on why larger learning rates can lead to better generalization in some practical scenarios. | Reject | The paper focuses on providing generalization bounds for SGD for functions that are invariant under scaling. The paper's analysis is based on the stability framework but instead focuses on a metric that is based on the angular distance as compared to the Euclidean distance.
Overall the reviewers found the paper to be interesting and the results to be useful. However, the reviewers found the paper to be significantly lacking in terms of its presentation. In particular, a clear exposition of the central object of the paper, i.e., the normalized loss function, was missing, as well as clear comparisons between the presented results and existing results. I recommend that the authors motivate their results better and contrast their presented results with existing results to fully highlight the impact of their presented result. Hopefully the suggestions made by the reviewers in terms of presentation will be helpful to the authors towards improving the paper. | train | [
"fh0GR6ZqO_f",
"QIrHdkbq9h",
"v5V7hL4mMYt",
"ZuUcOmTSDzQ",
"Nv4ziyZ0n7i",
"XKbs_v6u1yb",
"d7jBPK06yDX",
"NFAPDnyq5Dm"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We are in the process of doing new experiments. In Nagarajan and Kolter, it is argued that the bounds of Neyshabur18 and Bartlett17 do not behave properly when increasing the training set size. Indeed, the bounds grow with the sample size. This comes from the fact that the norm of the parameters of the solution a... | [
-1,
-1,
-1,
-1,
-1,
8,
3,
5
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"QIrHdkbq9h",
"v5V7hL4mMYt",
"NFAPDnyq5Dm",
"XKbs_v6u1yb",
"d7jBPK06yDX",
"iclr_2022_2I1wy0y6xo",
"iclr_2022_2I1wy0y6xo",
"iclr_2022_2I1wy0y6xo"
] |
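The layer-rescaling-invariant distance that the record above builds its angle-wise stability on can be illustrated directly: compare each layer's weight vector by angle, so multiplying a layer by any positive constant leaves the distance unchanged. The per-layer sum below is one natural choice and an assumption, not necessarily the paper's exact metric.

```python
# A sketch of an angle-wise, per-layer distance that is invariant to rescaling
# the weights of each layer by a positive constant.
import numpy as np

def angular_distance(layers_a, layers_b) -> float:
    total = 0.0
    for a, b in zip(layers_a, layers_b):
        a, b = a.ravel(), b.ravel()
        cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        total += np.arccos(np.clip(cos, -1.0, 1.0))
    return total

rng = np.random.default_rng(0)
net1 = [rng.normal(size=(8, 8)), rng.normal(size=(8, 2))]
net2 = [3.0 * net1[0], 0.5 * net1[1]]        # per-layer rescalings of net1
print(angular_distance(net1, net2))          # ~0.0: rescaling is invisible
```

Under such a metric, two networks that compute the same function after layer-wise rescaling are treated as identical, which is exactly the invariance the analysis exploits.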
iclr_2022_dut7suZoRqv | SparRL: Graph Sparsification via Deep Reinforcement Learning | Graph sparsification concerns data reduction where an edge-reduced graph of a similar structure is preferred. Existing methods are mostly sampling-based, which introduce high computation complexity in general and lack flexibility for different reduction objectives. We present SparRL, the first general and effective reinforcement learning-based framework for graph sparsification. SparRL can easily adapt to different reduction goals and promises graph-size-independent complexity. Extensive experiments show that SparRL outperforms all prevailing sparsification methods in producing high-quality sparsified graphs with respect to a variety of objectives. As graph representations are very versatile, SparRL carries the potential for a broad impact. | Reject | The paper presents an RL approach to the problem of graph sparsification. The reviewers expressed concerns about novelty, presentation, the correctness of some claims, and experimental validation. While the authors provided a rebuttal and addressed some questions (leading to an increase of the score), some reviewers thought the authors focused on justifying why suggested experiments were not done rather than doing them. We believe the paper in its current state is below the bar and recommend rejection. | train | [
"8bMAHMchoBX",
"cUrdpWbqi_",
"oYa6kbp1gQA",
"tswD5yd2y9k",
"vmOrnPzOusw",
"YHXF3tm_uz1",
"8Uncy1nXROj",
"Ujh58Picd1W",
"9-CzdP9ArLK",
"EDrbJgfZnU1",
"70IVv13ddVt",
"s7Naud7MYbY",
"lfG6VudXC7J",
"nhh1DfqGIK0"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I have read the responses carefully and I would like to retain my score. The authors have responded to all of my concerns, however, most of them are expressed as justifications of not doing what was requested instead of resolving the concern or performing the experiments. I believe that most of the comparisons an... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"8Uncy1nXROj",
"iclr_2022_dut7suZoRqv",
"EDrbJgfZnU1",
"9-CzdP9ArLK",
"YHXF3tm_uz1",
"nhh1DfqGIK0",
"Ujh58Picd1W",
"s7Naud7MYbY",
"nhh1DfqGIK0",
"cUrdpWbqi_",
"lfG6VudXC7J",
"iclr_2022_dut7suZoRqv",
"iclr_2022_dut7suZoRqv",
"iclr_2022_dut7suZoRqv"
] |
iclr_2022_hqkN6lE1fFQ | Kernel Deformed Exponential Families for Sparse Continuous Attention | Attention mechanisms take an expectation of a data representation with respect to probability weights. This creates summary statistics that focus on important features. Recently, Martins et al. (2020, 2021) proposed continuous attention mechanisms, focusing on unimodal attention densities from the exponential and deformed exponential families: the latter has sparse support. Farinhas et al. (2021) extended this to use Gaussian mixture attention densities, which are a flexible class with dense support. In this paper, we extend this to two general flexible classes: kernel exponential families and our new sparse counterpart, kernel deformed exponential families. Theoretically, we show new existence results for both kernel exponential and deformed exponential families, and that the deformed case has similar approximation capabilities to kernel exponential families. Experiments show that kernel deformed exponential families can attend to non-overlapping intervals of time. | Reject | This paper extends the recent work on continuous-domain sparse attention mechanisms to use kernel parametrizations, and thus allows more flexible multi-modal shapes. Continuous attention extends the standard attention mechanisms to continuous-valued key/value/query functions, involving integrals over probability measures instead of softmax-weighted sums.
Kernel methods fit very well in the framework and provide great expressivity. Reviewers agree it is an interesting and well-motivated idea. The contribution of incorporating kernel families in continuous attention seems substantially novel in comparison to the previous work on the topic.
The main concern, however, is that the paper focuses too much on the theory and not enough on the modeling benefits enabled by flexible kernels. I would stress that this isn't purely a question of *improving performance* (although quantitative results would help!) but perhaps more one of qualitative results, demonstrating e.g. multimodality, selectivity, interpretability.
I very much look forward to a revised version, which I expect would be a strong paper. | train | [
"DmGYclO7x_",
"soFRhxau__j",
"0GWUUsM4qWg",
"Y-wwY5GZ-DF"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewers,\n\nThank you for the helpful comments. It appears that reviewers feel that it is not quite ready for publication. We will work to incorporate the comments to submit to the next conference. Particularly we will characterize the rate of convergence of numerical integration in our setting both theore... | [
-1,
6,
5,
3
] | [
-1,
2,
3,
2
] | [
"iclr_2022_hqkN6lE1fFQ",
"iclr_2022_hqkN6lE1fFQ",
"iclr_2022_hqkN6lE1fFQ",
"iclr_2022_hqkN6lE1fFQ"
] |
iclr_2022_AOn-gHymcx | A neural network framework for learning Green's function | Green's function plays a significant role in both theoretical analysis and numerical computing of partial differential equations (PDEs). However, in most cases, Green's function is difficult to compute. The difficulties arise in the following three aspects. Firstly, compared with the original PDE, the dimension of Green's function is doubled, making it impossible to handle with traditional mesh-based methods. Secondly, Green's function usually contains singularities which make it difficult to obtain a good approximation. Lastly, the computational domain may be very complex or even unbounded. To overcome these problems, we leverage the fundamental solution, the boundary integral method and neural networks to develop a new method for computing Green's function with high accuracy in this paper. We focus on Green's function of Poisson and Helmholtz equations in bounded domains, unbounded domains and domains with interfaces. Extensive experiments illustrate the efficiency and the accuracy of our method for computing Green's function. In addition, we also use the Green's function calculated by our method to solve a class of PDEs, and obtain high-precision solutions, which shows the good generalization ability of our method in solving PDEs. | Reject | All reviewers voted to reject this paper. The main points of criticism shared by the reviewers are missing novelty and missing/unclear significance of the contribution. There was no rebuttal, so this is a clear reject. | train | [
"lSzCEr910qw",
"H8If-7HFrft",
"gP-zBc1sADg",
"DYPOHcrkza"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper explores a deep learning scheme to solve partial differential equations on regular domains (mostly 2D). This paper focuses on the method of Greens function for solving a class of linear PDE’s. More specifically: the Greens function corresponding to a PDE is a function that depends on (1.) the differenti... | [
3,
1,
3,
3
] | [
3,
5,
5,
4
] | [
"iclr_2022_AOn-gHymcx",
"iclr_2022_AOn-gHymcx",
"iclr_2022_AOn-gHymcx",
"iclr_2022_AOn-gHymcx"
] |
iclr_2022_0oSM3TC9Z5a | Learning to Persuade | In the standard Bayesian persuasion model, an informed sender looks to design a signaling scheme to partially reveal the information to an uninformed receiver, so as to influence the behavior of the receiver. This kind of strategic interaction abounds in the real world. However, the standard model relies crucially on some stringent assumptions that usually do not hold in reality. For example, the sender knows the receiver's utility function and the receiver's behavior is completely rational.
In this paper, we aim to relax these assumptions using techniques from the AI domain. We put forward a framework that contains both a receiver model and a sender model. We first train a receiver model through interactions between the sender and the receiver. The model is used to predict the receiver's behavior when the sender's scheme changes. Then we update the sender model to obtain an approximately optimal scheme using the receiver model. Experiments show that our framework has comparable performance to the optimal scheme. | Reject | The paper studies the Bayesian persuasion model in a more realistic setting where the sender does not know the receiver's utility but can interact with the receiver repeatedly to learn the utility. The paper proposes a learning-based framework to optimize the sender's strategy, then analyzes the theoretical properties of the proposed framework, and performs extensive experiments. The reviewers acknowledged that the paper investigates an important problem of relaxing the practical shortcomings of the Bayesian persuasion model. However, the reviewers pointed out several weaknesses in the paper, and there was a clear consensus that the work is not ready for publication. The reviewers have provided very detailed and constructive feedback to the authors. We hope that the authors can incorporate this feedback when preparing future revisions of the paper. | train | [
"ii8zX8Yf9cP",
"1zazcDz_22",
"vcpUB206j_",
"y0lsOZim-9i",
"yYkYwySO2hb",
"otd84Bjt_8H",
"y0HTMOld10a",
"H4bQ22wNC20"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper studies how to relax certain stringent assumptions made in a Bayesian persuasion framework. In particular, the paper looks to relax assumptions where the sender knows the receiver’s utility function and the receiver’s behavior is completely rational. The authors also claim that the proposed framework wor... | [
3,
-1,
-1,
-1,
-1,
3,
3,
6
] | [
5,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"iclr_2022_0oSM3TC9Z5a",
"H4bQ22wNC20",
"ii8zX8Yf9cP",
"y0HTMOld10a",
"otd84Bjt_8H",
"iclr_2022_0oSM3TC9Z5a",
"iclr_2022_0oSM3TC9Z5a",
"iclr_2022_0oSM3TC9Z5a"
] |
iclr_2022_Y8KfxdZl-rI | Weakly Supervised Label Learning Flows | Supervised learning usually requires a large amount of labelled data. However, attaining ground-truth labels is costly for many tasks. Alternatively, weakly supervised methods learn with only cheap weak signals that only approximately label some data. Many existing weakly supervised learning methods learn a deterministic function that estimates labels given the input data and weak signals. In this paper, we develop label learning flow (LLF), a general framework for weakly supervised learning problems. Our method is a generative model based on normalizing flows. The main idea of LLF is to optimize the conditional likelihoods of all possible labelings of the data within a constrained space defined by weak signals. We develop a training method for LLF that trains the conditional flow inversely and avoids estimating the labels. Once a model is trained, we can make predictions with a sampling algorithm. We apply LLF to three weakly supervised learning problems. Experiment results show that our method outperforms many state-of-the-art alternatives. | Reject | The paper proposes a new approach for weakly supervised learning, based on conditional normalizing flows. Reviewers generally found the paper to have an interesting, novel proposal with empirical promise. However, some concerns were raised: to name a few,
(1) _Clarity._ Several reviewers found portions of the technical content hard to follow, e.g., the description of constraints in Sec 4.
(2) _Scalability compared to data programming._ One reviewer was unsure of how the present approach compares in terms of inference time and/or accuracy to a two-stage data programming approach.
(3) _Infeasibility of sampling from Equation 2._ One reviewer suggested the paper discuss and compare to a simpler baseline, which is to perform rejection sampling from the constraint set.
(4) _Suitability of point cloud problem._ One reviewer was unsure of whether the point cloud problem, considered as an experimental setting in this paper, is reflective of weakly supervised learning.
(5) _Practicality of knowing weak labeler error rates._ The paper assumes knowledge of the weak labeler error rates in constructing constraints. Some reviewers raised concerns on the practical viability of this assumption.
For point (2), the relevant reviewer was not convinced following the discussion. The suggestion is to treat LLF as a label model, which serves as input to a non-MC predictor. The question then is what the predictive performance of this combined approach looks like, as opposed to the LLF's themselves.
For point (3), the response clarified that the number of constraints might make rejection sampling infeasible. This appears to be true, but it is suggested that the paper at a minimum discuss this, and ideally also clarify claims about the general-purpose need for the proposed approach (since in some cases one might be able to do rejection sampling).
For point (4), the discussion was somewhat inconclusive. It is suggested that the authors explicitly discuss some of the points brought up in the response.
For point (5), while the assumption is not wholly uncommon in the literature, it would be better for the authors to perform some sensitivity analysis against misspecification of the error rates.
Overall, the paper has some interesting ideas that are well worth exploring. The present execution appears to have some scope for improvement, with the reviews providing a range of suggestions of areas of the paper that could be made clearer or strengthened. The paper would be best served by incorporating these comments and undergoing a fresh review. | train | [
"4FRvOeSjlhs",
"B4TDZ2Hnvx",
"fzfQ6JYRTcH",
"Dv7Q2ux_mln",
"wb9QvceM6f4",
"Ed7giUouxta",
"yUNwz-D9rY",
"MdPOAUgo1p",
"LTu5rrrptX",
"EhA03n4sH7m",
"vbrshp4QB5l",
"GkVdDRFCohN",
"UipLQn_4eAL",
"RqtYyjRCgxW",
"1-5FB72tO4H",
"_WKOk_eg2j",
"lhmK128uje",
"DzUmWY4IesV",
"hJE_XDAg6Ka",
... | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_re... | [
" Thank you for your comments! We will add them to our paper.",
" Thank you for your insightful comments.\n\n1. About the unpaired point cloud completion. Based on the definition in [1]: ''Inexact supervision concerns the situation in which some supervision information is given, but not as exact as desired.''\n ... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
5,
5
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
3
] | [
"yUNwz-D9rY",
"wb9QvceM6f4",
"Ed7giUouxta",
"iclr_2022_Y8KfxdZl-rI",
"MdPOAUgo1p",
"GkVdDRFCohN",
"UipLQn_4eAL",
"EhA03n4sH7m",
"RqtYyjRCgxW",
"RqtYyjRCgxW",
"Dv7Q2ux_mln",
"UipLQn_4eAL",
"qV6ZBWVrDII",
"hJE_XDAg6Ka",
"DzUmWY4IesV",
"lhmK128uje",
"iclr_2022_Y8KfxdZl-rI",
"iclr_2022... |
iclr_2022_WYDzDksK5b | DiBB: Distributing Black-Box Optimization | We present a novel framework for Distributing Black-Box Optimization (DiBB). DiBB can encapsulate any Black Box Optimization (BBO) method, making it of particular interest for scaling and distributing modern Evolution Strategies (ES), such as CMA-ES and its variants, which maintain a sampling covariance matrix throughout the run. Due to their high algorithmic complexity, however, such methods alone are unsuitable for addressing high-dimensional problems, e.g. sophisticated Reinforcement Learning (RL) control. This limits the applicable methods to simpler ES, which trade sample efficiency for faster updates. DiBB overcomes this limitation by means of problem decomposition, leveraging expert knowledge in the problem structure such as a known topology for a neural network controller. This allows distributing the workload across an arbitrary number of nodes in a cluster, while maintaining the feasibility of second order (covariance) learning on high-dimensional problems. The computational complexity per node is bounded by the (arbitrary) size of blocks of variables, which is independent of the problem size. | Reject | The authors propose a novel framework for Distributing Black-Box Optimization (DiBB) which can encapsulate any Black Box Optimization (BBO) method. DiBB overcomes some of the limitations of existing methods by leveraging expert knowledge in the problem. The reviewers raised a variety of important technical concerns. The authors seem to agree that they need to substantially rewrite the paper. Therefore, I recommend rejection. | train | [
"ksHEthTC2E",
"dk0wT7cZJR4",
"Y1msP8BRWL",
"m0yWP67MiKz",
"xxklRpVhpRW"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would like to thank all Reviewers and Chairs for their effort and contribution. Four reviews constitute significant work for the community, which we truly appreciate. We decided to take heed of the comments: we will improve our work over time and resubmit in the near future. As this is OpenReview however, we w... | [
-1,
3,
6,
3,
3
] | [
-1,
4,
3,
3,
4
] | [
"iclr_2022_WYDzDksK5b",
"iclr_2022_WYDzDksK5b",
"iclr_2022_WYDzDksK5b",
"iclr_2022_WYDzDksK5b",
"iclr_2022_WYDzDksK5b"
] |
iclr_2022_u4C_qLuEpZ | Exploring General Intelligence of Program Analysis for Multiple Tasks | Artificial intelligence is gaining more attention for program analysis and semantic understanding. Nowadays, the prevalent program embedding techniques usually target a single task, for example binary similarity detection, program classification, program comment auto-completion, etc., due to ever-growing program complexity and scale. To this end, we explore a generic program embedding approach that aims at solving multiple program analysis tasks. We design models to extract features of a program, represent the program as an embedding, and use this embedding to solve various analysis tasks. Since different tasks not only require access to the features of the source code but are also highly dependent on its compilation process, traditional source code or AST-based embedding approaches are no longer applicable. Therefore, we propose a new program embedding approach that constructs a program representation based on the assembly code and simultaneously exploits the rich graph structure information present in the program. We tested our model on two tasks, program classification and binary similarity detection, and obtained accuracy of
80.35% and 45.16%, respectively. | Reject | The paper presents an approach to neural analysis of programs whose main feature is that it operates on assembly code and can therefore account for issues that depend on things like compiler settings. The other important claim of the paper is that by combining information from the control flow and data-flow graphs extracted from the assembly code, they are able to produce an embedding that can support a variety of tasks.
The meta-reviewer agrees with the reviewers that this paper is not suitable for acceptance at ICLR. The novelty in the approach is quite limited, and the gap between the extremely bold claims of the introduction of the paper and what is actually proven in the experiments is quite significant. The evaluation is not strong enough to be considered state of the art. | train | [
"YpNIcY6y6jF",
"3YRgq_PhbS0",
"X1aca934657",
"291416-tJA4"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents a graph neural network-based approach to solve two binary analysis tasks (program classification and binary similarity\ndetection). The key idea of the paper is to merge different forms of representation of binary code (compiler IR, assembly code, etc.). Strengths\n-------------\nThe paper pre... | [
3,
1,
3,
3
] | [
5,
4,
4,
4
] | [
"iclr_2022_u4C_qLuEpZ",
"iclr_2022_u4C_qLuEpZ",
"iclr_2022_u4C_qLuEpZ",
"iclr_2022_u4C_qLuEpZ"
] |
iclr_2022_2yITmG7YIFT | HD-cos Networks: Efficient Neural Architechtures for Secure Multi-Party Computation | Multi-party computation (MPC) is a branch of cryptography where multiple non-colluding parties execute a well designed protocol to securely compute a function. With the non-colluding party assumption, MPC has a cryptographic guarantee that the parties will not learn sensitive information from the computation process, making it an appealing framework for applications that involve privacy-sensitive user data.
In this paper, we study training and inference of neural networks under the MPC setup. This is challenging because the elementary operations of neural networks such as the ReLU activation function and matrix-vector multiplications are very expensive to compute due to the added multi-party communication overhead.
To address this, we propose the HD-cos network that uses 1) cosine as the activation function, and 2) the Hadamard-Diagonal transformation to replace unstructured linear transformations. We show that both approaches enjoy strong theoretical motivation and efficient computation under the MPC setup. We demonstrate on multiple public datasets that HD-cos matches the quality of the more expensive baselines. | Reject | There appears to be a fundamental error in the paper, w.r.t. the application of the proposed approach to finite fields. As a result, the paper cannot be accepted in its current form. | train | [
"eYB4Fs6kN6z",
"_GQ5uGOWrG2",
"lDhaETpfKdi",
"Dbm_cg_Pano",
"IhiKYZdpGru",
"CkA83Zf2kVs",
"bSFVtmBwlX",
"aVxaV9pH_O",
"PQgSM78Bf47",
"oa5WGIx8pO9",
"OdpVp0fCXNh",
"NNDg8h34IAU"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for further comment. To our knowledge, most MPC algorithms used in practice are 2PC. Could the reviewer please help send links to papers that discuss that MPC can be more practical? ",
" The assumption of 2PC is restrictive as in practical settings as they only allow collaboration between ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
1,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
5,
4
] | [
"_GQ5uGOWrG2",
"aVxaV9pH_O",
"bSFVtmBwlX",
"CkA83Zf2kVs",
"NNDg8h34IAU",
"OdpVp0fCXNh",
"oa5WGIx8pO9",
"PQgSM78Bf47",
"iclr_2022_2yITmG7YIFT",
"iclr_2022_2yITmG7YIFT",
"iclr_2022_2yITmG7YIFT",
"iclr_2022_2yITmG7YIFT"
] |
iclr_2022_VDdDvnwFoyM | TimeVAE: A Variational Auto-Encoder for Multivariate Time Series Generation | Recent work in synthetic data generation in the time-series domain has focused on the use of Generative Adversarial Networks. We propose a novel architecture for synthetically generating time-series data with the use of Variational Auto-Encoders (VAEs). The proposed architecture has several distinct properties: interpretability, ability to encode domain knowledge, and reduced training times. We evaluate data generation quality by similarity and predictability against four multivariate datasets. We experiment with varying sizes of training data to measure the impact of data availability on generation quality for our VAE method as well as several state-of-the-art data generation methods. Our results on similarity tests show that the VAE approach is able to accurately represent the temporal attributes of the original data. On next-step prediction tasks using generated data, the proposed VAE architecture consistently meets or exceeds the performance of state-of-the-art data generation methods. While noise reduction may cause the generated data to deviate from the original data, we demonstrate that the resulting de-noised data can significantly improve performance for next-step prediction using generated data. Finally, the proposed architecture can incorporate domain-specific time-patterns such as polynomial trends and seasonalities to provide interpretable outputs. Such interpretability can be highly advantageous in applications requiring transparency of model outputs or where users desire to inject prior knowledge of time-series patterns into the generative model. | Reject | The authors propose a VAE-based architecture for generating multivariate time series. The base version of TimeVAE models a distribution over fixed-length sequences of observations using a latent vector of fixed dimensionality and a convolutional encoder and decoder. The Interpretable TimeVAE model incorporates additional features from traditional time series models such as explicit modelling of trends and seasonality. TimeVAE is compared to several baselines such as TimeGAN on four small time series datasets and seems to perform competitively according to two custom evaluation metrics and a visualization.
The reviewers thought that the paper was interesting but not ready for publication due to the following:
- The paper's contributions and their significance are not clear
- Interpretable VAE was not used in the experiments and its interpretability has not been verified
- Coverage of related work is insufficient | test | [
"uVOESFjnmf",
"0gffk67PcC-",
"8XdBV6d5Ebt",
"YjQAZ6UovRg"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes an architecture for synthetically generating time-series data with the use of VAE. The authors claim that the contributions for this paper are interpretability, capable of encoding domain knowledge, and reduced training time. This paper is motivated by the limited data access for the time-seri... | [
3,
3,
3,
3
] | [
4,
4,
4,
4
] | [
"iclr_2022_VDdDvnwFoyM",
"iclr_2022_VDdDvnwFoyM",
"iclr_2022_VDdDvnwFoyM",
"iclr_2022_VDdDvnwFoyM"
] |
iclr_2022_nKZvpGRdJlG | Mind Your Solver! On Adversarial Attack and Defense for Combinatorial Optimization | Combinatorial optimization (CO) is a long-standing challenging task not only in its inherent complexity (e.g. NP-hardness) but also in its possible sensitivity to input conditions. In this paper, we take an initiative in developing mechanisms for adversarial attack and defense for combinatorial optimization solvers, whereby the solver is treated as a black-box function and the original problem's underlying graph structure (which is often available and associated with the problem instance, e.g. DAG, TSP) is attacked under a given budget. Experimental results on three real-world combinatorial optimization problems reveal the vulnerability of existing solvers to adversarial attack, including commercial solvers like Gurobi. In particular, we present a simple yet effective defense strategy to modify the graph structure to increase the robustness of solvers, which shows universal effectiveness across tasks and solvers. | Reject | The paper aims at developing mechanisms for adversarial attack and defense towards combinatorial optimization solvers, where the solver is treated as a black-box function and the original problem’s underlying graph structure is attacked under a given budget. While the reviewers found the problem novel and interesting, they were not convinced by the problem formulation, the proposed solutions, or the experimental setup. Some of the points that the reviewers brought up during the discussion include: (i) the attack on the TSP does not follow the main paper's attack principle of adding and deleting edges, (ii) in general, it has not been explained why all these modifications are really "relaxations", (iii) the notations are very confusing, and (iv) while the authors' response on loosening the constraints makes sense, the experiments (i.e., the TSP problem setting) in this work are not consistent with this clarification. Addressing the above points will significantly improve the manuscript. | val | [
"l_c4-L15YRg",
"xet3Ro63JXc",
"b279wPlcqK1",
"cW6EfNpsU83",
"0wflik6lrhC",
"u2HuzO10Bu2",
"UZ7oVuVlIh8",
"ERXHKhJW-ot",
"2-n06NFlFXn",
"ner2akXsuBg",
"3un6WZb5UYj",
"sXuG8Sj1jLg",
"HXP95yp7d-Q",
"ocU7ZHxW0n",
"Iasr8sgc2w"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your reply.\n\nCurrently in my opinion, maybe we cannot impose meaning on the difference 110-100. As you've claimed before, the problem after the attack has been changed and the difference only makes sense under the setting \"the optimal solution becomes better\" since it ensures that the gap becomes l... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
1,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
3,
4
] | [
"xet3Ro63JXc",
"b279wPlcqK1",
"cW6EfNpsU83",
"0wflik6lrhC",
"u2HuzO10Bu2",
"2-n06NFlFXn",
"ocU7ZHxW0n",
"Iasr8sgc2w",
"HXP95yp7d-Q",
"sXuG8Sj1jLg",
"iclr_2022_nKZvpGRdJlG",
"iclr_2022_nKZvpGRdJlG",
"iclr_2022_nKZvpGRdJlG",
"iclr_2022_nKZvpGRdJlG",
"iclr_2022_nKZvpGRdJlG"
] |
iclr_2022_2jYxq9_TkpG | Network Pruning Optimization by Simulated Annealing Algorithm | One critical problem of large neural networks is over-parameterization with a large number of weight parameters. This becomes an obstacle to implementing networks on edge devices and limits the development of industrial applications by engineers for machine learning problems. Plenty of papers have shown that redundant branches can be erased strategically in a fully connected network. In this work, we reduce network complexity by pruning and structure optimization. We propose to perform network optimization by Simulated Annealing, a heuristic-based non-convex optimization method which can potentially solve this NP-hard problem and find the global minimum for a given percentage of branch pruning, given a sufficient amount of time. Our results have shown that Simulated Annealing can significantly reduce the complexity of a fully connected neural network with only limited loss of performance. | Reject | This paper presents the use of Simulated Annealing (SA) for pruning and optimizing the architecture of a neural network. After reviewing the paper and taking the reviewing process into consideration, here are my comments:
- The contribution of the paper and its novelty are limited and not well presented
- The related work is very sparse. It requires a major improvement.
- The main concern is about the simplistic experiments and the lack of comparison between the results of the proposal and the SOTA methods.
- Conclusions are not well supported by the results.
From the above, the paper does not fulfill the standards of ICLR. | val | [
"_UZnnE3lmS-",
"KCCfEkoRdr",
"EQvbHK0__Uy",
"Ton7WOsnVQJ",
"oxChfxw5_8k",
"xyZr8EP62xK",
"JyMf0ECzPZe",
"hksxp8KokLW",
"1pAzpZFpPOC",
"bQchaZkK6C",
"dk2aIUSgr4S"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the response. I agree that observing the effectiveness in a smaller setting is a prerequisite to larger scale experiments. Since these changes in experimental design are not incorporated (and nor should we should expect them to be in the revision period), I will maintain my score.",
" Thank you fo... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
1,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"oxChfxw5_8k",
"JyMf0ECzPZe",
"Ton7WOsnVQJ",
"dk2aIUSgr4S",
"bQchaZkK6C",
"1pAzpZFpPOC",
"hksxp8KokLW",
"iclr_2022_2jYxq9_TkpG",
"iclr_2022_2jYxq9_TkpG",
"iclr_2022_2jYxq9_TkpG",
"iclr_2022_2jYxq9_TkpG"
] |
iclr_2022_tvwNdOKhuF5 | Superior Performance with Diversified Strategic Control in FPS Games Using General Reinforcement Learning | This paper offers an overall solution for first-person shooter (FPS) games to achieve superior performance using general reinforcement learning (RL). We introduce an agent in ViZDoom that can surpass previous top agents ranked in the open ViZDoom AI Competitions by a large margin. The proposed framework consists of a number of generally applicable techniques, including hindsight experience replay (HER) based navigation, hindsight proximal policy optimization (HPPO), rule-guided policy search (RGPS), prioritized fictitious self-play (PFSP), and diversified strategic control (DSC). The proposed agent outperforms existing agents by taking advantage of diversified and human-like strategies, instead of larger neural networks, more accurate frag skills, or hand-crafted tricks, etc. We provide comprehensive analysis and experiments to elaborate on the effect of each component on the agent's performance, and demonstrate that the proposed and adopted techniques are important for achieving superior performance in general end-to-end FPS games. The proposed methods can contribute to other games and real-world tasks which also require spatial navigation and diversified behaviors. | Reject | The authors propose a method for training agents in FPS games, and achieve good results in a VizDoom setting. The method combines a number of different components and ideas, and it is not clear which of these are crucial to the success. In particular, ablations of the method are missing, as well as more runs to test variability and diversity. In addition, the paper is not all that easy to read. Reviewers had a number of partly overlapping concerns, of which I've tried to distil the main ones above. While the empirical results are promising, it is clear that much more work is needed to distil this method into generalizable knowledge. | train | [
"pUH--VhQLCP",
"iJntoNk75E",
"gMceTRJO_i6",
"Ejvr4OalSyW",
"Nq_3T_U-346",
"mA7PNWkOsYw",
"VMXR2Fvupxc",
"VYTWgKFko-9",
"-pKvzWAHsYf",
"j89yKWr6_PH"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposed a multi-stage learning framework for solving FPS games, with hindsight experience replay, goal conditioned reinforcement learning, and prioritized self-play. This whole work includes an overall solution, with combinations of existing work. Strengths:\n1. The overall performance seems to be bett... | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
4
] | [
"iclr_2022_tvwNdOKhuF5",
"gMceTRJO_i6",
"pUH--VhQLCP",
"VMXR2Fvupxc",
"j89yKWr6_PH",
"-pKvzWAHsYf",
"VYTWgKFko-9",
"iclr_2022_tvwNdOKhuF5",
"iclr_2022_tvwNdOKhuF5",
"iclr_2022_tvwNdOKhuF5"
] |
iclr_2022_SZRqWWB4AAh | SABAL: Sparse Approximation-based Batch Active Learning | We propose a novel and general framework (i.e., SABAL) that formulates batch active learning as a sparse approximation problem. SABAL aims to find a weighted subset from the unlabeled data pool such that the corresponding training loss function approximates its full data pool counterpart. We realize the general framework as a sparsity-constrained discontinuous optimization problem that explicitly balances uncertainty and representation for large-scale applications, for which we propose both greedy and iterative hard thresholding schemes. The proposed method can adapt to various settings, including both Bayesian and non-Bayesian neural networks. Numerical experiments show that SABAL achieves state-of-the-art performance across different settings with lower computational complexity. | Reject | This paper presents a batch active learning approach (where in each active learning round, instead of a single input, we wish to select several inputs to be labeled). The paper attempts to solve this problem by posing it as a sparse approximation problem and shows that their approach performs favorably as compared to some of the existing methods such as BALD and Bayesian Coresets for batch active learning.
While the reviewers appreciated the basic idea and the general framework, there were several concerns from the reviewers (as well as from myself upon reading the manuscript). Firstly, the idea of batch active learning as a sparse subset selection problem is not new (Pinsler et al, 2019). While previous methods such as (Pinsler et al, 2019) have used ideas such as Coresets, this paper uses sparse optimization techniques such as Greedy and IHT. Moreover, there were concerns about experimental settings relying on various heuristics, and a lack of a more extensive and thorough comparison with important baselines, such as BatchBALD and others, which the authors acknowledged.
The reviewers have read the authors' response and engaged in discussion but their assessment remained unchanged. Based on their assessment and my own reading of the manuscript, the paper does not seem to be ready for publication. The authors are advised to consider the points raised in the reviews which I hope will help strengthen the paper for a future submission. | train | [
"JLaaRCM-LPe",
"jw_r26KvJM3",
"1cMgwzIqZVm",
"R7xmvA8FLwm",
"_D4YKUuUKY",
"sz4HVmoErv2",
"4w5DEtwsKrh",
"auks8l8N9R-",
"8I_KImWAaUG",
"MEVYUs0pXuX",
"tyqGw-1VjtA"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for providing the detailed response. I would like to acknowledge that I have read the response. \nBy reading other reviews, I can see that the paper is unfortunately still under the acceptance bar. I hope the authors can take the reviewer's feedbacks in a constructive way to improve the paper for future su... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"_D4YKUuUKY",
"1cMgwzIqZVm",
"4w5DEtwsKrh",
"MEVYUs0pXuX",
"auks8l8N9R-",
"tyqGw-1VjtA",
"8I_KImWAaUG",
"iclr_2022_SZRqWWB4AAh",
"iclr_2022_SZRqWWB4AAh",
"iclr_2022_SZRqWWB4AAh",
"iclr_2022_SZRqWWB4AAh"
] |
iclr_2022_zuDmDfeoB_1 | How Does the Task Landscape Affect MAML Performance? | Model-Agnostic Meta-Learning (MAML) has become increasingly popular for training models that can quickly adapt to new tasks via one or a few stochastic gradient descent steps. However, the MAML objective is significantly more difficult to optimize compared to standard non-adaptive learning (NAL), and little is understood about how much MAML improves over NAL in terms of the fast adaptability of their solutions in various scenarios. We analytically address this issue in a linear regression setting consisting of a mixture of easy and hard tasks, where hardness is related to the rate at which gradient descent converges on the task. Specifically, we prove that in order for MAML to achieve substantial gain over NAL, (i) there must be some discrepancy in hardness among the tasks, and (ii) the optimal solutions of the hard tasks must be closely packed with the center far from the center of the easy tasks' optimal solutions. We also give numerical and analytical results suggesting that these insights apply to two-layer neural networks. Finally, we provide few-shot image classification experiments that support our insights for when MAML should be used and emphasize the importance of training MAML on hard tasks in practice. | Reject | The paper compares MAML and NAL for meta-learning, and provides theoretical explanations, on some very simple models, of when MAML can be significantly better than NAL, related to a definition of task hardness. The findings are also supported by experimental results.
While the results are plausible and can mark the starting point of a useful analysis, the models analyzed in the paper are too simplistic to warrant publication at ICLR. The authors are encouraged to extend their methodology to more complicated task models, as well as to, e.g., multi-step versions of MAML (since the considered version of MAML makes a single step, the proposed problem hardness may not be applicable in more general situations). It is also not clear how the derived insights can guide the practical applications of MAML. | train | [
"D9pMv0kL9V_",
"ofPS6AiV_fA",
"K2YhoV1DNE8",
"_fkuuAGRmvf",
"99e6fKhP1pi",
"dEt0JTe2EWM",
"9AIdfS-BrE",
"UpZzEQBYUh",
"m6MhRKu6hD",
"tItlGpWqdR1",
"nOMar2XTAK-",
"XV8T-2YJQ4O",
"RQljuCcy-hp"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for considering our response and for your detailed feedback. \n\n- Regarding comparison with other baselines: Ultimately our aim is to determine when and why MAML is effective. To do so, we need to compare against the algorithm that does not make MAML's distinctive inner loop update. Unlike [1], we perf... | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
5
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"K2YhoV1DNE8",
"iclr_2022_zuDmDfeoB_1",
"tItlGpWqdR1",
"dEt0JTe2EWM",
"9AIdfS-BrE",
"m6MhRKu6hD",
"XV8T-2YJQ4O",
"nOMar2XTAK-",
"RQljuCcy-hp",
"ofPS6AiV_fA",
"iclr_2022_zuDmDfeoB_1",
"iclr_2022_zuDmDfeoB_1",
"iclr_2022_zuDmDfeoB_1"
] |
iclr_2022_ldkunzUzRWj | A Simple and Debiased Sampling Method for Personalized Ranking | Pairwise ranking models have been widely used to address various problems, such as recommendation. The basic idea is to learn the rank of users' preferred items by separating items into positive samples, if user-item interactions exist, and negative samples otherwise. Due to the limited number of observed interactions, pairwise ranking models face a serious class-imbalance issue. Our theoretical analysis shows that current sampling-based methods cause the vertex-level imbalance problem, which drives the norm of the learned item embeddings towards infinity after a certain number of training iterations, and consequently results in vanishing gradients and degrades the model performance. To this end, we propose VINS, an efficient \emph{\underline{Vi}tal \underline{N}egative \underline{S}ampler}, to alleviate the class-imbalance issue for pairwise ranking models optimized by gradient methods. The core of VINS is a biased sampler with a rejection probability that tends to accept a negative candidate with a larger popularity than the given positive item. Evaluation results on several real datasets demonstrate that the proposed sampling method speeds up the training procedure by 30\% to 50\% for ranking models ranging from shallow to deep, while maintaining and even improving the quality of ranking results in top-N item recommendation. | Reject | The reviewers remained concerned about the overall novelty of the paper, finding the contributions somewhat incremental. The authors are encouraged to better substantiate the design choices that they make, to improve the overall presentation, and to contrast with the works/line of research brought up by the reviewers. | train | [
"auZE3yd5GZ1",
"BsNdJspE5bF",
"rWCJdU1xf9D",
"Rr220AajxoF",
"AApkPXZv6FC",
"xJIwwjzGOF-",
"dsF9PSJ6gP0",
"RYxjn_6NYuZ",
"GVBIHMp0lBX",
"khJVYsaTgiZ"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear QcpY:\n\nThank you so much for reading our response and giving a further feedback to let us know your concerns.\n\n1. Both of VINS and PRIS employ the reject sampling strategy. However, they have fundamental difference from each other. PRIS assumes that an item to be sampled as a negative one follows distrib... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
3,
3,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
5,
4
] | [
"BsNdJspE5bF",
"Rr220AajxoF",
"khJVYsaTgiZ",
"GVBIHMp0lBX",
"RYxjn_6NYuZ",
"dsF9PSJ6gP0",
"iclr_2022_ldkunzUzRWj",
"iclr_2022_ldkunzUzRWj",
"iclr_2022_ldkunzUzRWj",
"iclr_2022_ldkunzUzRWj"
] |
iclr_2022__55bCXzj3D9 | Exploring and Evaluating Personalized Models for Code Generation | Large Transformer models have achieved state-of-the-art status for Natural Language Understanding and are increasingly the baseline architecture for source code generation models. Transformers are usually pre-trained on a large unsupervised corpus, learning token representations and transformations relevant to modeling generally available text, and then fine-tuned on a particular task of interest. While fine-tuning is a tried-and-true method for adapting a model to a new domain, for example question-answering on a given topic or a source code generation model, generalization remains an ongoing challenge. Here we explore the ability of various levels of model fine-tuning to improve generalization through personalization. In the context of generating unit tests for Java methods, we evaluate several methods for personalizing transformer models to a specific Java project. We consider three fine-tuning approaches: (i) custom fine-tuning, which allows all the model parameters to be tuned; (ii) lightweight fine-tuning, which freezes most of the model's parameters, allowing tuning of the token embeddings and softmax layer, or of the final layer alone; (iii) prefix tuning, which keeps language model parameters frozen, but optimizes a small project-specific prefix vector. Each of these techniques offers a different trade-off in total compute cost and prediction performance, which we evaluate by code and task-specific metrics, training time, and total computational operations. We compare these fine-tuning strategies for code generation and discuss the potential generalization and cost benefits of each in deployment scenarios. | Reject | The paper presents an empirical study of different strategies for fine-tuning a large language model for the task of generating Java unit tests *for a specific project*.
As several reviewers pointed out, the setup itself is fairly impractical, requiring fine-tuning on an individual project, thus making it applicable only to the very tail-end of very large projects where the investment of doing this would make sense and where one could reasonably collect sufficient data for that project.
On top of that, the paper contributes relatively little in terms of novel techniques. This in itself would be OK if the paper presented some extremely important empirical evidence. However, reviewers also raised some important concerns with the empirical evaluation itself. For example, as reviewer 1jM4 pointed out, there is prior research explicitly showing that the BLEU score is not a good measure for code evaluation.
Overall, the meta-reviewer agrees with the reviewers that this paper is below the bar for publication. | train | [
"FbPHR3QhBJd",
"Xzeao9MU-zX",
"7bin_tpv32c",
"fOBU1wV6enr"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"- This work studies customizing models for code towards specific projects/coding standards/preferences for unit test case generation task. It specifically studies this in the context of server-side customization, where one entity would need to maintain multiple customized models, as opposed to client-side customiz... | [
3,
3,
3,
1
] | [
5,
5,
4,
5
] | [
"iclr_2022__55bCXzj3D9",
"iclr_2022__55bCXzj3D9",
"iclr_2022__55bCXzj3D9",
"iclr_2022__55bCXzj3D9"
] |
iclr_2022_GBszJ1XlKDj | Quasi-Newton policy gradient algorithms | Policy gradient algorithms have been widely applied to reinforcement learning (RL) problems in recent years. Regularization with various entropy functions is often used to encourage exploration and improve stability. In this paper, we propose a quasi-Newton method for the policy gradient algorithm with entropy regularization. In the case of Shannon entropy, the resulting algorithm reproduces the natural policy gradient (NPG) algorithm. For other entropy functions, this method results in brand new policy gradient algorithms. We provide a simple proof that all these algorithms enjoy Newton-type quadratic convergence near the optimal policy. Using synthetic and industrial-scale examples, we demonstrate that the proposed quasi-Newton method typically converges in single-digit iterations, often orders of magnitude faster than other state-of-the-art algorithms. | Reject | This is a nice paper which shows that KL-regularized natural policy gradient (assuming exact access to the MDP, meaning no noise in the reward and Q function estimates), which achieves linear convergence, can use ideas from quasi-Newton methods and recover their quadratic convergence. Given the excitement surrounding policy gradient methods and their convergence rates, this is a valuable direction and family of ideas. Unfortunately, the reviewers had many concerns about presentation, and also about the exact meaning and relationship of the results to prior work; I'll add to this and note that one issue with quasi-Newton methods is that it is unclear how long the "burn-in" phase is, meaning the phase before their quadratic convergence kicks in, and this is still an issue in the present work's theory; another issue, as raised by reviewers, is the difference between the regularized and unregularized optimal policies. As such, it makes sense for this paper to receive more time and polish.
"yxY-lS5hAp",
"1ALjUfUAHR5",
"mmvjtiEalif",
"O-UTBZc9Gz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a quasi-Newton method for policy gradient algorithm with entropy regularization, which is popular in solving the reinforcement learning problem. With various entropy functions, this paper establishes quadratic convergence rate for the proposed algorithm. Such convergence rate is verified using ... | [
5,
3,
5,
5
] | [
3,
3,
4,
3
] | [
"iclr_2022_GBszJ1XlKDj",
"iclr_2022_GBszJ1XlKDj",
"iclr_2022_GBszJ1XlKDj",
"iclr_2022_GBszJ1XlKDj"
] |
iclr_2022_K47zHehHcRc | On the interventional consistency of autoencoders | Autoencoders have played a crucial role in the field of representation learning since its inception, proving to be a flexible learning scheme able to accommodate various notions of optimality of the representation. The now established idea of disentanglement and the recently popular perspective of causality in representation learning identify modularity and robustness to be essential characteristics of the optimal representation. In this work, we show that the current conceptual tools available to assess the quality of the representation against these criteria (e.g. latent traversals or disentanglement metrics) are inadequate. In this regard, we introduce the notion of \emph{interventional consistency} of a representation and argue that it is a desirable property of any disentangled representation. We develop a general training scheme for autoencoders that takes into account interventional consistency in the optimality condition. We present empirical evidence toward the validity of the approach on three different autoencoders, namely standard autoencoders (AE), variational autoencoders (VAE) and structural autoencoders (SAE).
Another key finding in this work is that differentiating between information and structure in the latent space of autoencoders can increase the modularity and interpretability of the resulting representation. | Reject | The paper introduces the notion of interventional consistency of a representation learned using autoencoders, which is claimed to be a desirable property for disentanglement. The reviewers agree that the contributions are novel and relevant, but they also found the paper hard to follow due to a lack of clarity and motivation. Further, they considered the underlying assumptions to be very strong, such that it may be hard to find practical instances where they hold (e.g., the assumption that statistical dependencies in the prior are preserved by the response map). The reviewers also noted that some real-world examples showing interventional consistency would be helpful.
All in all, the paper contains interesting ideas and we would like to encourage the authors to pursue this line of work. Still, the paper in its current form is not ready for publication. We encourage the authors to address the reviewers' comments explicitly in a future version of the manuscript.
"wYd0GrX_07R",
"AjARk6018KU",
"GY6sdbJB8Wr",
"ANLSfcZR3NC",
"7H06pPFF9H",
"qPXSIdCl_XP",
"zX-PvVUkhu7",
"XKK_JFhCzW",
"2iYRhhyNvMg",
"zdCVf8cLjja",
"BMWvcXtlDpQ",
"MEYKsj4sff"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear authors,\n\nI appreciate your comments. \n\n1. I believe the paper could be an impactful paper if equipped properly with a gentle guide to background knowledge and a clear exposition.\n2. I asked about real-world examples because I was wondering how scalable the proposed approach would be. Many challenges in... | [
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
3,
6,
3
] | [
-1,
-1,
-1,
2,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"XKK_JFhCzW",
"qPXSIdCl_XP",
"7H06pPFF9H",
"iclr_2022_K47zHehHcRc",
"MEYKsj4sff",
"BMWvcXtlDpQ",
"ANLSfcZR3NC",
"zdCVf8cLjja",
"iclr_2022_K47zHehHcRc",
"iclr_2022_K47zHehHcRc",
"iclr_2022_K47zHehHcRc",
"iclr_2022_K47zHehHcRc"
] |
iclr_2022_aq6mqSkwApo | Meta-OLE: Meta-learned Orthogonal Low-Rank Embedding | We introduce Meta-OLE, a new geometry-regularized method for fast adaptation to novel tasks in few-shot image classification. The proposed method learns to adapt for each few-shot classification task a feature space with simultaneous inter-class orthogonality and intra-class low-rankness. Specifically, a deep feature extractor is trained by explicitly imposing orthogonal low-rank subspace structures among features corresponding to different classes within a given task. To adapt to novel tasks with unseen categories, we further meta-learn a light-weight transformation to enhance the inter-class margins. As an additional benefit, this light-weight transformation lets us exploit the query data for label propagation from labeled to unlabeled data without any auxiliary network components. The explicitly geometry-regularized feature subspaces allow the classifiers on novel tasks to be inferred in a closed form, with an adaptive subspace truncation that selectively discards non-discriminative dimensions. We perform experiments on standard few-shot image classification tasks, and observe performance superior to state-of-the-art meta-learning methods. | Reject | This paper proposes a meta-learning method with a latent feature space with a special structure of orthogonality and low-rankness. This paper is well written, and the use of the orthogonal low-rank embedding for meta-learning is interesting. The experimental results (including additional experiments in the author response) demonstrate the effectiveness of the proposed method. The author response addressed some concerns of the reviewers. However, the novelty of the proposed method is not high enough. | train | [
"GOnnssZLVe",
"7oc1Aymhw55",
"42WoSwsLLT",
"9XDDPu4N_y9",
"BGzBRQ4AegM",
"61R-n0EXsB",
"2O4HZNBljq2",
"NgourNxiIYB",
"t-Nd2XJamPP",
"3g98rJ9SYL",
"GdekuO38A0m",
"uvXAeHbGDU",
"P2u0hvwtGMa"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" \nThe key concern is actually still the performance that may not fully support the claim of this paper, whilst the proposed model in transductive setting is inferior to many existing inductive works. The validated novelty should in principle be supported by sufficient evidence to show the efficacy, typically beat... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
5,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
4,
4
] | [
"NgourNxiIYB",
"BGzBRQ4AegM",
"61R-n0EXsB",
"iclr_2022_aq6mqSkwApo",
"t-Nd2XJamPP",
"3g98rJ9SYL",
"iclr_2022_aq6mqSkwApo",
"uvXAeHbGDU",
"P2u0hvwtGMa",
"2O4HZNBljq2",
"iclr_2022_aq6mqSkwApo",
"iclr_2022_aq6mqSkwApo",
"iclr_2022_aq6mqSkwApo"
] |
iclr_2022_ffS_Y258dZs | Meta-Referential Games to Learn Compositional Learning Behaviours | Referring to compositional learning behaviours as the ability to learn to generalise compositionally from a limited set of stimuli, which are combinations of supportive stimulus components, to a larger set of novel stimuli, i.e. novel combinations of those same stimulus components, we acknowledge compositional learning behaviours as a valuable feat of intelligence that human beings often rely on and assume their collaborative partners to use similarly. In order to build artificial agents able to collaborate with human beings, we propose a novel benchmark to investigate state-of-the-art artificial agents' abilities to exhibit compositional learning behaviours. We provide baseline results on the single-agent tasks of learning compositional learning behaviours, using state-of-the-art RL agents, and show that our proposed benchmark is a compelling challenge that we hope will spur the research community towards developing more capable artificial agents. | Reject | All reviewers have agreed that the topic of evaluating compositional skills of agents is an important one and that casting compositional learning as meta-reinforcement learning is an interesting approach. At the same time, reviewers have raised concerns with respect to the benchmark itself, the exposition and clarity of the ideas, as well as the experimental evidence used to support some of the claims. The authors have not provided an author response but have acknowledged the reviewers' feedback.
As this paper stands, I cannot recommend acceptance of the current manuscript.
"LB0_8JW7qb8",
"BJz8rBDXnp",
"6iHow_EeoC",
"6yYdglgLmal",
"-McbUfD4UvY"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewers for their insightful reviews: thank you very much!\n\nWe will improve the paper, making it easier to understand and easier to build upon, with the reviewers' comments in mind.\n",
"In this paper, the authors propose a new benchmark to investigate state-of-the-art artificial agents' abilit... | [
-1,
3,
3,
3,
1
] | [
-1,
2,
3,
3,
4
] | [
"iclr_2022_ffS_Y258dZs",
"iclr_2022_ffS_Y258dZs",
"iclr_2022_ffS_Y258dZs",
"iclr_2022_ffS_Y258dZs",
"iclr_2022_ffS_Y258dZs"
] |
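The benchmark in the record above tests whether agents generalise to novel combinations of familiar stimulus components. As a toy illustration of that evaluation protocol (not the benchmark itself, whose referential-game and RL tasks are not reproduced here), the sketch below builds a compositional train/test split over invented attribute spaces:

```python
import itertools
import random

def compositional_split(attribute_values, holdout_frac=0.2, seed=0):
    """Hold out stimuli that are novel *combinations* of components,
    while keeping every individual component value visible in training."""
    combos = list(itertools.product(*attribute_values))
    rng = random.Random(seed)
    rng.shuffle(combos)
    n_test = int(len(combos) * holdout_frac)
    test, train = combos[:n_test], combos[n_test:]
    # A test stimulus is only a fair probe of compositional learning if
    # each of its component values was seen (in other combinations).
    seen = [{c[i] for c in train} for i in range(len(attribute_values))]
    kept = [c for c in test if all(c[i] in seen[i] for i in range(len(c)))]
    train += [c for c in test if c not in kept]
    return train, kept

shapes = ["circle", "square", "triangle"]
colors = ["red", "green", "blue"]
sizes = ["small", "large"]
train, test = compositional_split([shapes, colors, sizes])
print(len(train), len(test))  # e.g. 15 3
```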
iclr_2022_t1QXzSGwr9 | Image Compression and Classification Using Qubits and Quantum Deep Learning | Recent work suggests that quantum machine learning techniques can be used for classical image classification by encoding the images in quantum states and using a quantum neural network for inference. However, such work has been restricted to very small input images, at most $4 \times 4$, that are unrealistic and cannot even be accurately labeled by humans. The primary difficulty in using larger input images is that hitherto-proposed encoding schemes necessitate more qubits than are physically realizable. We propose a framework to classify larger, realistic images using quantum systems. Our approach relies on a novel encoding mechanism that embeds images in quantum states while necessitating fewer qubits than prior work. Our framework is able to classify images that are larger than previously possible, up to $16 \times 16$ for the MNIST dataset on a personal laptop, and obtains accuracy comparable to classical neural networks with the same number of learnable parameters. We also propose a technique for further reducing the number of qubits needed to represent images, which may result in an easier physical implementation at the expense of final performance. Our work enables quantum machine learning and classification on classical datasets of dimensions that were previously intractable by physically realizable quantum computers or classical simulation.
| Reject | This submission proposes a new encoding mechanism, i.e. a new quantum data loader for images with a reduced number of qubits, which is then used for image classification with off-the-shelf quantum neural networks (from TensorFlow Quantum). There are two major concerns raised by most reviewers. The first concern regards the novelty in the design of the quantum data loader and the use of off-the-shelf quantum neural networks (QNNs), the latter of which is neither novel nor close to state-of-the-art QNNs for the same purpose. The quantum data loading procedure also assumes a binary representation of images, which might not be enough for low-contrast images. Although the number of qubits is reduced, the circuit tends to have a large depth, which makes practical implementation hard. The second concern regards the overall performance of the proposed solution for image classification, where a clear quantum benefit is missing or, in some cases, a quantum disadvantage shows up. Based on these discussions, we believe that the submission requires substantial improvements before publication. | train | [
"UAx0K8pksi",
"-fY8VzUu-75",
"cOXpTIprygq",
"dxJnZaG8M70",
"ATRw4_CZ99F"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We agree that our paper requires substantial improvements to meet the standards for publication at ICLR, and appreciate the time the reviewers have spent to provide thoughtful feedback. We will use this to improve our submission in the future",
"In this paper, the authors propose a certain type of trainable qua... | [
-1,
3,
1,
5,
3
] | [
-1,
4,
5,
2,
5
] | [
"iclr_2022_t1QXzSGwr9",
"iclr_2022_t1QXzSGwr9",
"iclr_2022_t1QXzSGwr9",
"iclr_2022_t1QXzSGwr9",
"iclr_2022_t1QXzSGwr9"
] |
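The abstract above hinges on encoding images into quantum states with fewer qubits. As a point of reference (this is standard amplitude encoding, not necessarily the paper's proposed mechanism), the NumPy sketch below shows the qubit-count arithmetic: a binary 16x16 image fits in the amplitudes of an 8-qubit state rather than requiring 256 basis-encoded qubits.

```python
import numpy as np

def amplitude_encode(image):
    """Embed a (binarized) image in the amplitudes of a quantum state.

    An H*W-pixel image needs only ceil(log2(H*W)) amplitude-encoded
    qubits, versus one qubit per pixel for basis encoding."""
    flat = image.astype(float).ravel()
    n_qubits = int(np.ceil(np.log2(flat.size)))
    state = np.zeros(2 ** n_qubits)
    state[: flat.size] = flat
    norm = np.linalg.norm(state)
    if norm == 0:
        raise ValueError("all-zero image cannot be normalized to a state")
    return state / norm, n_qubits

img = (np.random.rand(16, 16) > 0.5).astype(int)  # binary 16x16 image
state, n = amplitude_encode(img)
print(n, np.isclose(np.sum(state**2), 1.0))  # 8 True
```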
iclr_2022_Kvbr8NicKq | Fast and Reliable Evaluation of Adversarial Robustness with Minimum-Margin Attack | AutoAttack (AA) has been the most reliable method to evaluate adversarial robustness when considerable computational resources are available. However, its high computational cost (e.g., 100 times more than that of the projected gradient descent attack) makes AA infeasible for practitioners with limited computational resources, and also hinders applications of AA in adversarial training (AT). In this paper, we propose a novel method, the minimum-margin (MM) attack, for fast and reliable evaluation of adversarial robustness. Compared with AA, our method achieves comparable performance but only costs 3% of the computational time in extensive experiments. The reliability of our method lies in evaluating the quality of adversarial examples using the margin between two targets, which precisely identifies the most adversarial example. The computational efficiency of our method lies in an effective Sequential TArget Ranking Selection (STARS) method, which ensures that the cost of the MM attack is independent of the number of classes. The MM attack opens a new way of evaluating adversarial robustness and contributes a feasible and reliable method to generate high-quality adversarial examples in AT. | Reject | The paper focuses on strong adversarial attacks, i.e., attacks that can generate strong adversarial examples and thus better evaluate the adversarial robustness of given deep learning models. One reviewer gave a score of 8 while the other three reviewers gave negative scores. The main issue lies in the limited experiments: as a potential substitute for AA, the proposed MM should be widely tested against different defenses, just as done in the AA paper. The writing of the paper is also not rigorous, including many incorrect statements and unsupported claims, which should be addressed in a revision. Thus, it cannot be accepted to ICLR in its current version. | train | [
"JjjvbSLcC1j",
"f_xT_PzdYef",
"RxbXZXcSIFI",
"ayTXEckd31H"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose minimum-margin (MM) attack to provide comparable performance with AutoAttack while significantly decreasing the computational cost. They propose Sequential TArget Ranking Selection (STARS) to make the computational cost independent of the number of classes. Strengths:\n\n-- The paper is well-w... | [
5,
8,
3,
3
] | [
4,
4,
4,
4
] | [
"iclr_2022_Kvbr8NicKq",
"iclr_2022_Kvbr8NicKq",
"iclr_2022_Kvbr8NicKq",
"iclr_2022_Kvbr8NicKq"
] |
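The MM-attack record above describes evaluating adversarial examples by the margin between two targets. The PyTorch sketch below implements only that margin quantity; the attack's optimization loop and the STARS target-selection procedure are omitted, and the function name is an invention of this sketch.

```python
import torch

def min_margin(logits, labels):
    """Margin between the true-class logit and the largest other logit.

    Minimizing this margin pushes an example toward (and across) the
    decision boundary; a negative value means it is already misclassified.
    """
    true = logits.gather(1, labels.unsqueeze(1)).squeeze(1)
    others = logits.clone()
    others.scatter_(1, labels.unsqueeze(1), float("-inf"))
    runner_up = others.max(dim=1).values
    return true - runner_up

logits = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
print(min_margin(logits, labels))
```

An attack built on this objective would descend this margin with respect to the input under a perturbation-norm constraint, trying a small ranked set of target classes rather than all of them, which is what makes the cost independent of the number of classes.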
iclr_2022_v-27phh2c8O | AARL: Automated Auxiliary Loss for Reinforcement Learning | A good state representation is crucial to reinforcement learning (RL), but an ideal representation is hard to learn with signals from the RL objective alone. Thus, many recent works manually design auxiliary losses to improve sample efficiency and decision performance. However, handcrafted auxiliary losses rely heavily on expert knowledge, and therefore lack scalability and can be suboptimal for boosting RL performance. In this work, we introduce Automated Auxiliary loss for Reinforcement Learning (AARL), a principled approach that automatically searches for the optimal auxiliary loss function for RL. Specifically, based on the collected trajectory data, we define a general auxiliary loss space of size $4.6\times10^{19}$ and explore the space with an efficient evolutionary search strategy. We evaluate AARL on the DeepMind Control Suite and show that the searched auxiliary losses significantly improve RL performance in both pixel-based and state-based settings, with the largest performance gain observed in the most challenging tasks. AARL greatly outperforms state-of-the-art methods and demonstrates strong generalization ability in unseen domains and tasks. We further conduct extensive studies to shed light on the effectiveness of auxiliary losses in RL. | Reject | This paper proposes to use evolutionary methods to learn auxiliary loss functions, demonstrating superior performance vs. typical auxiliary losses previously proposed in the RL literature.
Demonstrating that it is possible to learn, by evolution, auxiliary losses for both pixel and state representations that help agents train significantly faster (even on new environments) is definitely a meaningful contribution, as acknowledged by the majority of reviewers.
Although many of the original reviews' concerns were addressed by the authors during the discussion period, two major ones were only partially answered, both related to the limited empirical evaluation of the proposed approach (which is crucial for such a contribution that aims to demonstrate an improvement over existing related techniques):
1. The limited set of environments used for evaluation (and in particular the lack of partially observable environments)
2. The fact that the baseline being compared to was CURL, which the paper describes as "the state-of-the-art pixel-based RL algorithm", while reviewers mentioned DrQ and RAD as two more recent (and better) algorithms that were known well ahead of the ICLR submission deadline (note that the more recent DrQ-v2 is now even better). Since the data augmentation techniques used by these algorithms help shape the internal representation, like auxiliary losses do, it would have been important to validate that the proposed technique could be useful when plugged on top of such baselines.
The authors did try their best to address these major concerns during the rebuttal period, but the discussion between reviewers and myself came to the conclusion that this wasn't quite convincing enough yet. I encourage the authors to investigate these points in more depth in a future version of this work so as to make the empirical validation stronger (NB: the links provided in the last comment by authors on Nov. 30th didn't work, but this wasn't the main factor in the decision). | val | [
"dmJB-ZjzMVn",
"2Fp-UwI5n8",
"t0LVfmmFqy",
"_BqaHYfGB-C",
"6qXcQu_wqP3",
"1VjnXwaScV0",
"gC7xouSNJ01",
"avvQhiWVyfC",
"JjNH1Knuvs",
"KfJZebE0PA9",
"aFs-QMp8L0",
"_eOb8uuwvI",
"vNh3pGap7Uw"
] | [
"author",
"author",
"public",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" To further address the concerns of the architecture of the encoder from Reviewer xAxX, we conduct an ablation study on the impact of the architecture of the pixel encoder on the performance of AARL agents. The default architecture used in CURL is a 4-layer convolutional encoder. We test AARL with a convolutional ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
3,
5,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
4,
4,
3
] | [
"6qXcQu_wqP3",
"6qXcQu_wqP3",
"iclr_2022_v-27phh2c8O",
"gC7xouSNJ01",
"vNh3pGap7Uw",
"gC7xouSNJ01",
"aFs-QMp8L0",
"iclr_2022_v-27phh2c8O",
"_eOb8uuwvI",
"avvQhiWVyfC",
"iclr_2022_v-27phh2c8O",
"iclr_2022_v-27phh2c8O",
"iclr_2022_v-27phh2c8O"
] |
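The AARL record above searches a huge space of auxiliary-loss definitions with evolution. The sketch below shows the shape of such an evolutionary search over a deliberately tiny, invented candidate space; the paper's actual loss space (size $4.6\times10^{19}$) and its RL-based fitness evaluation are only stubbed here.

```python
import random

# A toy stand-in for an auxiliary-loss space: a candidate is
# (source signals, prediction target, loss operator).
SOURCES = ["state", "action", "reward", "next_state"]
TARGETS = ["state", "next_state", "reward"]
OPERATORS = ["mse", "contrastive", "cosine"]

def random_candidate(rng):
    srcs = tuple(sorted(rng.sample(SOURCES, rng.randint(1, 3))))
    return (srcs, rng.choice(TARGETS), rng.choice(OPERATORS))

def mutate(cand, rng):
    # Mutate exactly one of the three components.
    srcs, tgt, op = cand
    r = rng.random()
    if r < 1 / 3:
        srcs = tuple(sorted(rng.sample(SOURCES, rng.randint(1, 3))))
    elif r < 2 / 3:
        tgt = rng.choice(TARGETS)
    else:
        op = rng.choice(OPERATORS)
    return (srcs, tgt, op)

def evolve(fitness, pop_size=8, generations=5, seed=0):
    rng = random.Random(seed)
    pop = [random_candidate(rng) for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the best half, refill the population with mutated elites.
        elites = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = [mutate(rng.choice(elites), rng)
                    for _ in range(pop_size - len(elites))]
        pop = elites + children
    return max(pop, key=fitness)

# In AARL the fitness would be the RL return of an agent trained with
# the candidate auxiliary loss; a cheap toy fitness is used here.
best = evolve(lambda c: len(c[0]) + (c[2] == "mse"))
print(best)
```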
iclr_2022_Qb07sqX7dVl | Label Augmentation with Reinforced Labeling for Weak Supervision | Weak supervision (WS) is an alternative to traditional supervised learning that addresses the need for ground-truth labels. Data programming is a practical WS approach that allows labeling data samples programmatically, using labeling functions (LFs) instead of hand-labeling each data point. However, the existing approach fails to fully exploit the domain knowledge encoded in LFs, especially when the LFs' coverage is low. This is because the common data programming pipeline neglects to utilize data features during the generative process.
This paper proposes a new approach called reinforced labeling (RL). Given an unlabeled dataset and a set of LFs, RL extends the LFs' outputs to cases not covered by the LFs, based on similarities among samples. Thus, RL can lead to higher labeling coverage for training an end classifier. Experiments on several domains (classification of YouTube comments, wine quality, and weather prediction) show considerable gains: the new approach yields significant performance improvements of up to +21 points in accuracy and +61 points in F1 score compared to the state-of-the-art data programming approach.
| Reject | The paper presents an approach to weak supervision that addresses the potentially low coverage of rule-based labeling functions by assigning similar labels to similar instances (where similarity is computed in feature space).
The reviewers' main concerns were the presentation, as well as the experimental protocol and results. Several directions for improvement were identified by the reviewers and acknowledged by the authors, but the consensus is that, in its current state, the submission is not ready for publication. | train | [
"_M1ms7G7ucA",
"Q-Tcu1sCN-5",
"-uob3F0D3Xb",
"4aCfPbSod2v",
"4d8Nj6JoTXy",
"obEdTHFwix3",
"yakm2zAVe82",
"Bpli8yCLZaU",
"yZqGU3Matli"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response. I agree that applying your method to the datasets in the Wrench benchmark in future iterations would improve the paper, but as-is I will be keeping my score.\n\nI agree that the justification of looking at the setting with limited LF's is an interesting one, but I would suggest compar... | [
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
5
] | [
-1,
-1,
-1,
-1,
-1,
5,
5,
4,
4
] | [
"4aCfPbSod2v",
"yZqGU3Matli",
"Bpli8yCLZaU",
"yakm2zAVe82",
"obEdTHFwix3",
"iclr_2022_Qb07sqX7dVl",
"iclr_2022_Qb07sqX7dVl",
"iclr_2022_Qb07sqX7dVl",
"iclr_2022_Qb07sqX7dVl"
] |
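The reinforced-labeling record above extends labeling-function outputs to uncovered samples via feature similarity. The NumPy sketch below is one plausible reading of that idea (nearest-neighbor cosine similarity with a confidence threshold); the threshold, the propagation rule, and all names are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def propagate_labels(features, weak_labels, threshold=0.9):
    """Extend weak labels (-1 = not covered by any LF) to uncovered
    samples via nearest-neighbor cosine similarity in feature space."""
    covered = weak_labels != -1
    if not covered.any() or covered.all():
        return weak_labels.copy()
    X = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = X[~covered] @ X[covered].T  # (n_uncovered, n_covered)
    nearest = sims.argmax(axis=1)
    confident = sims.max(axis=1) >= threshold
    out = weak_labels.copy()
    out[~covered] = np.where(confident, weak_labels[covered][nearest], -1)
    return out

rng = np.random.default_rng(0)
features = rng.normal(size=(8, 4))
weak_labels = np.array([0, 1, -1, -1, 0, -1, 1, -1])
print(propagate_labels(features, weak_labels, threshold=0.2))
```

The confidence threshold is the key knob: a high threshold propagates few but reliable labels, while a low one maximizes coverage at the risk of noisier training data for the end classifier.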