paper_id (string, lengths 19–21) | paper_title (string, lengths 8–170) | paper_abstract (string, lengths 8–5.01k) | paper_acceptance (string, 18 classes) | meta_review (string, lengths 29–10k) | label (string, 3 classes) | review_ids (list) | review_writers (list) | review_contents (list) | review_ratings (list) | review_confidences (list) | review_reply_tos (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|
iclr_2022_7HhX4mbern | Randomized Signature Layers for Signal Extraction in Time Series Data | Time series analysis is a widespread task in Natural Sciences, Social Sciences, and Engineering. A fundamental problem is finding an expressive yet efficient-to-compute representation of the input time series to use as a starting point to perform arbitrary downstream tasks.
In this paper, we build upon recent work using the signature of a path as a feature map and investigate a computationally efficient technique to approximate these features based on linear random projections. We present several theoretical results to justify our approach and empirically validate that our random projections can effectively retrieve the underlying signature of a path.
We show the surprising performance of the proposed random features on several tasks, including (1) mapping the controls of Stochastic Differential Equations to the corresponding solutions and (2) using the random signatures as time series representation for classification tasks. Besides providing a new tool to extract signatures and further validating the high level of expressiveness of such features, we believe our results provide interesting conceptual links between several existing research areas, suggesting new intriguing directions for future investigations. | Reject | This paper empirically evaluates the performance (in time and accuracy) of randomized signatures for time series, an idea that was developed theoretically in a series of recent papers. While reviewers acknowledge that implementing and testing this idea is relevant, they also consider that the lack of methodological and theoretical novelty, combined with the fact that the experimental results do not convincingly show that randomized signatures outperform existing methods on a variety of tasks, puts the paper below the acceptance bar. | train | [
"8ui8etJTBwe",
"mmgvFk_chTn",
"uO54UhLSCIT",
"K_AhHPuCCzR",
"xaO-zgU9xfg",
"bU9cdbsgSmS",
"ZQjC2H3JEvt",
"tfvlOw8rjg4",
"65tlS3s-UB",
"g8R-99ampP8",
"2JmhrtTugPs"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for their feedback\n\n**W2 : it is expected to show the proposed advantages clearly**\n\nThe advantages of Randomized Signature (RS) compared to Signature are clearly shown in Fig. 7. Theoretical considerations are provided in Section 3.2.\n\nIn particular, looking at Fig. 7, it is apparent... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
4
] | [
"mmgvFk_chTn",
"iclr_2022_7HhX4mbern",
"iclr_2022_7HhX4mbern",
"2JmhrtTugPs",
"g8R-99ampP8",
"65tlS3s-UB",
"tfvlOw8rjg4",
"iclr_2022_7HhX4mbern",
"iclr_2022_7HhX4mbern",
"iclr_2022_7HhX4mbern",
"iclr_2022_7HhX4mbern"
] |
iclr_2022_7N-6ZLyFUXz | Thompson Sampling for (Combinatorial) Pure Exploration | Pure exploration plays an important role in online learning. Existing work mainly focuses on the UCB approach that uses confidence bounds of all the arms to decide which one is optimal. However, the UCB approach faces some challenges when looking for the best arm set under some specific combinatorial structures. It uses the sum of upper confidence bounds within arm set $S$ to judge whether $S$ is optimal. This sum can be much larger than the exact upper confidence bound of $S$, since the empirical means of different arms in $S$ are independent. Because of this, the UCB approach requires much higher complexity than necessary. To deal with this challenge, we explore the idea of Thompson Sampling (TS) that uses independent random samples instead of the upper confidence bounds to make decisions, and design the first TS-based algorithm framework TS-Verify for (combinatorial) pure exploration. In TS-Verify, the sum of independent random samples within arm set $S$ will not exceed the exact upper confidence bound of $S$ with high probability. Hence it solves the above challenge, and behaves better than existing UCB-based algorithms under the general combinatorial pure exploration setting. As for pure exploration of classic multi-armed bandit, we show that TS-Verify achieves an asymptotically optimal complexity upper bound. | Reject | The reviewers overall thought the problem was worth studying. However, no reviewer was particularly excited about this work. The main concern was that the new problem formulation is difficult to compare to prior work. Reviewers felt both more explanation and a deeper detailed comparison would make this a stronger paper. | train | [
"18Wpl1Pn2Eo",
"7IxFDmNlREV",
"05K8LyeN6Rj",
"M_SRNdDiYnP",
"wL0kr-E1wDA",
"mKSQtnm7fg9"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper designs the first TS-based algorithm to solve the pure exploration problem for both multi-armed bandit (MAB) and combinatorial MAB (CMAB) problems. It mainly focuses on the verification problem where a target arm is given and the algorithm needs to determine whether it is the optimal one. \n\nBenefiting... | [
5,
-1,
6,
-1,
6,
5
] | [
4,
-1,
4,
-1,
3,
2
] | [
"iclr_2022_7N-6ZLyFUXz",
"M_SRNdDiYnP",
"iclr_2022_7N-6ZLyFUXz",
"05K8LyeN6Rj",
"iclr_2022_7N-6ZLyFUXz",
"iclr_2022_7N-6ZLyFUXz"
] |
iclr_2022_FuLL40HLCRn | ST-DDPM: Explore Class Clustering for Conditional Diffusion Probabilistic Models | Score-based generative models involve sequentially corrupting the data distribution with noise and then learning to recover the data distribution based on score matching. In this paper, for the diffusion probabilistic models, we first delve into the changes of data distribution during the forward process of the Markov chain and explore the class clustering phenomenon. Inspired by the class clustering phenomenon, we devise a novel conditional diffusion probabilistic model by explicitly modeling the class center in the forward and reverse process, and make an elegant modification to the original formulation, which enables controllable generation and gains interpretability. We also provide another direction for faster sampling and more analysis of our method. To verify the effectiveness of the formulated framework, we conduct extensive experiments on multiple tasks, and achieve competitive results compared with the state-of-the-art methods (conditional image generation on CIFAR-10 with an inception score of 9.58 and FID score of 3.05). | Reject | The paper presents an interesting approach for defining conditional diffusion models. The core idea of this work is based on a new analysis of how class centers evolve in the forward diffusion process. On the positive side, this work builds on top of this analysis and introduces conditional diffusion processes that are guided towards class centers. This paper shows marginal improvements in small image datasets (MNIST and CIFAR-10) and auxiliary applications such as image inpainting and attribute-based image synthesis (demonstrated through only qualitative experiments). On the negative side, the proposed approach has the fundamentally limiting assumption that a class can be represented by a cluster center in the RGB space. 
Unfortunately, this assumption does not hold for practical datasets such as ImageNet where samples in each class have high diversity, and the class centers in the pixel space are not very distinct for different categories. The reviewers have rated this paper slightly above the borderline. They have acknowledged the novelty of the proposed guided diffusion. But they have criticized the submission for the lack of experiments on more common and challenging benchmarks. They also have criticized this work for not providing quantitative results on the auxiliary tasks. I agree with the reviewers that these experiments would shed light on whether the class center idea would hold for more challenging scenarios.
In the rebuttal, the authors provided additional quantitative results for the text-to-image generation task. However, these results show that the proposed method is outperformed by prior works. Most other auxiliary tasks including image inpainting and attribute-to-face generation are still demonstrated through qualitative experiments without detailed quantitative results.
In summary, given the limitation of the proposed approach, this submission currently lacks an in-depth analysis of the proposed work on challenging benchmarks and a detailed quantitative comparison to relevant baselines for the auxiliary tasks. Because of these concerns, we believe that the paper in its current form is not ready for publication at ICLR at this point. | train | [
"Rt4fFLtjeu6",
"6ObJaHK93K8",
"rkXnvLL3Mek",
"y7krOJyvoP",
"mK3cTOb_3Mg",
"JVb1EX0UfUt",
"A69y26I8uYQ",
"mbyVdrwjDk",
"u525n5VIZ6W",
"dFKIfTMoVl9",
"xjI5h2sNGuE",
"0qqeRTGmYG",
"CR6vkwR_hmo",
"BCHCx0DoLy",
"AMprEs8d0ay",
"azjdcIwSNGy",
"d-am7ZxXz44",
"fn7MtTThl7m",
"ILzXMqS_iZ9",... | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"... | [
" We thank the reviewer for the valuable feedback on improving the quality of our work. \nWe are delighted that the reviewer admits the novelty of our method and the value of our works.\nBelow we address specific questions and comments:\n#### **1. About the conditional baselines.**\nThanks for your comments. In pra... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"0_R_eVBah-j",
"iclr_2022_FuLL40HLCRn",
"JVb1EX0UfUt",
"mK3cTOb_3Mg",
"azjdcIwSNGy",
"A69y26I8uYQ",
"mbyVdrwjDk",
"u525n5VIZ6W",
"dFKIfTMoVl9",
"xjI5h2sNGuE",
"ILzXMqS_iZ9",
"iclr_2022_FuLL40HLCRn",
"BCHCx0DoLy",
"AMprEs8d0ay",
"d-am7ZxXz44",
"2SCAT-wxvaa",
"0qqeRTGmYG",
"iclr_2022... |
iclr_2022_UI4K-I2ypG | A Survey on Evidential Deep Learning For Single-Pass Uncertainty Estimation | Popular approaches for quantifying predictive uncertainty in deep neural networks often involve a set of weights or models, for instance via ensembling or Monte Carlo Dropout. These techniques usually produce overhead by having to train multiple model instances or do not produce very diverse predictions. This survey aims to familiarize the reader with an alternative class of models based on the concept of Evidential Deep Learning: For unfamiliar data, they admit “what they don’t know” and fall back onto a prior belief. Furthermore, they allow uncertainty estimation in a single model and forward pass by parameterizing distributions over distributions. This survey recapitulates existing works, focusing on the implementation in a classification setting. Finally, we survey the application of the same paradigm to regression problems. We also provide a reflection on the strengths and weaknesses of the mentioned approaches compared to existing ones and provide the most central theoretical results in order to inform future research. | Reject | This paper surveys a collection of existing works that the author frames as evidential deep learning.
While the paper has been recognized as a nicely written survey, all reviewers have raised the major concern that the paper does not have a sufficient academic contribution compared to the surveyed papers. In particular, novelty appears to be limited as the paper does not offer novel views into the surveyed subfield.
Given the strong consensus among reviewers, I recommend rejecting this paper. | train | [
"JD6SD0mZ_L",
"O6S-L2vcWqE",
"j0ZvIIwdlHt",
"_7R1pRrJr6l",
"DRj7Zuyli5Y",
"8318Zou1j7U",
"RYyB9a24zTG",
"mJdLHol7juC",
"ZF2zI654ThV",
"H8urSpN6gb",
"0Dvrf8sLebb"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I have read the authors’ response as well as other reviewers’ comments. While I appreciate the response, I agree with other reviewers on the lack of concrete technical contribution as a conference paper.\n\nMy general take is that there is certainly value in this informative survey and the authors might want to c... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
1,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"ZF2zI654ThV",
"j0ZvIIwdlHt",
"0Dvrf8sLebb",
"H8urSpN6gb",
"ZF2zI654ThV",
"mJdLHol7juC",
"iclr_2022_UI4K-I2ypG",
"iclr_2022_UI4K-I2ypG",
"iclr_2022_UI4K-I2ypG",
"iclr_2022_UI4K-I2ypG",
"iclr_2022_UI4K-I2ypG"
] |
iclr_2022_MOm8xik_TmO | Isotropic Contextual Representations through Variational Regularization | Contextual language representations achieve state-of-the-art performance across various natural language processing tasks. However, these representations have been shown to suffer from the degeneration problem, i.e. they occupy a narrow cone in the latent space. This problem can be addressed by enforcing isotropy in the latent space. In analogy to variational autoencoders, we suggest applying a token-level variational loss to a Transformer architecture and introduce the prior distribution's standard deviation as model parameter to optimize isotropy. The encoder-decoder architecture allows for learning interpretable embeddings that can be decoded into text again. Extracted features at sentence-level achieve competitive results on benchmark classification tasks. | Reject | This paper develops a variational auto transformer model (VAT), a VAE based on the transformer (encoder-decoder) architecture designed to provide isotropic representations by adding a token-level loss for isotropy. All the reviewers agree that this is a novel architecture with a valid and interesting goal behind it.
Reviewers varied somewhat on their impressions of the paper, but none were strongly positive on accepting it. I think the strongest and most aligned concerns were from reviewers ZoL1 and pcez. They both feel that the experiments do not convincingly demonstrate what is required. It would be good to better establish the success of variational sampling and the usefulness of isotropic representations. I would think that even a page of examples in the appendix, contrasting sampling by various methods, would add a lot of information to what is presented here. It would be even better to have experiments showing the relation between improved isotropy and improved task performance (suggested by j72L). Both reviewers are concerned about the small model and weak results and whether these results would extend to larger models that people actually use. While on the one hand, controlled comparisons are valuable, it is also true that people in NLP routinely like to see results on models of a reasonably competitive size. In practice, for 2019-2021, it seems that people regard having models of BERT-base size as the "reasonable" small size that they will accept and for which there is reasonably good performance and lots of available empirical results. Transformers directly trained with very few layers do not perform that well. Reviewer pcez is also concerned about the change of the data set in the MiniBERT comparison, which seems valid, and reviewer 5v5U is concerned about what's fair in terms of parameter counts.
This paper needs further work with larger and more careful experimental comparisons to meet the needed level of experimental rigor to be convincing. The authors were not able to iterate sufficiently quickly to achieve this during the ICLR reviewing period, so it seems best that the paper be rejected for now, and the authors look to subsequently submit a more developed version of this work. | train | [
"9VA7g4itybD",
"q6DQ8_sktNv",
"6aLoWI8o73R",
"1UanoiTdBAY",
"H-l3aBPO7nG",
"2K7x1Coce9B",
"RFgYfZqC-7",
"YOrLJN615m7",
"3W7n7EgMpEn",
"U06b5gKoC8n",
"Er4gL1CeIol",
"fBkexAqz_IF",
"TbFOyoMosjS",
"h1yYZBVSOYC",
"3gdpaLhzI8h",
"0y-u4qTQru"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the reply, updating, and clarification.",
" Thank you for your response!\n\nI believe that the overall description of the manuscript has been improved. Since connections to related research have been made clear, the manuscript is now easier for the reader to judge the contribution of the proposed ... | [
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
3
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"U06b5gKoC8n",
"fBkexAqz_IF",
"iclr_2022_MOm8xik_TmO",
"2K7x1Coce9B",
"RFgYfZqC-7",
"YOrLJN615m7",
"3W7n7EgMpEn",
"Er4gL1CeIol",
"0y-u4qTQru",
"h1yYZBVSOYC",
"3gdpaLhzI8h",
"6aLoWI8o73R",
"iclr_2022_MOm8xik_TmO",
"iclr_2022_MOm8xik_TmO",
"iclr_2022_MOm8xik_TmO",
"iclr_2022_MOm8xik_TmO"... |
iclr_2022_vDa28vlSBCP | Interactively Generating Explanations for Transformer Language Models | Transformer language models are state-of-the-art in a multitude of NLP tasks. Despite these successes, their opaqueness remains problematic. Recent methods aiming to provide interpretability and explainability to black-box models primarily focus on post-hoc explanations of (sometimes spurious) input-output correlations. Instead, we emphasize using prototype networks directly incorporated into the model architecture and hence explain the reasoning process behind the network's decisions. Moreover, while our architecture performs on par with several language models, it enables one to learn from user interactions. This not only offers a better understanding of language models but uses human capabilities to incorporate knowledge outside of the rigid range of purely data-driven approaches. | Reject | The authors propose a method—"Proto-Trex"—that incorporates prototype networks into text classification architectures to facilitate model explanations via presentation of similar training examples. There was agreement that the direction here is promising and the work contains some nice ideas and a good initial set of evaluations. However, the presentation can be improved substantially to better situate the contribution with respect to related work (and clarify the specific contributions on offer here), and to clarify the key technical details of the proposed approach. | test | [
"Wt2op2uhEcN",
"62YSALI2eLZ",
"5YayRTFs9-",
"epcxgv6fZ_o",
"N32bs7No9K-",
"QhzFUm4_Djp",
"LKT0dRimfeI",
"UyqPTuf7h0d",
"1S3j26IfTj2",
"a2p_xWJ-XhX"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewers for their diligent reviewing and comments. We have provided a complete set of responses to their comments and updated the paper based on the reviewers’ feedback. We would be happy to receive feedback on our responses before the discussion phase ends.",
" We computed the accuracy for the S... | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
4
] | [
"iclr_2022_vDa28vlSBCP",
"5YayRTFs9-",
"a2p_xWJ-XhX",
"1S3j26IfTj2",
"UyqPTuf7h0d",
"LKT0dRimfeI",
"iclr_2022_vDa28vlSBCP",
"iclr_2022_vDa28vlSBCP",
"iclr_2022_vDa28vlSBCP",
"iclr_2022_vDa28vlSBCP"
] |
iclr_2022_BAtutOziapg | Can Stochastic Gradient Langevin Dynamics Provide Differential Privacy for Deep Learning? | Bayesian learning via Stochastic Gradient Langevin Dynamics (SGLD) has been suggested for differentially private learning. While previous research provides differential privacy bounds for SGLD when close to convergence or at the initial steps of the algorithm, the question of what differential privacy guarantees can be made in between remains unanswered. This interim region is essential, especially for Bayesian neural networks, as it is hard to guarantee convergence to the posterior. This paper will show that using SGLD might result in unbounded privacy loss for this interim region, even when sampling from the posterior is as differentially private as desired. | Reject | This paper shows that SGLD can be non-private (in the sense of differential privacy) even when a single step satisfies DP and also when sampling from the true posterior distribution is DP. I believe that it is useful to understand the behavior of SGLD in the intermediate regime. At the same time, the primary question is whether SGLD is DP when the parameters are chosen so as to achieve some meaningful approximation guarantees after some fixed number of steps T and the algorithm achieves them while satisfying DP (but at the same time does not satisfy DP for some number of steps T' > T). Otherwise the setting is somewhat artificial and I find the result to be less interesting and surprising. So while I think the overall direction of this work is interesting, I believe it needs to be strengthened to be sufficiently compelling. | train | [
"g4J5osMChrq",
"ZnZL-D3Rx7S",
"7t9ud2fCz6C",
"A666u0mWv76",
"sgLrVREJhi",
"3-YEJPNEHjA",
"KtqPFJSAB9P",
"xWiOFZbWGco",
"MwmrwQErgEC",
"jPvOkjaGSHT"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I have read author's rebuttal. While there were several clarity issues, the authors have promised to improve these issues in the new version, so I keep my score of marginally above the acceptance threshold.",
" We thank the reviewer for his comments. Please see our responses below.\n\n**1**\n> “Theoretical resu... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"MwmrwQErgEC",
"jPvOkjaGSHT",
"xWiOFZbWGco",
"MwmrwQErgEC",
"KtqPFJSAB9P",
"iclr_2022_BAtutOziapg",
"iclr_2022_BAtutOziapg",
"iclr_2022_BAtutOziapg",
"iclr_2022_BAtutOziapg",
"iclr_2022_BAtutOziapg"
] |
iclr_2022_xRK8xgFuiu | Causal Discovery via Cholesky Factorization | Discovering the causal relationship via recovering the directed acyclic graph (DAG) structure from the observed data is a challenging combinatorial problem. This paper proposes an extremely fast, easy to implement, and high-performance DAG structure recovering algorithm. The algorithm is based on the Cholesky factorization of the covariance/precision matrix. The time complexity of the algorithm is $\mathcal{O}(p^2n + p^3)$, where $p$ and $n$ are the numbers of nodes and samples, respectively. Under proper assumptions, we show that our algorithm takes $\mathcal{O}(\log(p/\epsilon))$ samples to exactly recover the DAG structure with probability at least $1-\epsilon$. In both time and sample complexities, our algorithm is better than previous algorithms. On synthetic and real-world data sets, our algorithm is significantly faster than previous methods and achieves state-of-the-art performance. | Reject | This paper proposes an algorithm for learning linear SEMs via the Cholesky factorization and provides a detailed theoretical analysis of the algorithm. After an extensive discussion and clarification from the authors, there was a consensus that the theoretical results are incremental compared to existing work and many of the claims need additional context in light of existing work. In particular, I recommend the authors pay careful attention to the presentation of the sample complexity bounds, which were revealed to be substantially weaker than initially claimed, and to validate these bounds with careful experiments. | train | [
"HuQhQQvkpQp",
"jZeFI95tMo",
"cuf34xrFbO0",
"vpdKlmySdIb",
"B_nGTFodhR3"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a new algorithm for learning linear SEMs using Cholesky factorization of the covariance matrix induced by the SEM. While the paper essentially follows the ideas in Ghoshal and Honorio 2018 and Chen et al 2019, the main innovation is combining the order search step followed by parent set recovery... | [
3,
3,
6,
3,
6
] | [
5,
5,
2,
4,
3
] | [
"iclr_2022_xRK8xgFuiu",
"iclr_2022_xRK8xgFuiu",
"iclr_2022_xRK8xgFuiu",
"iclr_2022_xRK8xgFuiu",
"iclr_2022_xRK8xgFuiu"
] |
iclr_2022_QhHMf5J5Jom | A Scaling Law for Syn-to-Real Transfer: How Much Is Your Pre-training Effective? | Synthetic-to-real transfer learning is a framework in which a synthetically generated dataset is used to pre-train a model to improve its performance on real vision tasks. The most significant advantage of using synthetic images is that the ground-truth labels are automatically available, enabling unlimited data size expansion without human cost. However, synthetic data may have a huge domain gap, in which case increasing the data size does not improve the performance. How can we know that? In this study, we derive a simple scaling law that predicts the performance from the amount of pre-training data. By estimating the parameters of the law, we can judge whether we should increase the data or change the setting of image synthesis. Further, we analyze the theory of transfer learning by considering learning dynamics and confirm that the derived generalization bound is compatible with our empirical findings. We empirically validated our scaling law on various experimental settings of benchmark tasks, model sizes, and complexities of synthetic images. | Reject | This paper discusses an empirical scaling law in terms of samples needed for pretraining for effective downstream transfer. The reviewers liked the premise but had major concerns with the evaluation and requested some clarifications about the empirical choices made. The paper initially received reviews tending towards rejection. The authors provided a thoughtful rebuttal that addressed some of the questions. The paper was discussed heavily and all the reviewers updated their reviews in the post-rebuttal phase. In conclusion, all reviewers still believed that their concerns regarding empirical evaluation, like why evaluate only sim2real transfer, etc., still stand. AC agrees with the reviewers' consensus and encourages the authors to take the feedback into account for future submissions. | train | [
"NPhjE9PMmKx",
"yY_nG0FJdg",
"btIQyX56ikV",
"nsVYeJ7pD4_",
"QJFWdeG08Dd",
"rQ6gkwr_hGG",
"zivcbLfjPbt",
"zpOkdeszrZ",
"0BPfCcMllCc",
"f5nKPHyH-xp",
"FP-XJvLMLWG",
"muyt7xVgzBq",
"AGByA7qAla",
"MPyh765UlgA",
"WvPRcYWWrbQ",
"ajMSXGSCKml"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" I've detailed in my main review the list of concerns that still remain after rebuttal.\n\nI hope this helps to improve your paper.",
"This paper proposes a scaling law for transfer learning that takes into account the size of both the pretraining and finetuning datasets. They showed the parameters of the scali... | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"QJFWdeG08Dd",
"iclr_2022_QhHMf5J5Jom",
"zpOkdeszrZ",
"rQ6gkwr_hGG",
"zivcbLfjPbt",
"0BPfCcMllCc",
"yY_nG0FJdg",
"muyt7xVgzBq",
"WvPRcYWWrbQ",
"yY_nG0FJdg",
"yY_nG0FJdg",
"ajMSXGSCKml",
"iclr_2022_QhHMf5J5Jom",
"iclr_2022_QhHMf5J5Jom",
"iclr_2022_QhHMf5J5Jom",
"iclr_2022_QhHMf5J5Jom"
] |
iclr_2022_zxEfpcmTDnF | Learning and controlling the source-filter representation of speech with a variational autoencoder | Understanding and controlling latent representations in deep generative models is a challenging yet important problem for analyzing, transforming and generating various types of data. In speech processing, inspired by the anatomical mechanisms of phonation, the source-filter model considers that speech signals are produced from a few independent and physically meaningful continuous latent factors, among which the fundamental frequency and the formants are of primary importance. In this work, we show that the source-filter model of speech production naturally arises in the latent space of a variational autoencoder (VAE) trained in an unsupervised fashion on a dataset of natural speech signals. Using speech signals generated with an artificial speech synthesizer, we experimentally demonstrate that the fundamental frequency and formant frequencies are encoded in orthogonal subspaces of the VAE latent space and we develop a weakly-supervised method to accurately and independently control these speech factors of variation within the learned latent subspaces. Without requiring additional information such as text or human-labeled data, we propose a deep generative model of speech spectrograms that is conditioned on the fundamental frequency and formant frequencies, and which is applied to the transformation of speech signals. | Reject | This work shows that the source-filter model of speech production naturally arises in the latent space of a variational autoencoder (VAE). It is interesting that the fundamental frequency and formant frequencies are encoded in orthogonal subspaces of the VAE latent space -- this opens up a possible way of easily controlling these.
The key motivation/goal of the paper has caused some confusion. The abstract highlights an observation about VAE’s learned representation. In retrospect, some reviewers did not find the findings very surprising. On the other hand, the authors also do not attempt to develop and evaluate a speech generation method. As is, the paper seems much more suitable for a specialized workshop on speech. Alternatively, the paper could be extended to other modalities to show steerability of a representation using a synthetic dataset. However, the current scope seems to be somewhat limited, hence I am not able to recommend the current manuscript for acceptance. | train | [
"DTy5yZ1Waow",
"56VxUr-PO5h",
"jNzLKrKxn5y",
"D0HViznyxVO",
"-FDe0IbTpS",
"ykG9jchkfd",
"hE4oUufb-B-",
"mFA48Alz64_",
"rDvlrgohJe2",
"hbAidjYtZ3R",
"dmsguLC6z12",
"-xKT0ecAnH",
"PDA22CvGVWF",
"NlKhpX5h5At",
"ax96UgPRKBa",
"fRw37tJ8uwv",
"DmvCsvKK-C0",
"z-B1OnWOtp_",
"zuD4vNVLIxz"... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I read the authors' response and other reviews. Thanks for the responses to my review comments.\n\nThe issue about having separate embeddings for each fj is a valid approach if the aim of the paper is to develop a system that allows f0 and formant modification. However the authors claim to have a scientific curio... | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"hE4oUufb-B-",
"iclr_2022_zxEfpcmTDnF",
"-xKT0ecAnH",
"dmsguLC6z12",
"hbAidjYtZ3R",
"DmvCsvKK-C0",
"DmvCsvKK-C0",
"iclr_2022_zxEfpcmTDnF",
"zuD4vNVLIxz",
"zuD4vNVLIxz",
"z-B1OnWOtp_",
"56VxUr-PO5h",
"56VxUr-PO5h",
"56VxUr-PO5h",
"56VxUr-PO5h",
"56VxUr-PO5h",
"iclr_2022_zxEfpcmTDnF",
... |
iclr_2022_uydP1ykieNv | Ensemble-in-One: Learning Ensemble within Random Gated Networks for Enhanced Adversarial Robustness | Adversarial attacks have threatened modern deep learning systems by crafting adversarial examples with small perturbations to fool the convolutional neural networks (CNNs). Ensemble training methods are promising to facilitate better adversarial robustness by diversifying the vulnerabilities among the sub-models, simultaneously maintaining comparable accuracy as standard training. Previous practices also demonstrate that enlarging the ensemble can improve the robustness. However, existing ensemble methods suffer from poor scalability, owing to the rapid complexity increase when including more sub-models in the ensemble. Moreover, it is usually infeasible to train or deploy an ensemble with substantial sub-models, owing to the tight hardware resource budget and latency requirement. In this work, we propose Ensemble-in-One (EIO), a simple but effective method to enlarge the ensemble within a random gated network (RGN). EIO augments the original model by replacing the parameterized layers with multi-path random gated blocks (RGBs) to construct an RGN. By diversifying the vulnerability of the numerous paths through the super-net, it provides high scalability because the paths within an RGN exponentially increase with the network depth. Our experiments demonstrate that EIO consistently outperforms previous ensemble training methods with even less computational overhead, simultaneously achieving better accuracy-robustness trade-offs than adversarial training. | Reject | The paper proposes a stochastic network, named Ensemble-in-One (EIO), to increase adversarial robustness. EIO replaces the layers in a given architecture by so-called random gated blocks (RGBs) in which a random gate switches between multiple copies of the original layers. By sampling the random gates, different subnetworks can be obtained, which can be arranged to form an ensemble. 
During training, non-robust feature distillation (as proposed in previous work) between models is applied. For inference in the experiments, a single subnetwork is sampled, and the robustness of that subnetwork is compared against several ensemble methods and adversarial training.
One reviewer was worried about model capacity and recommended performing experiments on ImageNet to demonstrate scalability to large datasets. In turn, the authors added experiments on CIFAR-100 during the rebuttal period. Another critique was that the model does not show a significant advantage over vanilla adversarial training (AT), which can easily be tuned with different perturbation strengths and only takes half of the training time. While other ensemble techniques, like DVERGE, can be combined with AT to improve their robustness, combining EIO with AT does not lead to improvements, as shown by experiments performed during the rebuttal period. Two reviewers stated that adding a theoretical analysis would improve the paper. Another suggestion for improving the paper was to add a comparison to stochastic path networks, which is related work, and to investigate model performance when results from several sub-networks are aggregated.
Overall, the paper can not be accepted in its current state, but I would recommend the authors to continue the direction of work and to incorporate reviewers suggestions in a future version of the manuscript. | train | [
"luRxFYCaDoh",
"t4pykbtwvmo",
"Y5zRakQTX4k",
"bn1MLp8qlOo",
"ZB8F7L3VH5M",
"laPeQ9OFKXG",
"iNC9YPXFCvB",
"9TC1m20QZkR",
"5gBWTyK6DKs",
"46xJx3WhiJK",
"HgA1l2CK70",
"jT9RBnT4Upz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposed a new way to generate an ensemble of networks against adversarial attacks. Different from other methods, which train different sub-models, the proposed method repeats convolution layers multiple times and controls them with random gates. The experiment demonstrates that it outperforms other en... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"iclr_2022_uydP1ykieNv",
"9TC1m20QZkR",
"HgA1l2CK70",
"46xJx3WhiJK",
"laPeQ9OFKXG",
"jT9RBnT4Upz",
"HgA1l2CK70",
"5gBWTyK6DKs",
"luRxFYCaDoh",
"iclr_2022_uydP1ykieNv",
"iclr_2022_uydP1ykieNv",
"iclr_2022_uydP1ykieNv"
] |
iclr_2022_qynwf18DgXM | Manifold Micro-Surgery with Linearly Nearly Euclidean Metrics | The Ricci flow is a method of manifold surgery, which can trim manifolds to be more regular. However, in most cases, the Ricci flow tends to develop singularities and lead to divergence of the solution. In this paper, we propose linearly nearly Euclidean metrics to assist manifold micro-surgery, which means that we prove the dynamical stability and convergence of such metrics under the Ricci-DeTurck flow. From the information geometry and mirror descent points of view, we give the approximation of the steepest descent gradient flow on the linearly nearly Euclidean manifold with dynamical stability. In practice, the regular shrinking or expanding of Ricci solitons with linearly nearly Euclidean metrics will provide a geometric optimization method for the solution on a manifold. | Reject | Ricci flow is a central topic in geometric analysis. It has had stunning applications in mathematics, most notably the proof of the Poincaré conjecture. The major issue is that, while it can be used to make a manifold more well-behaved, it frequently develops singularities. The main contribution of this paper is in introducing linearly nearly Euclidean metrics. They give a proof of convergence in both short and infinite time, under the Ricci-DeTurck flow, and exploit connections to information geometry and mirror descent to develop methods for approximating the gradient flow. The paper is confusingly written (compounded by poor organizational structure and many grammatical mistakes). Perhaps the biggest issue is that it does not have any clear relevance to machine learning. Some sections mention connections to neural networks, but the reviewers found these sections to be indecipherable.
"uHcCsn_Vz-t",
"lKHWQIhjhug",
"1R8vRphCwPh",
"SHhN2HDuYj",
"bQvyzlahQeb",
"heNrgnenSV_",
"mibfe7LvOGP",
"fLe3Pz8LJm",
"5vIakQ0xxRB",
"mAHMAkPFryY"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Well, the focus of the paper is never to compare with the Euclidean space, and the metric is not to converge to Euclidean metric, but to linearly nearly Euclidean metric. We only found the convergence of the metric for a neural network in the linearly nearly Euclidean space consistent with the behavior of Ricci f... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
3
] | [
"lKHWQIhjhug",
"1R8vRphCwPh",
"SHhN2HDuYj",
"mibfe7LvOGP",
"5vIakQ0xxRB",
"mAHMAkPFryY",
"fLe3Pz8LJm",
"iclr_2022_qynwf18DgXM",
"iclr_2022_qynwf18DgXM",
"iclr_2022_qynwf18DgXM"
] |
iclr_2022_1JN7MepVDFv | On the relationship between disentanglement and multi-task learning | One of the main arguments behind studying disentangled representations is the assumption that they can be easily reused in different tasks. At the same time finding a joint, adaptable representation of data is one of the key challenges in the multi-task learning setting. In this paper, we take a closer look at the relationship between disentanglement and multi-task learning based on hard parameter sharing. We perform a thorough empirical study of the representations obtained by neural networks trained on automatically generated supervised tasks. Using a set of standard metrics we show that disentanglement appears in a natural way during the process of multi-task neural network training. | Reject | This paper aims to look at the relationship between disentanglement
and multi-task learning. The authors claim to show that disentanglement
emerges naturally from MTL.
The main discussion was whether the claim that disentanglement emerges
naturally from MTL has been adequately demonstrated. The main
issue is that MTL results in more extraction of information and that
is hard to disentangle from the disentanglement metrics used.
Reviewers agreed the work was interesting but not as complete as would
be desirable. I also feel it is not ready for ICLR presentation, but
with further work could be a nice future contribution. | train | [
"0jAPXxzkaMN",
"TmnZM73rUl",
"PkBfyABDuQz",
"O05KBc4BHdt",
"nWsM0KdxWxY",
"gIZ1gCoEr08",
"KSb29Ei3dvm",
"Flo5o1x0SY",
"zcQ1isEk7H",
"y0G-KLPqpEU",
"NYM7w9VZ5wL",
"EtBibP8Sew9",
"28GvhWEzP3",
"N6et8kwivlh",
"rmOdWD1GYiJ",
"CqUxqcJ-mlA",
"0LKpQ0n58Y",
"0YFWNwOVHej"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies the relationship between disentanglement and multi-task learning (hard parameter sharing) via empirical study. The authors very carefully examined if multi-task learning encourages disentanglement. The authors performed an extensive empirical study and looked at different metrics on a couple of ... | [
6,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6
] | [
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2022_1JN7MepVDFv",
"gIZ1gCoEr08",
"iclr_2022_1JN7MepVDFv",
"N6et8kwivlh",
"KSb29Ei3dvm",
"y0G-KLPqpEU",
"EtBibP8Sew9",
"N6et8kwivlh",
"CqUxqcJ-mlA",
"NYM7w9VZ5wL",
"PkBfyABDuQz",
"28GvhWEzP3",
"0YFWNwOVHej",
"rmOdWD1GYiJ",
"0LKpQ0n58Y",
"0jAPXxzkaMN",
"iclr_2022_1JN7MepVDFv",
... |
iclr_2022_nsjkNB2oKsQ | Off-Policy Reinforcement Learning with Delayed Rewards | We study deep reinforcement learning (RL) algorithms with delayed rewards. In many real-world tasks, instant rewards are often not readily accessible or even defined immediately after the agent performs actions. In this work, we first formally define the environment with delayed rewards and discuss the challenges raised due to the non-Markovian nature of such environments. Then, we introduce a general off-policy RL framework with a new $Q$-function formulation that can handle the delayed rewards with theoretical convergence guarantees. For practical tasks with high dimensional state spaces, we further introduce the HC-decomposition rule of the $Q$-function in our framework which naturally leads to an approximation scheme that helps boost the training efficiency and stability. We finally conduct extensive experiments to demonstrate the superior performance of our algorithms over the existing work and their variants. | Reject | The paper introduces an interesting new model for MDPs, where the time is divided into random segments, and at the end of each segment the cumulative reward for the given segment is communicated to the agent. Some theoretical results with a policy improvement algorithm, as well as a more practical algorithm are presented. While the reviewers valued these contributions, they all had issues with the presentation of the paper.
These presentation issues make the paper extremely hard to follow -- this was a problem for all reviewers, and I also verified it myself. The reviewers also raised issues regarding the experiments, where the algorithms should be tuned properly to be able to draw valid conclusions.
While unfortunately the above issues prevent me from recommending acceptance of the paper, the authors are strongly encouraged to revise their paper and resubmit to the next venue, with a special emphasis on making the presentation proper. There are several problems/recommendations mentioned in the reviews which will certainly help in this regard (I would also add that special care should be made that everything is defined properly, e.g., the equation for your policy iteration should appear in the main text not in a proof in the appendix, or $\hat{Q}_\phi$ should be defined, etc.). | train | [
"YKtxXTjrRka",
"gzhRQ7gPt30",
"DAWiQanVS7U",
"D4L0gL7Hvii"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This work considers a modified MDP model where the rewards are subject to random delays. Specifically, the timesteps are divided into random batches and the agent observes the cumulative rewards of the batch when it ends. For such a model, Markov property does not hold and classical methods do not apply. The autho... | [
5,
3,
3,
3
] | [
4,
4,
4,
4
] | [
"iclr_2022_nsjkNB2oKsQ",
"iclr_2022_nsjkNB2oKsQ",
"iclr_2022_nsjkNB2oKsQ",
"iclr_2022_nsjkNB2oKsQ"
] |
iclr_2022_tsg-Lf1MYp | Natural Attribute-based Shift Detection | Despite the impressive performance of deep networks in vision, language, and healthcare, unpredictable behaviors on samples from the distribution different than the training distribution cause severe problems in deployment. For better reliability of neural-network-based classifiers, we define a new task, natural attribute-based shift (NAS) detection, to detect the samples shifted from the training distribution by some natural attribute such as age of subjects or brightness of images. Using the natural attributes present in existing datasets, we introduce benchmark datasets in vision, language, and medical for NAS detection. Further, we conduct an extensive evaluation of prior representative out-of-distribution (OOD) detection methods on NAS datasets and observe an inconsistency in their performance. To understand this, we provide an analysis on the relationship between the location of NAS samples in the feature space and the performance of distance- and confidence-based OOD detection methods. Based on the analysis, we split NAS samples into three categories and further suggest a simple modification to the training objective to obtain an improved OOD detection method that is capable of detecting samples from all NAS categories. | Reject | This paper offers "natural attribute" methods for shift detection. Several reviewers are positive, but reviewer czxP is the most authoritative in the eyes of the area chair. In particular the AC is concerned that the task that this paper defines is artificial and not useful. Naturally occurring shifts are real, happen all the time and meaningfully affect model performance... Natural shifts should be defined over distributions not singular instances. The authors create artificial instantiations of natural shifts to illustrate a well known flaw in OOD detection algorithms.
To demonstrate the usefulness of this approach to "natural shifts", the authors should show how this algorithm performs in settings like "WILDS"... where the shifts are meaningful and not artificial, in the opinion of the AC. The paper should cite and contrast WILDS (https://arxiv.org/abs/2012.07421) and Mandoline (http://proceedings.mlr.press/v139/chen21i.html) in a future revision.
"9hvqzxOVQCK",
"54yZzQOcdtk",
"ThLIdMKk-FP",
"LLIOas-DwIU",
"CpxTBwpT8f8",
"GhEnCqkP9V2",
"Vesb9Q0-z2n",
"GmTacvC0ed",
"-W_Q27TVCEO",
"EGGNX69EXoc",
"z2Wjynwl12",
"IiKQ3UaJ9_b"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer bsNT,\n\nWe thank you again for taking your precious time to review our paper and for providing us with your valuable and constructive feedback. We would really appreciate a reply as to whether our response and clarifications have addressed the concerns raised in your review. Please let us know if t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"EGGNX69EXoc",
"z2Wjynwl12",
"IiKQ3UaJ9_b",
"IiKQ3UaJ9_b",
"iclr_2022_tsg-Lf1MYp",
"Vesb9Q0-z2n",
"z2Wjynwl12",
"-W_Q27TVCEO",
"EGGNX69EXoc",
"iclr_2022_tsg-Lf1MYp",
"iclr_2022_tsg-Lf1MYp",
"iclr_2022_tsg-Lf1MYp"
] |
iclr_2022_1v1N7Zhmgcx | Maximum Likelihood Training of Parametrized Diffusion Model | Whereas the diverse variations of the diffusion model exist in image synthesis, the previous variations have not innovated the diffusing mechanism by maintaining the static linear diffusion. Meanwhile, it is intuitive that there would be more promising diffusion pattern adapted to the data distribution. This paper introduces such adaptive and nonlinear diffusion method for the score-based diffusion models. Unlike the static and linear VE-or-VP SDEs of the previous diffusion models, our parameterized diffusion model (PDM) learns the optimal diffusion process by combining the normalizing flow ahead of the diffusion process. Specifically, PDM utilizes the flow to non-linearly transform a data variable into a latent variable, and PDM applies the diffusion process to the transformed latent distribution with the linear diffusing mechanism. Subsequently, PDM enjoys the nonlinear and learned diffusion from the perspective of the data variable. This model structure is feasible because of the invertibility of the flow. We train PDM with the variational proxy of the log-likelihood, and we prove that the variational gap between the variational bound and the log-likelihood becomes tight when the normalizing flow becomes the optimal. | Reject | This paper presents a simple approach called PDM for composing non-linear and complex normalizing flows with score-based generative models. Since score-based models can be considered as a special form of continuous-time normalizing flows, PDM corresponds to a composition of different classes of normalizing flows.
Pros:
* Combining generic normalizing flows with score-based models is an interesting direction as they have different characteristics and can be complementary to each other.
* Using Ito's lemma to show that the model learns a non-linear SDE in data space is valuable.
* The authors show that the variational gap can be reduced using normalizing flows.
Cons:
* The proposed method does not exhibit a clear advantage compared to the diffusion baseline without the normalizing flow component. On the CIFAR10 dataset, the best NLL and FID results are obtained by the diffusion baseline.
* Theorem 2 makes a very unrealistic assumption that a flow network is flexible enough to transform $p_r$ to any arbitrary distribution. If this holds, we wouldn't need the score-based generation model anymore. We could simply train the normalizing flow to map the input data distribution to a Normal distribution.
* This submission chooses to discuss differences with the recent LSGM framework. However, in doing so, several inaccurate claims are made. The lack of inference data diffusion in LSGM is mentioned as one of its drawbacks. However, it is not clear what is the value of having such a mechanism and what implications it may have on the expressivity of the model. Note that mapping from data space to latent space in VAEs can be considered as a stochastic inversion rather than an exact inversion. Ito's lemma does not require invertibility and it can be easily applied to the forward and generative diffusion in LSGM. The authors argue that applying it to the forward diffusion in LSGM will result in $\hat{p_{r}}\ne p_{r}$. But, $\hat{p_{r}}$ would be only considered for visualization of the forward diffusion and it is not used for training or any other purposes. LSGM, the proposed PDM, and score-based models are all trained with a reweighting of ELBO (see [here](https://arxiv.org/abs/2106.02808)). It is not clear if the drawback mentioned above has an impact on the training or expressivity of the model.
* The presentation in the paper requires improvement. The motivation on why invertibility plays a key role is not clear beyond generating the visualization in Figure 2.
In summary, the paper proposes an interesting idea and explores directions very relevant to the current focus in generative learning. However, given the concerns above, we don't believe that the paper in its current form is ready for presentation at ICLR. | train | [
"fhont96AVa",
"3QLkqL_y38",
"rQ89nz-1rAk",
"P7-joPb4BdF",
"gJ1h3Z4o8B",
"DhQtWWSv7r0",
"92JSBFcuh1W",
"Hv5z8jeF-LZ",
"_mlyQnaU3c",
"dhkpvdlg9Kq",
"ozFccUodRB",
"8a2cRoJdfpz",
"9bqODOKKSmm",
"3_JW_5xr7t6",
"irsSw_7nB2v",
"azI2-Q6W5-4",
"4i6hYLUI7v",
"h7b9CyzUWX2",
"RgkaiFIRon",
... | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",... | [
" I would like to thank the authors for their further explanations and acknowledge that I have read them. However, my raised concerns mostly remain. Therefore, I will keep my score.",
" One thing really confusing is that ScoreFlow also uses a normalizing flow. Regardless of the naming motivation of ScoreFlow, the... | [
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6
] | [
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"Hv5z8jeF-LZ",
"_mlyQnaU3c",
"_mlyQnaU3c",
"_mlyQnaU3c",
"iclr_2022_1v1N7Zhmgcx",
"iclr_2022_1v1N7Zhmgcx",
"ozFccUodRB",
"ozFccUodRB",
"RgkaiFIRon",
"iclr_2022_1v1N7Zhmgcx",
"3_JW_5xr7t6",
"iclr_2022_1v1N7Zhmgcx",
"gJ1h3Z4o8B",
"dhkpvdlg9Kq",
"dhkpvdlg9Kq",
"D6Asjzsvslx",
"D6Asjzsvsl... |
iclr_2022_T8BnDXDTcFZ | Accelerating Training of Deep Spiking Neural Networks with Parameter Initialization | Despite that spiking neural networks (SNNs) show strong advantages in information encoding, power consuming, and computational capability, the underdevelopment of supervised learning algorithms is still a hindrance for training SNN. Our consideration is that proper weight initialization is a pivotal issue for efficient SNN training. It greatly influences gradient generating with the method of back-propagation through time at the initial training stage. Focusing on the properties of spiking neurons, we first derive the asymptotic formula of their response curve approximating the actual neuron response distribution. Then, we propose an initialization method obtained from the slant asymptote to overcome gradient vanishing. Finally, experiments with different coding schemes on classification tasks show that our method can effectively improve training speed and the final model accuracy compared with traditional deep learning initialization methods and existing SNN initialization methods. Further validation on different neuron types and training hyper-parameters has shown comparably good versatility and superiority over the other methods. Some suggestions are given to SNN training based on the analyses. | Reject | The paper derives a new parameter initialization for deep spiking neural networks to overcome the vanishing gradient problem.
During the review, concerns were expressed about how well the method would scale to larger neural networks. It was also questioned how this parameter initialization technique compares with a recently proposed batch normalization technique, especially when training larger neural network on more challenging datasets. There were also concerns raised about the readability of the paper.
I commend the authors for improving the readability of their paper in their revision. I also commend them for taking the time to implement the comparisons requested by the reviewers. These new comparisons revealed that batch normalization and its recently proposed variant were superior to the initialization method on its own, and that the initialization proposed in the paper did not significantly improve performance when paired with batch norm [[1](https://openreview.net/forum?id=T8BnDXDTcFZ&noteId=yIAPcSbUAQ0)]. The authors also acknowledged, based on the new results, that their proposed parameter initialization scheme appears to fail to scale to more complex datasets and networks, especially relative to competing methods, which invalidates a key claim that their approach can "accelerate training and get better accuracy compared with existing methods" [[2](https://openreview.net/forum?id=T8BnDXDTcFZ&noteId=j12fwayWEb)].
The recommendation is to reject the paper in its current form. | test | [
"8ZephAuqaHA",
"j12fwayWEb",
"pz5Al9bl90G",
"1O5JTrVoVLB",
"GA2GqjEpCH",
"2RbkiTQCrHq",
"yIAPcSbUAQ0",
"8n-Y1ph4luv",
"7IholsEb7Mu",
"EqD7RNmXetN",
"jyV8bO5wxU",
"wFh0s3O1D2"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would like to express our great appreciation for your reading and patience on our response, as well as your careful and objective comments on our work. We also sincerely accept your judgment after the review period.\n\nFor in-depth exploration on training of spiking neural networks, we always have been maintai... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
6,
6,
5
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"pz5Al9bl90G",
"iclr_2022_T8BnDXDTcFZ",
"2RbkiTQCrHq",
"iclr_2022_T8BnDXDTcFZ",
"EqD7RNmXetN",
"1O5JTrVoVLB",
"wFh0s3O1D2",
"wFh0s3O1D2",
"jyV8bO5wxU",
"iclr_2022_T8BnDXDTcFZ",
"iclr_2022_T8BnDXDTcFZ",
"iclr_2022_T8BnDXDTcFZ"
] |
iclr_2022_u7PVCewFya | Losing Less: A Loss for Differentially Private Deep Learning | Differentially Private Stochastic Gradient Descent, DP-SGD, is the canonical approach to training deep neural networks with guarantees of Differential Privacy (DP). However, the modifications DP-SGD introduces to vanilla gradient descent negatively impact the accuracy of deep neural networks. In this paper, we are the first to observe that some of this performance can be recovered when training with a loss tailored to DP-SGD; we challenge cross-entropy as the de facto loss for deep learning with DP. Specifically, we introduce a loss combining three terms: the summed squared error, the focal loss, and a regularization penalty. The first term encourages learning with faster convergence. The second term emphasizes hard-to-learn examples in the later stages of training. Both are beneficial because the privacy cost of learning increases with every step of DP-SGD. The third term helps control the sensitivity of learning, decreasing the bias introduced by gradient clipping in DP-SGD. Using our loss function, we achieve new state-of-the-art tradeoffs between privacy and accuracy on MNIST, FashionMNIST, and CIFAR10. Most importantly, we improve the accuracy of DP-SGD on CIFAR10 by $4\%$ for a DP guarantee of $\varepsilon=3$. | Reject | The reviewers all seemed to agree that the investigation of other losses is an interesting direction of study, and acknowledged there was some empirical performance improvement for standard computer vision tasks. However, they felt the justification of the specific form of loss was a bit shaky and heuristic, and were furthermore unconvinced by results exclusively for image classification (one reviewer was unmoved by the magnitude of improvement). This was a borderline decision, but we hope the authors refine and resubmit their work as this is an interesting but underexplored direction within DPML.
As one recent related work which investigates the effect of other architecture differences in the DP setting, the authors may be interested in https://arxiv.org/abs/2110.08557. | train | [
"0XcZcKE5BRK",
"DFhBaqJOH7W",
"6g356CsxFKo",
"TaP_O9YOqSj",
"otPX9MVlShm",
"Sgyzs-nkV1",
"L8vFHNVtNZy",
"1Z7-w7cOBuW",
"4xubzROqF14",
"I0FJOlIkghH",
"ytg0JLchjlC",
"luWkwr9PtVK",
"bMA1SdtZXW",
"Jtrgr5gyP-"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a new loss function to improve the performance of neural network models trained by DP-SGD. The new loss function is a weighted average of the sum of squared error, the focal loss, and a penalty on the squared norm of the pre-activation output of different layers. The new loss achieves state-of-t... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
2
] | [
"iclr_2022_u7PVCewFya",
"1Z7-w7cOBuW",
"TaP_O9YOqSj",
"otPX9MVlShm",
"I0FJOlIkghH",
"iclr_2022_u7PVCewFya",
"4xubzROqF14",
"0XcZcKE5BRK",
"bMA1SdtZXW",
"luWkwr9PtVK",
"Jtrgr5gyP-",
"iclr_2022_u7PVCewFya",
"iclr_2022_u7PVCewFya",
"iclr_2022_u7PVCewFya"
] |
iclr_2022_B0JH7vR2iGh | PMIC: Improving Multi-Agent Reinforcement Learning with Progressive Mutual Information Collaboration | Learning to collaborate is critical in multi-agent reinforcement learning (MARL). A branch of previous works proposes to promote collaboration by maximizing the correlation of agents’ behaviors, which is typically characterised by mutual information (MI) in different forms. However, simply maximizing the MI of agents’ behaviors cannot guarantee achieving better collaboration because suboptimal collaboration can also lead to high MI. In this paper, we first propose a new collaboration criterion to evaluate collaboration from three perspectives, which arrives at a form of the mutual information between global state and joint policy. This bypasses the introduction of explicit additional input of policies and mitigates the scalability issue meanwhile. Moreover, to better leverage MI-based collaboration signals, we propose a novel MARL framework, called Progressive Mutual Information Collaboration (PMIC) which contains two main components. The first component is Dual Progressive Collaboration Buffer (DPCB) which separately stores superior and inferior trajectories in a progressive manner. The second component is Dual Mutual Information Estimator (DMIE), including two neural estimators of our new designed MI based on separate samples in DPCB. We then make use of the neural MI estimates to improve agents' policies: to maximize the MI lower bound associated with superior collaboration to facilitate better collaboration and to minimize the MI upper bound associated with inferior collaboration to avoid falling into local optimal. PMIC is general and can be combined with existing MARL algorithms. Experiments on a wide range of MARL benchmarks show the superior performance of PMIC compared with other MARL algorithms. | Reject | This paper includes an interesting idea of pushing towards good, and away from bad, trajectories, in a natural clean way.
The main problem of the paper is one of clarity. The paper could be written to be more concise and clear, which would allow, for instance, for sufficient space for the figures (which are currently sometimes rather tiny) as well as not having to fiddle with the margins and spaces quite as much as the current submission seems to do (which would be strictly disallowed at most conferences). The issue of clarity was also clear during discussion, where sometimes multiple rounds of clarifications were needed to allow the reviewers to correctly interpret parts.
For these reasons, I recommend that the authors resubmit a new, cleaned-up, version of the work, with all the changes neatly incorporated. Then I think this could make for a nice addition to the literature.
I appreciate this will be a disappointment to the authors, but I think ultimately it will make their work more impactful, and longer-lasting. | train | [
"IPZsgod96EW",
"kfzXiEizTNo",
"IAd_3LI6Wg",
"FzkzWXOZI1S",
"hr7DK-pVded",
"qcIETwpfbVZ",
"gbvnlJ3MgW",
"y46qzi5YNs3",
"Pi5UwOvaZy",
"TKXEC7rjavv",
"mTf_fctOYF",
"EnzxjQbGI-V",
"t-iDgKqO1Y-",
"bVUE5IeljG9",
"XmHl1XkDq68",
"fc5QrKciTkO",
"Z7UQrVIWnhw",
"dsXLEX_sIbr",
"HlQDswXHXWS",... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
... | [
" \nWe appreciate the reviewer's quick response very much!\nFor the remaining concern, we provide some more discussion on it below.\n\nWe have to clarify that, when we discuss $I(s;\\pi(u|s))$ in Section 3.2, we consider the general case of multiple agents' policies. We agree that $-H(\\pi_{-i}|\\pi_{i},s)$ reduces... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"kfzXiEizTNo",
"IAd_3LI6Wg",
"FzkzWXOZI1S",
"dsXLEX_sIbr",
"VFhndk9ngV2",
"gbvnlJ3MgW",
"iclr_2022_B0JH7vR2iGh",
"iclr_2022_B0JH7vR2iGh",
"TKXEC7rjavv",
"mTf_fctOYF",
"Z7UQrVIWnhw",
"iclr_2022_B0JH7vR2iGh",
"gbvnlJ3MgW",
"gbvnlJ3MgW",
"gbvnlJ3MgW",
"r4Qxy1BOpik",
"y46qzi5YNs3",
"VF... |
iclr_2022_GIEPR9OomyX | Langevin Autoencoders for Learning Deep Latent Variable Models | Markov chain Monte Carlo (MCMC), such as Langevin dynamics, is valid for approximating intractable distributions. However, its usage is limited in the context of deep latent variable models since it is not scalable to data size owing to its datapoint-wise iterations and slow convergence. This paper proposes the amortized Langevin dynamics (ALD), wherein datapoint-wise MCMC iterations are entirely replaced with updates of an inference model that maps observations into latent variables. Since it no longer depends on datapoint-wise iterations, ALD enables scalable inference from large-scale datasets. Despite its efficiency, it retains the excellent property of MCMC; we prove that ALD has the target posterior as a stationary distribution with a mild assumption. Furthermore, ALD can be extended to sampling from an unconditional distribution such as an energy-based model, enabling more flexible generative modeling by applying it to the prior distribution of the latent variable. Based on ALD, we construct a new deep latent variable model named the Langevin autoencoder (LAE). LAE uses ALD for autoencoder-like posterior inference and sampling from the latent space EBM. Using toy datasets, we empirically validate that ALD can properly obtain samples from target distributions in both conditional and unconditional cases, and ALD converges significantly faster than traditional LD. We also evaluate LAE on the image generation task using three datasets (SVHN, CIFAR-10, and CelebA-HQ). Not only can LAE be trained faster than non-amortized MCMC methods, but LAE can also generate better samples in terms of the Fréchet Inception Distance (FID) compared to AVI-based methods, such as the variational autoencoder. 
| Reject | This paper proposes an amortization strategy for MC sampling from a single chain rather than per-datapoint chains, and uses this strategy to define a new Bayesian autoencoder based on Langevin dynamics.
The reviewers find the line of thought very promising, and a potentially interesting addition to the latent variable literature, while also raising some concerns. The dimension of the single chain must match the dataset size, which limits the computational benefits coming from amortization, and in fact this restriction seems hard, as empirical results (added in the discussion period) are qualitatively worse in the `d<n` case. This could be emphasized much more strongly in the current version, and seems worth deeper investigation. In the discussion, the authors agreed that in the case when the feature matrix G is the identity matrix then there can be no amortization improvement, but for other choices of (fixed) features, amortization *can* yield improvements; this is quite unclear. In addition, in response to the reviewers' observation, the authors improved in the discussion period the implementation of the EBM baseline, leading to much less clear cut differences on metrics. To improve the work further, the authors should clarify the source of amortization improvement, and discuss more the relationship to Bayesian Neural Networks (perhaps by evaluating against Bayesian / hyper-net / hyper-GAN generative models.) | train | [
"GH1FumG9Cjq",
"7AmSDUYxt7G",
"vS63yFItIQl",
"eCnThGocvN4",
"O6Qj74lz8e",
"Ts3ICWX76-i",
"bjVCxjmfhit",
"KmKLAO1mETx",
"E3w0GYI6r5I",
"WMrnUn2UuDR",
"zXJAqFpSiH",
"Cbrz7aYM6Hm",
"SITiyB2Dnf_",
"b1lQKMqxXEj",
"9DyCPQ3VYn",
"j-Q_GvM6t1W",
"Q9iyyf-9VBl",
"NHZtWndeva",
"oNVg8Gji1h6",... | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
... | [
" Thanks for your reply.\n\nWe understand your point, and we think you are correct: only orthogonal subspaces of $\\Phi$ corresponding to minibatch are updated even when $G$ is not an identity matrix.\nHowever, what is important here is that posterior samples of latent variables ($Z$) for *all data points* (not onl... | [
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5
] | [
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"7AmSDUYxt7G",
"eCnThGocvN4",
"iclr_2022_GIEPR9OomyX",
"O6Qj74lz8e",
"Ts3ICWX76-i",
"bjVCxjmfhit",
"KmKLAO1mETx",
"E3w0GYI6r5I",
"b1lQKMqxXEj",
"vS63yFItIQl",
"Nqap9Km0Dt7",
"sLE1tvD9dc6",
"CKfl_fACzft",
"9DyCPQ3VYn",
"oNVg8Gji1h6",
"vS63yFItIQl",
"Nqap9Km0Dt7",
"CKfl_fACzft",
"j... |
iclr_2022_Fl3Mg_MZR- | On Lottery Tickets and Minimal Task Representations in Deep Reinforcement Learning | The lottery ticket hypothesis questions the role of overparameterization in supervised deep learning. But how is the performance of winning lottery tickets affected by the distributional shift inherent to reinforcement learning problems? In this work, we address this question by comparing sparse agents who have to address the non-stationarity of the exploration-exploitation problem with supervised agents trained to imitate an expert. We show that feed-forward networks trained with behavioural cloning compared to reinforcement learning can be pruned to higher levels of sparsity without performance degradation. This suggests that in order to solve the RL-specific distributional shift agents require more degrees of freedom. Using a set of carefully designed baseline conditions, we find that the majority of the lottery ticket effect in both learning paradigms can be attributed to the identified mask rather than the weight initialization. The input layer mask selectively prunes entire input dimensions that turn out to be irrelevant for the task at hand. At a moderate level of sparsity the mask identified by iterative magnitude pruning yields minimal task-relevant representations, i.e., an interpretable inductive bias. Finally, we propose a simple initialization rescaling which promotes the robust identification of sparse task representations in low-dimensional control tasks. | Accept (Spotlight) | This paper studies the Lottery Ticket hypothesis in reinforcement learning for identifying good sparse representations for low-dimensional tasks. The paper received initial reviews that tended towards acceptance. However, the reviewers had some clarification questions and concerns. The authors provided a thoughtful rebuttal. The paper was discussed and most reviewers updated their reviews in the post-rebuttal phase.
Reviewers generally agree that the paper should be accepted but still have good feedback. AC agrees with the reviewers and suggests acceptance. However, the authors are urged to look at reviewers' feedback and incorporate their comments in the camera-ready. | train | [
"euLde50XxCd",
"5IeGAdW4Se",
"yPbakVLi4ko",
"d8DsQu9tAMO",
"SW23aNneTE1",
"UA0Kwo1OISv",
"D5SNgta0PCo",
"iWFZd5WX5XO"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks to the authors for their response, in particular on the clarification of Figure 6. This has adequately addressed my concerns, and I remain happy to recommend the paper for acceptance.",
"**Update after reading the other reviews and authors' responses:** Some valid criticism has been raised and addressed ... | [
-1,
8,
-1,
-1,
-1,
-1,
5,
8
] | [
-1,
4,
-1,
-1,
-1,
-1,
4,
3
] | [
"d8DsQu9tAMO",
"iclr_2022_Fl3Mg_MZR-",
"UA0Kwo1OISv",
"iWFZd5WX5XO",
"D5SNgta0PCo",
"5IeGAdW4Se",
"iclr_2022_Fl3Mg_MZR-",
"iclr_2022_Fl3Mg_MZR-"
] |
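The abstract above refers to masks found by iterative magnitude pruning (IMP). Below is a minimal sketch of the core step of one IMP round, zeroing out the smallest-magnitude weights; the toy weight matrix and 50% sparsity target are illustrative, and a full IMP loop would rewind/retrain and repeat.

```python
import numpy as np

def magnitude_prune_mask(weights, sparsity):
    """Return a binary mask that zeros out the `sparsity` fraction of weights
    with the smallest magnitude (the core step of one IMP round)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return np.ones_like(weights)
    thresh = np.partition(flat, k - 1)[k - 1]       # k-th smallest magnitude
    return (np.abs(weights) > thresh).astype(weights.dtype)

w = np.array([[0.5, -0.01], [0.003, -2.0]])
mask = magnitude_prune_mask(w, sparsity=0.5)        # keeps the two largest-magnitude weights
```

Applied to an input layer, a mask like this can zero out entire input dimensions, which is the "minimal task representation" effect the paper discusses.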
iclr_2022_Clre-Prt128 | Complex-valued deep learning with differential privacy | We present $\zeta$-DP, an extension of differential privacy (DP) to complex-valued functions. After introducing the complex Gaussian mechanism, whose properties we characterise in terms of $(\varepsilon, \delta)$-DP and Rényi-DP, we present $\zeta$-DP stochastic gradient descent ($\zeta$-DP-SGD), a variant of DP-SGD for training complex-valued neural networks. We experimentally evaluate $\zeta$-DP-SGD on three complex-valued tasks, i.e. electrocardiogram classification, speech classification and magnetic resonance imaging (MRI) reconstruction. Moreover, we provide $\zeta$-DP-SGD benchmarks for a large variety of complex-valued activation functions and on a complex-valued variant of the MNIST dataset. Our experiments demonstrate that DP training of complex-valued neural networks is possible with rigorous privacy guarantees and excellent utility. | Reject | The reviewers in general agree that the proposed complex-valued DP method is interesting and novel. However, there are two key concerns due to which the paper might not be ready for publication at ICLR: a. the key technical contribution of the work is not clear, as the methods seem to be a relatively straightforward extension of real-valued DP methods to complex-valued domains.
b. More importantly, the experimental results (and hence the motivating applications) are not convincing and do not strongly support the claims that i) complex data provides more flexibility and hence better models, and ii) the proposed method is accurate.
For example, the accuracy numbers for SpeechCommands even without DP seem quite low: standard methods like matchboxnet for keyword detection have accuracy numbers in the range of 97%. While the work considers a subset of keywords, it would be important to show how the standard methods work on this dataset. If the gap is this large, then the case for using complex-valued datasets itself is weak.
Similarly, on CIFAR10 it seems that the considered architecture is quite poor, as the accuracy is just ~80% while most standard architectures get >93% on the dataset. So the experimental claims of the paper might not hold for practically relevant architectures.
"z9acWr0GoT2",
"pUFzm-dc82y",
"ssz5tVHEz9v",
"Ght1FamJfc_",
"ZgIWAajg9Ub",
"TgG2-wSTq5",
"9UfmcQg7VJ",
"2lwkqkqrM9",
"SIw6E85v0MU"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a new framework, which extends differential privacy to complex-valued functions. The authors name this framework $\\zeta$-DP and introduce their main privacy mechanism, the complex Gaussian mechanism. The authors also show how to adapt the private gradient descent algorithm into a private algori... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2022_Clre-Prt128",
"ZgIWAajg9Ub",
"iclr_2022_Clre-Prt128",
"SIw6E85v0MU",
"TgG2-wSTq5",
"z9acWr0GoT2",
"2lwkqkqrM9",
"iclr_2022_Clre-Prt128",
"iclr_2022_Clre-Prt128"
] |
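The row above centers on a complex Gaussian mechanism. The sketch below shows the generic shape of such a mechanism, adding circularly symmetric complex Gaussian noise to a complex-valued statistic. The calibration of the noise scale to concrete $(\varepsilon, \delta)$ or Rényi-DP guarantees is the paper's contribution and is omitted here; `sigma_mult` is an illustrative placeholder, not the paper's calibration.

```python
import numpy as np

def complex_gaussian_mechanism(z, sensitivity, sigma_mult, rng=None):
    """Privatize a complex-valued statistic by adding circularly symmetric
    complex Gaussian noise (real and imaginary parts are independent Gaussians
    with total variance sigma^2 per entry)."""
    rng = np.random.default_rng(0) if rng is None else rng
    sigma = sigma_mult * sensitivity
    noise = (rng.normal(0.0, sigma / np.sqrt(2), z.shape)
             + 1j * rng.normal(0.0, sigma / np.sqrt(2), z.shape))
    return z + noise

z = np.array([1 + 1j, 2 - 0.5j])
z_priv = complex_gaussian_mechanism(z, sensitivity=1.0, sigma_mult=1.0)
```

In a DP-SGD-style training loop, `z` would be a clipped per-example gradient of a complex-valued network.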
iclr_2022_cdZLe5S0ur | AQUILA: Communication Efficient Federated Learning with Adaptive Quantization of Lazily-Aggregated Gradients | The development and deployment of federated learning (FL) have been bottlenecked by the heavy communication overheads of high-dimensional models between the distributed client nodes and the central server. To achieve better error-communication tradeoffs, recent efforts have been made to either adaptively reduce the communication frequency by skipping unimportant updates, a.k.a. lazily-aggregated quantization (LAQ), or adjust the quantization bits for each communication. In this paper, we propose a unifying communication efficient framework for FL based on adaptive quantization of lazily-aggregated gradients (AQUILA), which adaptively adjusts two mutually-dependent factors, the communication frequency and the quantization level, in a synergistic way. Specifically, we start from a careful investigation on the classical LAQ scheme and formulate AQUILA as an optimization problem where the optimal quantization level per communication is selected by minimizing the gradient loss caused by updates skipping. Meanwhile, we adjust the LAQ strategy to better fit the novel quantization criterion and thus keep the communication frequency at an appropriate level. The effectiveness and convergence of the proposed AQUILA framework are theoretically verified. The experimental results demonstrate that AQUILA can reduce around 50% of overall transmitted bits compared to existing methods while achieving the same level of model accuracy in a number of non-homogeneous FL scenarios, including Non-IID data distribution and heterogeneous model architecture. The proposed AQUILA is highly adaptive and compatible to existing FL settings. | Reject | Two of the initial reviews of the paper were mildly positive (2 scores of 6), and one was very positive (score of 8). 
However, these reviews failed to notice some severe issues with the paper, which were detailed by the Area Chair in an Extra Review which was provided late. The severe issues include clarity of exposition (undefined notation in many places) and theory (vacuous or meaningless theorems and assumptions). I apologize to the authors for their not having had the chance to defend against this late review. However, the issues are indeed severe.
"ugrqKquAF4i",
"pF25UUPmdcE",
"LSZzsaBTfgl",
"HgSbiPCt2t",
"99-9Cdgdo_",
"SZoY8MidhkB",
"A0imvM3D8Sq",
"Z7DnQiUbik6",
"NR2xa1QNCz",
"48Q-ymnbQv5"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" 3. The authors do not compare against the state of the art methods that use communication compression. In the strongly convex case, accelerated methods exist. Granted, they do not use adaptive quantization, nor lazy aggregation. But the combination of acceleration and quantization with a fixed number of quantizat... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"iclr_2022_cdZLe5S0ur",
"iclr_2022_cdZLe5S0ur",
"iclr_2022_cdZLe5S0ur",
"99-9Cdgdo_",
"48Q-ymnbQv5",
"NR2xa1QNCz",
"Z7DnQiUbik6",
"iclr_2022_cdZLe5S0ur",
"iclr_2022_cdZLe5S0ur",
"iclr_2022_cdZLe5S0ur"
] |
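The AQUILA row above combines update skipping with adaptive quantization. Below is a toy sketch of the two ingredients in their simplest fixed forms, uniform b-bit quantization plus a lazy skip rule; AQUILA's actual adaptive choice of the quantization level and its skipping criterion are more involved, and `skip_tol` here is an illustrative constant.

```python
import numpy as np

def quantize(v, bits):
    """Uniform quantization of a vector to 2^bits levels spanning its range."""
    lo, hi = v.min(), v.max()
    if hi == lo:
        return v.copy()
    levels = (1 << bits) - 1
    q = np.round((v - lo) / (hi - lo) * levels)
    return lo + q / levels * (hi - lo)

def lazy_quantized_update(grad, last_sent, bits=4, skip_tol=1e-3):
    """Lazily-aggregated quantization sketch: transmit a quantized gradient only
    if it changed enough since the last transmission, else reuse the cached one."""
    if np.linalg.norm(grad - last_sent) ** 2 <= skip_tol:
        return last_sent, False          # skip communication this round
    return quantize(grad, bits), True    # send a (quantized) update

g = np.array([0.1, -0.4, 0.25])
sent, did_send = lazy_quantized_update(g, last_sent=np.zeros(3))
```

The quantization error of each coordinate is bounded by half a quantization step, which is the kind of quantity AQUILA's adaptive bit selection trades off against communication cost.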
iclr_2022_L2V-VQ7Npl0 | Reward Learning as Doubly Nonparametric Bandits: Optimal Design and Scaling Laws | Specifying reward functions for complex tasks like object manipulation or driving is challenging to do by hand. Reward learning seeks to address this by learning a reward model using human feedback on selected query policies. This shifts the burden of reward specification to the optimal design of the queries. We propose a theoretical framework for studying reward learning and the associated optimal experiment design problem. Our framework models rewards and policies as nonparametric functions belonging to subsets of Reproducing Kernel Hilbert Spaces (RKHSs). The learner receives (noisy) oracle access to a true reward and must output a policy that performs well under the true reward. For this setting, we first derive non-asymptotic excess risk bounds for a simple plug-in estimator based on ridge regression. We then solve the query design problem by optimizing these risk bounds with respect to the choice of query set and obtain a finite sample statistical rate, which depends primarily on the eigenvalue spectrum of a certain linear operator on the RKHSs. Despite the generality of these results, our bounds are stronger than previous bounds developed for more specialized problems. We specifically show that the well-studied problem of Gaussian process (GP) bandit optimization is a special case of our framework, and that our bounds either improve or are competitive with known regret guarantees for the Mat\'ern kernel. | Reject | While the reviewers found several interesting points about the paper, they raised several issues, which prevents me from recommending acceptance of the paper. In particular, the paper is not positioned properly in the literature, hence the novelty and the contributions are not properly clarified. 
The approach of the paper is reasonably simple (which would be a good thing by itself), but there seem to be natural avenues along which more complete results could be obtained, as mentioned in the reviews. Finally, the experiments should be improved (e.g., comparing with other algorithms from the literature). In summary, this is a promising work, but it requires some improvements before it can be published. | train | [
"ih2RjBvbIa6",
"lxIDJRA9e_9",
"S7524Hqg6pG",
"hzkwfecgeAM",
"scmSDbDeDA8",
"wnyvUoF0dXK",
"x1SFf7UyQtc",
"EHmhknX35za",
"dmR9cabK3K",
"oaoENkmXY_A",
"qgQsrsNHtUr"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper is motivated by learning optimal actions in tasks where both the reward function, and policy (actions) are nonparametric. Previous literature has typically only considered one of these two components as being nonparametric. The main focus is on reliably identifying a policy with low instantaneous regret... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6
] | [
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"iclr_2022_L2V-VQ7Npl0",
"scmSDbDeDA8",
"iclr_2022_L2V-VQ7Npl0",
"ih2RjBvbIa6",
"hzkwfecgeAM",
"qgQsrsNHtUr",
"oaoENkmXY_A",
"dmR9cabK3K",
"iclr_2022_L2V-VQ7Npl0",
"iclr_2022_L2V-VQ7Npl0",
"iclr_2022_L2V-VQ7Npl0"
] |
iclr_2022_PeG-8G5ua3W | Normalized Attention Without Probability Cage | Despite the popularity of attention based architectures like Transformers, the geometrical implications of softmax-attention remain largely unexplored. In this work we highlight the limitations of constraining attention weights to the probability simplex and the resulting convex hull of value vectors. We show that Transformers are biased towards local information at initialization and sensitive to hyperparameters, contrast attention to max- and sum-pooling and show the performance implications of different architectures with respect to biases in the data. Finally, we propose to replace the softmax in self-attention with normalization, resulting in a generally applicable architecture that is robust to hyperparameters and biases in the data. We support our insights with empirical results from more than 30,000 trained models. Implementations are in the supplementary material. | Reject | This paper studies potential drawbacks of using softmax in Transformer attention and evaluates other normalization approaches. Reviewers, while positive about the empirical analysis and the insights from the synthetic data experiments, agree that the paper lacks real-world experiments/insights. I agree with that and believe the paper falls short in several areas.
1) Drawbacks of softmax: The paper states several generic drawbacks of softmax, such as the saturation issue leading to vanishing gradients. However, the paper does not demonstrate whether Transformer models used in practice suffer from this issue under standard training settings. Even the argument that the attention layer focuses on local information is quite vague and not well supported. Overall, the analysis is quite weak, without many concrete statements or demonstrations in real settings.
2) Experiments: The paper presents many synthetic experiments evaluating alternatives to softmax, varying from layer normalization to pooling. There are no experiments showing whether the studied variations actually solve the issues discussed in the earlier section. Finally, due to the lack of any real-world experiments (even small-scale ones), it is not clear if the results apply in real-world settings.
Overall, I think the paper needs significant work in formalizing the drawbacks of using softmax in Transformers and demonstrating that the proposed solutions indeed solve this problem.
"QWSDK79XhKS",
"gELQTQxAE2I",
"qpoGdyd7L6_",
"MugZ1P9tB4V",
"C2drfvKo5O4",
"WMD_lzx9bxH",
"xUOJi3ocAM_",
"RtPM_f7ETba",
"2m0st8y_Ako",
"bbOL3EDUTMn"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The response has partly resolved my concerns. I've raised my score to 6.",
"This paper starts from the observation that current self-attention modules is sensitive to hyperparameter changing. The authors conjecture that this is due to the softmax operator in self-attention, and give some intuitive examples to s... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
6,
5,
8
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"C2drfvKo5O4",
"iclr_2022_PeG-8G5ua3W",
"iclr_2022_PeG-8G5ua3W",
"bbOL3EDUTMn",
"gELQTQxAE2I",
"2m0st8y_Ako",
"RtPM_f7ETba",
"iclr_2022_PeG-8G5ua3W",
"iclr_2022_PeG-8G5ua3W",
"iclr_2022_PeG-8G5ua3W"
] |
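The paper in the row above proposes replacing the softmax in self-attention with normalization. The sketch below contrasts softmax weights (which live on the probability simplex, so outputs stay in the convex hull of the value vectors) with a simple L2-normalized alternative whose weights can be negative; this is a generic illustration, not the paper's exact layer.

```python
import numpy as np

def attn(q, k, v, mode="softmax"):
    """Scaled dot-product attention with either softmax weights or plain
    L2 normalization of the scores (weights may leave the simplex)."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    if mode == "softmax":
        w = np.exp(scores - scores.max(-1, keepdims=True))
        w = w / w.sum(-1, keepdims=True)          # rows on the probability simplex
    else:
        w = scores / (np.linalg.norm(scores, axis=-1, keepdims=True) + 1e-8)
    return w @ v, w

rng = np.random.default_rng(0)
q, k, v = rng.standard_normal((3, 4)), rng.standard_normal((5, 4)), rng.standard_normal((5, 4))
out_sm, w_sm = attn(q, k, v, "softmax")
out_nm, w_nm = attn(q, k, v, "norm")
```

With the normalized variant the output is no longer a convex combination of the values, which is exactly the "probability cage" constraint the paper seeks to drop.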
iclr_2022_OgCcfc1m0TO | Learning to Prompt for Vision-Language Models | Vision-language pre-training has recently emerged as a promising alternative for representation learning. It shifts from the tradition of using images and discrete labels for learning a fixed set of weights, seen as visual concepts, to aligning images and raw text for two separate encoders. Such a paradigm benefits from a broader source of supervision and allows zero-shot transfer to downstream tasks since visual concepts can be diametrically generated from natural language, known as prompt. In this paper, we identify that a major challenge of deploying such models in practice is prompt engineering. This is because designing a proper prompt, especially for context words surrounding a class name, requires domain expertise and typically takes a significant amount of time for words tuning since a slight change in wording could have a huge impact on performance. Moreover, different downstream tasks require specific designs, further hampering the efficiency of deployment. To overcome this challenge, we propose a novel approach named \emph{context optimization (CoOp)}. The main idea is to model context in prompts using continuous representations and perform end-to-end learning from data while keeping the pre-trained parameters fixed. In this way, the design of task-relevant prompts can be fully automated. Experiments on 11 datasets show that CoOp effectively turns pre-trained vision-language models into data-efficient visual learners, requiring as few as one or two shots to beat hand-crafted prompts with a decent margin and able to gain significant improvements when using more shots (e.g., at 16 shots the average gain is around 17\% with the highest reaching over 50\%). CoOp also exhibits strong robustness to distribution shift. | Reject | Given the increasing scale of large models (e.g. 
CLIP), there's an argument that we need better automated techniques for properly utilizing (prompting) these models. Given the success of prompt learning within pure NLP models, the authors apply the same approach to the V+L domain and show that it also is applicable here. Generally, reviewers felt that the results were clear and thorough, yet technically limited. The approach is not novel and the result not surprising. There is a documentary benefit to having this work out in the community for others to reference and extend. | train | [
"GClEN1b4a1M",
"UBvArpkLtaq",
"X5WAqRUtXIg",
"kQJtX83Zj0",
"dBpQ5TvSpYO",
"poAn9cNoHvx",
"10KOSclsdSw",
"iQ0p4tsDGS",
"uLmfK84ybVF",
"mo5m2Y33ekj",
"CwHrJmanoON"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Hi there --- just confirming that I saw this and am considering your response in my continued capacity as a reviewer for this work; thank you for detailing your thoughts.",
" > Despite extensive evaluation, the paper presents in very low quality and lack of description of key methods and the theory behind it.... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5,
1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
3,
5
] | [
"X5WAqRUtXIg",
"kQJtX83Zj0",
"iclr_2022_OgCcfc1m0TO",
"CwHrJmanoON",
"mo5m2Y33ekj",
"uLmfK84ybVF",
"iQ0p4tsDGS",
"iclr_2022_OgCcfc1m0TO",
"iclr_2022_OgCcfc1m0TO",
"iclr_2022_OgCcfc1m0TO",
"iclr_2022_OgCcfc1m0TO"
] |
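CoOp's main idea, as summarized above, is to keep the pretrained encoders frozen and learn only continuous context vectors that are prepended to each class-name embedding. Below is a toy numpy sketch of that forward scoring path; the mean-pool "text encoder", dimensions, and random embeddings are stand-ins, since a real CoOp setup would feed these tokens through CLIP's frozen text transformer and backprop a classification loss into `ctx` only.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_ctx, n_cls = 8, 4, 3
ctx = 0.02 * rng.standard_normal((n_ctx, d))     # shared learnable context vectors
cls_embed = rng.standard_normal((n_cls, d))      # frozen class-name token embeddings

def text_features(ctx, cls_embed):
    """Stand-in text encoder: mean-pool the [ctx; class] tokens and L2-normalize."""
    feats = np.stack([np.concatenate([ctx, c[None]], 0).mean(0) for c in cls_embed])
    return feats / np.linalg.norm(feats, axis=1, keepdims=True)

def logits(image_feat, ctx, cls_embed, temp=0.01):
    """Cosine-similarity classification logits between image and text features."""
    img = image_feat / np.linalg.norm(image_feat)
    return (img @ text_features(ctx, cls_embed).T) / temp

img = rng.standard_normal(d)                     # stand-in frozen image feature
scores = logits(img, ctx, cls_embed)
```

Hand-crafted prompt engineering corresponds to fixing `ctx` to the embeddings of words like "a photo of a"; CoOp instead learns these vectors end-to-end.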
iclr_2022_oVfIKuhqfC | Non-Denoising Forward-Time Diffusions | The scope of this paper is generative modeling through diffusion processes.
An approach falling within this paradigm is the work of Song et al. (2021), which relies on a time-reversal argument to construct a diffusion process targeting the desired data distribution.
We show that the time-reversal argument, common to all denoising diffusion probabilistic modeling proposals, is not necessary.
We obtain diffusion processes targeting the desired data distribution by taking appropriate mixtures of diffusion bridges.
The resulting transport is exact by construction, allows for greater flexibility in choosing the dynamics of the underlying diffusion, and can be approximated by means of a neural network via novel training objectives.
We develop a unifying view of the drift adjustments corresponding to our and to time-reversal approaches and make use of this representation to inspect the inner workings of diffusion-based generative models.
Finally, we leverage scalable simulation and inference techniques common in spatial statistics to move beyond fully factorial distributions in the underlying diffusion dynamics.
The methodological advances contained in this work contribute toward establishing a general framework for generative modeling based on diffusion processes. | Reject | This paper introduces a new method for diffusion-based generative modeling through a Brownian bridge formulation, where the data and latent variable can be coupled. They extend their method to mixtures of diffusion bridges and spatially correlated processes that go beyond the factorial diffusion processes used in prior work.
We thank the authors for engaging with the reviewers and addressing many of their detailed concerns. While reviewers agreed that the proposed theory and methodology were novel and interesting, there are no small or large scale experiments or empirical comparisons to the relevant prior work. In the absence of theoretical justification (bound or proof) as to why the proposed diffusion bridge mixture transport method would result in better performance, more empirical comparisons and evaluations are needed. Additionally, several reviewers found the presentation confusing and overly complex, including the notation, writing, and figures. Given the lack of experimental results and concerns over presentation, I’m inclined to reject this paper. | train | [
"uwuUjUx4IwY",
"f7tFUToKjXM",
"Jm6-sL5yEF",
"QgiZ3Gzy_j",
"1vx-EFsndxe",
"02TPXxQEV7b",
"An8iI0Ng2uN",
"spYTu859uwa",
"yEhUNIrUzbN",
"7bzZBcvrlry",
"6utX2FcCWiE",
"VQKzVKzd9UB",
"WA4iX6atAo2",
"pM8dy3b6TA",
"XcQFv91yGea",
"Y-aSfiBc0TW",
"A7YcqVNibHs"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" > I still find the experiment reported in Figure 1 to be hard to read [...] I think I understand how the weights are computed but an explicit formula (even in the appendix) would be much useful.\n \nWe are unable to revise the manuscript as we are past the allowed rebuttal time.\nWe can add this formula to the Ap... | [
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
5,
5
] | [
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
2
] | [
"f7tFUToKjXM",
"02TPXxQEV7b",
"iclr_2022_oVfIKuhqfC",
"XcQFv91yGea",
"A7YcqVNibHs",
"An8iI0Ng2uN",
"Y-aSfiBc0TW",
"yEhUNIrUzbN",
"7bzZBcvrlry",
"Jm6-sL5yEF",
"VQKzVKzd9UB",
"WA4iX6atAo2",
"iclr_2022_oVfIKuhqfC",
"iclr_2022_oVfIKuhqfC",
"iclr_2022_oVfIKuhqfC",
"iclr_2022_oVfIKuhqfC",
... |
iclr_2022_6ya8C6sCiD | Multi-Agent Language Learning: Symbolic Mapping | The study of emergent communication has long been devoted to coax neural network agents to learn a language sharing similar properties with human language. In this paper, we try to find a natural way to help agents learn a compositional and symmetric language in complex settings like dialog games. Inspired by the theory that human language was originated from simple interactions, we hypothesize that language may evolve from simple tasks to difficult tasks. We propose a novel architecture called symbolic mapping as a basic component of the communication system of agent. We find that symbolic mapping learned in simple referential games can notably promote language learning in difficult tasks. Further, we explore vocabulary expansion, and show that with the help of symbolic mapping, agents can easily learn to use new symbols when the environment becomes more complex. All in all, we probe into how symbolic mapping helps language learning and find that a process from simplicity to complexity can serve as a natural way to help multi-agent language learning. | Reject | This manuscript presents a novel approach to learning a shared language between multiple agents.
In general, reviewers had difficulty understanding the symbolic mapping component. For such a critical part of the manuscript, questions by multiple reviewers were extremely basic, asking what symbolic mapping even is. The authors did clarify this in the discussion and updated the manuscript, but further improvements to the manuscript are warranted.
Reviewers had concerns about the novelty of the approach, including confusion about whether this is just an application of curriculum learning. Reviewers were also concerned about the lack of ablations.
Reviewers also had concerns about the fact that this is a toy domain. Symbolic mapping as defined in the manuscript appears to be possible only for such toy domains. It fundamentally wouldn't scale to simple language games with real images. This significantly limits the scope of the work. More broadly, reviewers wanted to see symbolic mapping exercised much more. If this is a useful idea, they wanted to see the authors apply it to other domains.
Reviewers were confused about many other details in the manuscript, for example the fact that refdis is later discarded as a metric, which the authors answered is due to redundant symbols ("the symbolic mapping is not a highly compositional representation here because of the redundant symbols"). Why redundant symbols lead to less compositional representations remains unclear.
With significant additional improvements to the clarity of the manuscript, a demonstration of how symbolic mapping is useful in another domain, and the additional experiments suggested by multiple reviewers, this could be a strong submission in the future.
"IL_eoIId_Yp",
"EoPZsJzRUNE",
"AZJXqJJ5dGT",
"BcmxSwU_4MK",
"XXRgyuJkztU",
"eM4TqkEon3",
"t5IX0jgsI_C",
"JKqHkSDCOuq",
"8wFiaN3cRoD",
"rKAMJNXraQB",
"v5Cw45N_9v4",
"CwXp6yJ5Er2",
"TS9UEUc8L8a",
"lHnjIIQPAj",
"gXuhRes2ght",
"2l4azLGN8KY"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your careful review, and we will address grammatical mistakes.",
"This paper has several main contributions: (1) A novel architecture, symbolic mapping (SM), (2) Showing SM outperforms vanilla LSTM and NIL in terms of success rate, degree of compositionality, and degree of symmetry on the description... | [
-1,
6,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
6
] | [
-1,
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
4
] | [
"AZJXqJJ5dGT",
"iclr_2022_6ya8C6sCiD",
"BcmxSwU_4MK",
"EoPZsJzRUNE",
"iclr_2022_6ya8C6sCiD",
"t5IX0jgsI_C",
"8wFiaN3cRoD",
"XXRgyuJkztU",
"XXRgyuJkztU",
"EoPZsJzRUNE",
"2l4azLGN8KY",
"gXuhRes2ght",
"lHnjIIQPAj",
"iclr_2022_6ya8C6sCiD",
"iclr_2022_6ya8C6sCiD",
"iclr_2022_6ya8C6sCiD"
] |
iclr_2022_o9DnX55PEAo | Cross-Architecture Distillation Using Bidirectional CMOW Embeddings | Large pretrained language models (PreLMs) are revolutionizing natural language processing across all benchmarks. However, their sheer size is prohibitive for small laboratories or deployment on mobile devices. Approaches like pruning and distillation reduce the model size but typically retain the same model architecture. In contrast, we explore distilling PreLMs into a different, more efficient architecture CMOW, which embeds each word as a matrix and uses matrix multiplication to encode sequences. We extend the CMOW architecture and its CMOW/CBOW-Hybrid variant with a bidirectional component, per-token representations for distillation during pretraining, and a two-sequence encoding scheme that facilitates downstream tasks on sentence pairs such as natural language inferencing. Our results show that the embedding-based models yield scores comparable to DistilBERT on QQP and RTE, while using only half of its parameters and providing three times faster inference speed. We match or exceed the scores of ELMo, and only fall behind more expensive models on linguistic acceptability. Still, our distilled bidirectional CMOW/CBOW-Hybrid model more than doubles the scores on linguistic acceptability compared to previous cross-architecture distillation approaches. Furthermore, our experiments confirm the positive effects of bidirection and the two-sequence encoding scheme. | Reject | This paper presents a method for distilling pretrained models (such as BERT) into a different student architecture (CMOW), and extends the CMOW architecture with a bidirectional component. On a couple of datasets, results are comparable to DistilBERT, a previous baseline. This paper is nice, but could be stronger with more empirical experiments on non-GLUE tasks (TriviaQA, Natural Questions, and SQuAD, for example).
Furthermore, I agree with Reviewer M3tk that many empirical comparisons with baselines such as TinyBERT are missing, and that the argument for not needing the teacher model is not super convincing.
"IVhmiDE7jl_",
"S9N9sD-IL3",
"kbiiXznY4hL",
"QHAFy4Ztr7A",
"IpUQRJPjYN6",
"ZsfpmNK6YFP",
"RykvEvCRu3v",
"CYhkub6LfA",
"tC4EV7mCIh1",
"xElIeb0YV6a",
"NoiuP13gdgW",
"T7m0J4s_zGl",
"worx0ExiHWu",
"xOwQ6qJCfDJ"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for going so much into the details. We pretty much agree with the statements above.\n\nTransformers have $\\mathcal{O}(L^2)$ compute effort (the attention scores), and $\\mathcal{O}(1)$ sequential steps.\n\nCMOW/CBOW-Hybrid have $\\mathcal{O}(L)$ compute effort and $\\mathcal{O}(\\log L)$ sequential ste... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
3
] | [
"kbiiXznY4hL",
"NoiuP13gdgW",
"QHAFy4Ztr7A",
"IpUQRJPjYN6",
"RykvEvCRu3v",
"xElIeb0YV6a",
"CYhkub6LfA",
"tC4EV7mCIh1",
"xOwQ6qJCfDJ",
"worx0ExiHWu",
"T7m0J4s_zGl",
"iclr_2022_o9DnX55PEAo",
"iclr_2022_o9DnX55PEAo",
"iclr_2022_o9DnX55PEAo"
] |
iclr_2022_yql6px0bcT | Decentralized Cross-Entropy Method for Model-Based Reinforcement Learning | Cross-Entropy Method (CEM) is a popular approach to planning in model-based reinforcement learning.
It has so far always taken a \textit{centralized} approach where the sampling distribution is updated \textit{centrally} based on the result of a top-$k$ operation applied to \textit{all samples}.
We show that such a \textit{centralized} approach makes CEM vulnerable to local optima and impairs its sample efficiency, even in a one-dimensional multi-modal optimization task.
In this paper, we propose \textbf{Decent}ralized \textbf{CEM (DecentCEM)} where an ensemble of CEM instances run independently from one another and each performs a local improvement of its own sampling distribution.
In the exemplar optimization task, the proposed decentralized approach DecentCEM finds the global optimum much more consistently than the existing CEM approaches that use either a single Gaussian distribution or a mixture of Gaussians.
Further, we extend the decentralized approach to sequential decision-making problems where we show in 13 continuous control benchmark environments that it matches or outperforms the state-of-the-art CEM algorithms in most cases, under the same budget of the total number of samples for planning. | Reject | This paper presents a decentralized version of the CEM technique, where an ensemble of CEM instances run independently from one another and each performs a local improvement of its own sampling distribution. The paper shows that the proposed technique can alleviate the problem of centralized CEM related to converging to a local optimum. The paper includes a theoretical analysis and simulation experiments that show some benefits of the proposed technique over centralized CEM.
The key criticisms from the reviewers include the straightforward nature of the proposed idea, which limits the technical contribution of the paper, as well as the limited improvements over centralized CEM in the simulation experiments.
In summary, this is a borderline paper. While the paper is well-written and the proposed approach is clearly explained, the lack of strong empirical results showing a pronounced improvement of decentralized CEM, coupled with the incremental nature of the idea, makes me lean toward rejection. | val | [
"Lry480Z4xu",
"f1J9yMOF7cc",
"noMlDHrcB1",
"ObttuMcyfVK",
"pYOeU4XPZNR",
"hR4SavymTX_",
"POS7wOUxKYb",
"2rZ0Ow81oqV",
"XDk5-KX9J4G",
"gAtX29KE-4k",
"lXO7QWtjEP",
"tuw9LM4m20h",
"2SOQ__AOqM",
"xbCvc4zwVYU"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Dear reviewer, \n\nWe appreciate you taking the time to read through our response and the other reviews. \nThank you for providing feedback on the revision.\n\nWe would like to clarify the theoretical analysis a bit since we are not sure if the comment about \"fancy theorem\" was from some misunderstanding. Your ... | [
-1,
-1,
-1,
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
-1,
-1,
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"noMlDHrcB1",
"hR4SavymTX_",
"tuw9LM4m20h",
"iclr_2022_yql6px0bcT",
"iclr_2022_yql6px0bcT",
"gAtX29KE-4k",
"iclr_2022_yql6px0bcT",
"XDk5-KX9J4G",
"ObttuMcyfVK",
"lXO7QWtjEP",
"pYOeU4XPZNR",
"2SOQ__AOqM",
"xbCvc4zwVYU",
"iclr_2022_yql6px0bcT"
] |
iclr_2022_xVGrCe5fCXY | Denoising Diffusion Gamma Models | Generative diffusion processes are an emerging and effective tool for image and speech generation. In the existing methods, the underlying noise distribution of the diffusion process is Gaussian noise. However, fitting distributions with more degrees of freedom could improve the performance of such generative models. In this work, we investigate other types of noise distribution for the diffusion process. Specifically, we introduce the Denoising Diffusion Gamma Model (DDGM) and show that noise from Gamma distribution provides improved results for image and speech generation. Our approach preserves the ability to efficiently sample state in the training diffusion process while using Gamma noise. | Reject | This paper explores replacing the Gaussian noise typically used in diffusion-based generative models with noise from other distributions, specifically the Gamma distribution. The effect of this change is studied empirically for both image and speech generation.
Reviewers welcomed the exploration of the design space of diffusion models, and several reviewers consider the study of alternative noise distributions in particular an important contribution. They also raised several issues with precision and clarity (several mistakes in the manuscript were pointed out), the quality of the experiments, and, especially, a lack of convincing motivation for this exploration / sufficient demonstration of its impact.
While the authors have made a significant effort to address the reviewers' comments and suggestions, which includes running additional experiments, all reviewers have nevertheless chosen borderline ratings, with half erring on the side of rejection, and the other half tentatively recommending acceptance.
I am inclined to agree that, as it stands, the benefit of the proposed change of noise distribution is not convincingly shown to outweigh the additional complexity this introduces, so I am also recommending rejection. | train | [
"qP1RlGyEmZ8",
"a7jmB7yoWO",
"I6pfi32x8OH",
"q9J4LP_IaOp",
"smiaSEYpdrZ",
"_D0orJMgzpb",
"6ffYbRxSKaP",
"-Tjtak53PcK",
"A1xOv_FmqJH",
"jjjxgfkR_4_"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"## Summary\nThis paper explores the use of a non-Gaussian diffusion process for Diffusion Probabilistic Models. Unlike the original work by Ho et al., the authors replace the diffusion process with a Markov chain with transition kernel defined by a Gamma distribution. They show that the similar (and necessary) p... | [
5,
-1,
6,
-1,
6,
-1,
-1,
-1,
-1,
5
] | [
3,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2022_xVGrCe5fCXY",
"q9J4LP_IaOp",
"iclr_2022_xVGrCe5fCXY",
"-Tjtak53PcK",
"iclr_2022_xVGrCe5fCXY",
"qP1RlGyEmZ8",
"I6pfi32x8OH",
"jjjxgfkR_4_",
"smiaSEYpdrZ",
"iclr_2022_xVGrCe5fCXY"
] |
iclr_2022_qPzR-M6HY8x | Approximating Instance-Dependent Noise via Instance-Confidence Embedding | Label noise in multiclass classification is a major obstacle to the deployment of learning systems. However, unlike the widely used class-conditional noise (CCN) assumption that the noisy label is independent of the input feature given the true label, label noise in real-world datasets can be aleatory and heavily dependent on individual instances. In this work, we investigate the instance-dependent noise (IDN) model and propose an efficient approximation of IDN to capture the instance-specific label corruption. Concretely, noting the fact that most columns of the IDN transition matrix have only limited influence on the class-posterior estimation, we propose a variational approximation that uses a single-scalar confidence parameter. To cope with the situation where the mapping from the instance to its confidence value could vary significantly for two adjacent instances, we suggest using instance embedding that assigns a trainable parameter to each instance. The resulting instance-confidence embedding (ICE) method not only performs well under label noise but also can effectively detect ambiguous or mislabeled instances. We validate its utility on various image and text classification tasks. | Reject | The paper proposes a new method for the problem of learning under instance-dependent noise (IDN). The idea is to construct a variational approximation to the ideal training objective, which involves learning a single scalar C(x) per instance. In turn, each such scalar is treated as an additional parameter to be learned by the network.
Reviewers generally found the basic idea of the proposal to be interesting and novel, with the response clarifying some initial questions on the design of the network to learn C(x). The paper is also well-written, and presents experiments on image and text classification benchmarks. Some concerns were however raised:
(1) _Limited theoretical justification_. There is limited formal analysis of when the proposed method can work well.
(2) _Lack of comparison to IDN baselines_. The original submission did not include any IDN baselines as comparison. The revision included results of the method of (Zhang et al., '21a), which is on-par or better than the proposed method; it seems that this baseline really ought to have been included in the original submission, but it is appreciated that these have been added. A related concern was the marginal gains over the GCE method on the CIFAR datasets.
(3) _Sufficiency of learning a single parameter_. The paper learns a single scalar per sample. Several reviewers were unsure about the sufficiency of this parameter to capture the underlying noise distribution.
For (1), the authors acknowledge theoretical analysis as an important future direction. This is perfectly reasonable, but does then require weighting more any issues with the the conceptual and empirical contributions of the paper.
For (2), the response clarified that most of these operate either in the binary setting, or require auxiliary information. This is a valid motivation for the present work; it would however be more compelling to include results in a binary setting, to better understand the strengths and weaknesses compared to existing proposals. The response also clarified the present method does not claim to improve upon state-of-the-art performance, but rather proposes a simple method which has additional applications (as shown in Appendix E). This is a reasonable claim; however, to my taste, there is insufficient discussion of the PLC method (Zhang et al., '21a), and what new conceptual information the present work offers.
For (3), the response argued that the present results already demonstrate the efficacy of using a single parameter, and that using multiple parameters can be studied in future work. One reviewer was not convinced of the efficacy being shown in some of the results in Appendix E. It could strengthen the work if there is an empirical analysis of when the single parameter assumption starts to break down; e.g., perhaps under increasing levels of CCN noise?
Overall, the paper has interesting ideas and some nice analyses. At the same time, there was clear scope for improvement in the original submission. This was partially addressed in the revision, but given that several domain experts retain reservations (particularly in regards to comparisons against prior IDN works), it is encouraged that the authors incorporate the above comments for a future version of the paper. | val | [
"_OU6W2SZaRs",
"e_8lOWcQndC",
"x3isZ-Dvf0y",
"9YWrUK85Qg5",
"xOePpVdDeiG",
"ffqcgQLgOB0",
"Rw_gOuduHdT",
"9kTdyKxpR6",
"GzHrfBYF8nE",
"mhbJ4qORylq",
"H-7BpuSjdD4",
"IrlxpcJPpdC",
"8eXoWtQwqUY",
"XkEcSrWV2eH",
"8E5Y9-_JXgj",
"3UYyDOzvZL5"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks authors for the detailed explanations about the use of single scalar as well as the intuition of $g$. My unsolved concern is that:\n\nIn my first point **A single-scalar V.S. $K\\times K$ noise transition matrix**, I agree with the authors that this can be beneficial in mislabeled instance detection. Howev... | [
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"H-7BpuSjdD4",
"mhbJ4qORylq",
"xOePpVdDeiG",
"Rw_gOuduHdT",
"IrlxpcJPpdC",
"iclr_2022_qPzR-M6HY8x",
"8eXoWtQwqUY",
"mhbJ4qORylq",
"mhbJ4qORylq",
"3UYyDOzvZL5",
"8E5Y9-_JXgj",
"XkEcSrWV2eH",
"ffqcgQLgOB0",
"iclr_2022_qPzR-M6HY8x",
"iclr_2022_qPzR-M6HY8x",
"iclr_2022_qPzR-M6HY8x"
] |
iclr_2022_dtpgsBPJJW | Riemannian Manifold Embeddings for Straight-Through Estimator | Quantized Neural Networks (QNNs) aim at replacing full-precision weights $\boldsymbol{W}$ with quantized weights $\boldsymbol{\hat{W}}$, which make it possible to deploy large models to mobile and miniaturized devices easily. However, either infinite or zero gradients caused by non-differentiable quantization significantly affect the training of quantized models. In order to address this problem, most training-based quantization methods use Straight-Through Estimator (STE) to approximate gradients $\nabla_{\boldsymbol{W}}$ w.r.t. $\boldsymbol{W}$ with gradients $\nabla_{\boldsymbol{\hat{W}}}$ w.r.t. $\boldsymbol{\hat{W}}$ where the premise is that $\boldsymbol{W}$ must be clipped to $[-1,+1]$. However, the simple application of STE brings with it the gradient mismatch problem, which affects the stability of the training process. In this paper, we propose to revise an approximated gradient for penetrating the quantization function with manifold learning. Specifically, by viewing the parameter space as a metric tensor in the Riemannian manifold, we introduce the Manifold Quantization (ManiQuant) via revised STE to alleviate the gradient mismatch problem. The ablation studies and experimental results demonstrate that our proposed method has a better and more stable performance with various deep neural networks on CIFAR10/100 and ImageNet datasets. | Reject | The paper seeks to improve straight-through estimators by combining them with the ideas for correcting the step direction to be closer to a natural gradient.
While some (modest) improvements are demonstrated experimentally, the paper critically lacks technical correctness and has quite a few gaps when trying to derive the algorithm from the natural gradient and the Cramér-Rao bound. See public comments by reviewers and AC. The algorithm ends up being mirror descent with a mirror map, which is cheap to compute but not particularly well motivated. Moreover, the application of mirror descent to the activations (unlike the weights) is not well justified. The paper is rather unclear and hard to read, also language-wise. Please proofread _before_ submitting. | train | [
"jOlhz9iN-Zj",
"wwZSA_W1fiH",
"YCpgxXuUlhZ",
"pOTTRmcklUY",
"clSS65r2Z_A"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The paper is difficult to read language-wise (grammatical errors, unclear, disconnected) and has too many technical errors, preventing understanding of the paper and making its theoretical statements uninterpretable or false. In my opinion, authors should have prepared the submission more carefully by proofreadin... | [
-1,
3,
3,
6,
6
] | [
-1,
3,
3,
3,
4
] | [
"iclr_2022_dtpgsBPJJW",
"iclr_2022_dtpgsBPJJW",
"iclr_2022_dtpgsBPJJW",
"iclr_2022_dtpgsBPJJW",
"iclr_2022_dtpgsBPJJW"
] |
iclr_2022_JSsjw8YuG1P | PERSONALIZED LAB TEST RESPONSE PREDICTION WITH KNOWLEDGE AUGMENTATION | Personalized medical systems are rapidly gaining traction as opposed to “one size fits all” systems. The ability to predict patients’ lab test responses and provide justification for the predictions would serve as an important decision support tool and help clinicians tailor treatment regimes for patients. This requires one to model the complex interactions among different medications, diseases, and lab tests. We also need to learn a strong patient representation, capturing both the sequential information accumulated over the visits and information from other similar patients. Further, we model the drug-lab interactions and diagnosis-lab interactions in the form of graphs and design a knowledge-augmented approach to predict patients’ response to a target lab result. We also take into consideration patients' past lab responses to personalize the prediction. Experiments on the benchmark MIMIC-III and a real-world outpatient dataset demonstrate the effectiveness of the proposed solution in reducing prediction errors by a significant margin. Case studies show that the identified top factors influencing the predicted lab results are consistent with the clinicians' understanding. | Reject | The paper uses several types of information to predict one specific lab test response for patients. The predictions are made by combining and tailoring mainly existing techniques.
The reviewers raised a number of concerns, and the authors clarified many of them and provided additional results. In particular, the following issues were discussed: comparison to state-of-the-art methods and to methods having the same information available, specifics of the empirical evaluations and of the methodological novelty, choice of the particular data sets, and justifiability of the conclusions.
The main remaining weakness is the limited novelty, which should not be interpreted as the contributions of the paper being trivial. On the contrary, the solid engineering work done by the authors in this paper will be valuable in developing clinical decision support tools, and the authors are encouraged to incorporate the new results and feedback in future work and submissions. | train | [
"8EaaBvHMbtm",
"HKiZwHiKySn",
"RxgRacrQq_C",
"4BcHTNI-KHC",
"3qR_SYJ3I6C",
"JgQlrM_LIFx",
"3t3efj800B",
"1Okcf_G4dFx",
"0_ECNvDiG6t",
"Yza4mQFlZyw"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear authors, \nThank you for your response, and apologies for the late reply. I believe your responses address my remaining concerns. One note is that you should ideally make it clearer in the manuscript (and perhaps the table legend) that BEHRT was also provided with the medication data.\nGood luck! \n",
"T... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
5,
5,
3
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"JgQlrM_LIFx",
"iclr_2022_JSsjw8YuG1P",
"iclr_2022_JSsjw8YuG1P",
"0_ECNvDiG6t",
"Yza4mQFlZyw",
"HKiZwHiKySn",
"1Okcf_G4dFx",
"iclr_2022_JSsjw8YuG1P",
"iclr_2022_JSsjw8YuG1P",
"iclr_2022_JSsjw8YuG1P"
] |
iclr_2022_AypVMhFfuc5 | FrugalMCT: Efficient Online ML API Selection for Multi-Label Classification Tasks | Multi-label classification tasks such as OCR and multi-object recognition are a major focus of the growing machine learning as a service industry. While many multi-label APIs are available, it is challenging for users to decide which API to use for their own data and budget, due to the heterogeneity in their prices and performance. Recent work has shown how to efficiently select and combine single label APIs to optimize performance and cost. However, its computation cost is exponential in the number of labels, and is not suitable for settings like OCR. In this work, we propose FrugalMCT, a principled framework that adaptively selects the APIs to use for different data in an online fashion while respecting the user’s budget. It allows combining ML APIs’ predictions for any single data point, and selects the best combination based on an accuracy estimator. We run systematic experiments using ML APIs from Google, Microsoft, Amazon, IBM, Tencent, and other providers for tasks including multi-label image classification, scene text recognition, and named entity recognition. Across these tasks, FrugalMCT can achieve over 90% cost reduction while matching the accuracy of the best single API, or up to 8% better accuracy while matching the best API’s cost. | Reject | The authors study a practical problem of selecting/combining existing multi-label classification APIs under a budget constraint for a specific problem instance on hand. The task can be viewed as an (online) integer programming problem when given an accuracy estimator for the combination performance. The authors relax the integer constraints and propose a framework to solve the task in the dual form. They also run experiments to validate that the proposed framework is advantageous (cost or accuracy-wise) over the best single API.
Most of the reviewers are positive about the practical value and the potential impacts of the work in applications/products/services. There are several disputes between the authors and some reviewers that could not be fully resolved during the rebuttal. In the end, no reviewer expressed willingness to strongly champion the acceptance of the paper, making the paper a borderline case. The decision is based on a careful examination of the current manuscript and every side's opinions.
* Novelty: Some reviewers question the novelty of the work. There are two aspects to novelty: one is whether the problem itself is novel (are the authors trying to propose a new multi-label method?). In this aspect, the authors' response, which states that they are not aiming at proposing a new method, but at solving an automation task for MLaaS users, appears believable. The other aspect is whether the solution technique, namely the relaxed integer programming and other techniques, is sufficiently novel. Some reviewers find the novelty aspect satisfactory, while others believe that the proposed optimization technique has been widely used in the machine learning community. The authors did not clarify the similarity/difference of the proposed technique to existing ones during the rebuttal. In this sense, the technical novelty is not well justified.
* Speed: Some reviewers are concerned about different aspects of the running time and other costs. The authors emphasized the rapid speed in the inference phase, particularly in Figure 3. Less is discussed about the time needed for the training phase (although the authors claim it to be much smaller than the inference time)---somehow even the most positive reviewers have some questions about this aspect. The authors could add more clarification about the different "time" costs to the discussion. One dispute between some reviewers and the authors is about the *complexity* analysis of time, which is indeed missing in the current manuscript and can be a nice-to-have for future todos.
* Theoretical Guarantee: One major dispute between some reviewers and the authors is on the theoretical guarantee provided. The reviewers suggest a regret-style bound, which compares the solution to the worst-case sequence; the authors provide an optimization-style bound, which compares the solution to the absolute optimal solution. Different bounds have their different roles for supporting the framework. Given that the authors have provided some reasonable bounds, the lack of regret bound is not taken against the authors.
* Specialty: One concern raised by some reviewers is that the technique does not seem particularly tailored for multi-label classification (except some minor parts). In this sense, it would be nice for the authors to discuss the wider applicability of the technique further, and/or incorporate more of the specifics of the multi-label classification problem into the technique design.
After taking all the factors above into account, and calibrating the received scores to the distribution across the papers, it seems that the paper could use some more revision before being mature enough as an impactful work. | train | [
"dvp9vi3gcDu",
"D2T2aV6iAKT",
"uLEc5TgE59K",
"GDUpPqqgyU",
"xsKjRUBwrpD",
"AKFJ1MFnKny",
"rDT-c6WgJlP",
"JnyKzrKr0C8",
"okULlBnz3I",
"RmYlVnovhuN"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer, \n\nThank you very much for your time and feedback! We hope our response has answered your questions and you'd consider raising your score in light of it. In particular, several points you had asked about--runtime of the method and regret bound--were in the original paper and we highlighted the rel... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
3,
8,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
3
] | [
"JnyKzrKr0C8",
"JnyKzrKr0C8",
"RmYlVnovhuN",
"okULlBnz3I",
"rDT-c6WgJlP",
"iclr_2022_AypVMhFfuc5",
"iclr_2022_AypVMhFfuc5",
"iclr_2022_AypVMhFfuc5",
"iclr_2022_AypVMhFfuc5",
"iclr_2022_AypVMhFfuc5"
] |
iclr_2022_Kmsf3z-vGu | Gradient-based Meta-solving and Its Applications to Iterative Methods for Solving Differential Equations | In science and engineering applications, it is often required to solve similar computational problems repeatedly. In such cases, we can utilize the data from previously solved problem instances to improve efficiency of finding subsequent solutions. This offers a unique opportunity to combine machine learning (in particular, meta-learning) and scientific computing. To date, a variety of such domain-specific methods have been proposed in the literature, but a generic approach for designing these methods remains under-explored. In this paper, we tackle this issue by formulating a general framework to describe these problems, and propose a gradient-based algorithm to solve them in a unified way. As an illustration of this approach, we study the adaptive generation of initial guesses for iterative solvers to speed up the solution of differential equations. We demonstrate the performance and versatility of our method through theoretical analysis and numerical experiments. | Reject | The authors define the task of solving a family of differential equations as a task of gradient-based meta-learning generalizing the gradient-based model agnostic meta-learning to problems with differentiable solvers. According to the reviews, there were some concerns regarding the practical value of the paper, for example, (1) the proposed technology is restricted to linear systems, and relatively easy problems (2) there is no demonstration of practical application utility (3) It lacks systematic comparison with other methods (4) some technical details are missing. There were quite a lot of discussions on the paper among the reviewers, and the consensus is that the paper is not solid enough for publication at ICLR in its current form (the reviewer who gave the highest score is less confident and does not want to champion the paper). | train | [
"S2D2DCTcjrM",
"MELrSuIBYJp",
"9Z0mh92tOC",
"kaPRMigFHg3",
"8yq0NZRFgDO",
"ZASjYCKu9U",
"PVi9PGeB53_",
"I_4BQMgcJWJ",
"cevAn8IxeSB",
"LYA27w2oZTm",
"iDK9NAl6r8U",
"6re0lxCK3LK"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I have read other reviewers' comments and the discussion. The reviewers and the authors agree that the contribution of the paper is that it proposes a general framework---gradient-based meta solving, and some other algorithms can be formulated in such way. This sounds very \"fancy\", but the paper failed to show ... | [
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
3
] | [
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"ZASjYCKu9U",
"I_4BQMgcJWJ",
"iclr_2022_Kmsf3z-vGu",
"PVi9PGeB53_",
"cevAn8IxeSB",
"6re0lxCK3LK",
"9Z0mh92tOC",
"iDK9NAl6r8U",
"LYA27w2oZTm",
"iclr_2022_Kmsf3z-vGu",
"iclr_2022_Kmsf3z-vGu",
"iclr_2022_Kmsf3z-vGu"
] |
iclr_2022_4GBHVfEcmoS | Propagating Distributions through Neural Networks | We propose a new approach to propagating probability distributions through neural networks. To handle non-linearities, we use local linearization and show this to be an optimal approximation in terms of total variation for ReLUs. We demonstrate the advantages of our method over the moment matching approach popularized in prior works. In addition, we formulate new loss functions for training neural networks based on distributions. To demonstrate the utility of propagating distributions, we apply it to quantifying prediction uncertainties. In regression tasks we obtain calibrated confidence intervals, and in a classification setting we improve selective prediction on out-of-distribution data. We also show empirically that training with our uncertainty aware losses improves robustness to random and adversarial noise. | Reject | The paper discusses propagating input uncertainty through non-linear layers by a simple local linearization approach. This is a straightforward idea and the authors explain how this is an optimal approximation of the propagated distribution for total variation and the ReLU non-linearity (for a single layer). This is an interesting (if quite limited) theoretical result. What is not clearly stated is that this result only holds for a single layer. It does not mean that the local approach is the best way to approximate (in the total variation sense) a distribution passed through multiple ReLU layers.
By repeating the procedure, the authors are able to define closed form objective functions for noise-robust training of deep networks.
The reviewers found this an interesting paper and there was a good effort by the authors to improve the results. However, technical innovation is modest and reviewer doubts still remain. For that reason, clarity of presentation is critically important. The overall numerical score isn't convincing, with one reviewer remaining very unconvinced.
I agree with the reviewers that the technical contribution is quite limited and I would argue is not particularly well explained. For example, a simple alternative would be to use a "global" linearization in which one can consider the network function $f$ as a whole, and then simply linearize this (rather than linearizing each layer). Indeed, the way that the paper is written, this would be a natural interpretation since $f$ is defined in the introduction as the network function, but is used later differently (eg section 3.2) as the transfer function. The approach is to recursively compute a new mean and covariance for each layer, propagating these through the network (similar to moment matching approaches). It would have helped if the authors had made pseudocode for their approach. It would be natural and interesting to compare to the simple global linearization approach (which is computationally faster).
The presentation of results and experiments could be improved. For example, in figure 3, it is not clear (nor is it explained in the text) what the definition of "robust accuracy" is.
Given the modest technical innovation, I also feel that the clarity of presentation isn't yet at the level that would merit acceptance. The paper would also benefit from some deeper insight into why the approach might perform better than other approaches (such as local moment matching) at the network level (rather than a single layer). | test | [
"j6I8G5ZrsK4",
"ICs9XrslYy",
"HN509EAfD1C",
"WGom3br2By",
"9wbRbfC807t",
"-9hyW9lVSs6",
"8OLBffajK_",
"VkCXzDjPdDv",
"4whuyUHiy5K",
"Bqhp9U36i_7",
"PYchfAutw7C",
"cceuuffqsu",
"-D8_Tr7G9R9",
"9bibwNBJMyf"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a method to propagate uncertainties through neural networks by performing a first-order approximation at nonlinear activation functions. By injecting uncertainty into the inputs, prediction uncertainties are obtained. These prediction uncertainties are exploited at training time by using new unc... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
4
] | [
"iclr_2022_4GBHVfEcmoS",
"VkCXzDjPdDv",
"9bibwNBJMyf",
"9bibwNBJMyf",
"j6I8G5ZrsK4",
"j6I8G5ZrsK4",
"j6I8G5ZrsK4",
"j6I8G5ZrsK4",
"-D8_Tr7G9R9",
"cceuuffqsu",
"iclr_2022_4GBHVfEcmoS",
"iclr_2022_4GBHVfEcmoS",
"iclr_2022_4GBHVfEcmoS",
"iclr_2022_4GBHVfEcmoS"
] |
iclr_2022_sS0dHmaH1I | Fast Adaptive Anomaly Detection | The ability to detect anomalies has long been recognized as an inherent human ability, yet to date, practical AI solutions to mimic such capability have been lacking. This lack of progress can be attributed to several factors. To begin with, the distribution of “abnormalities” is intractable. Anything outside of a given normal population is by definition an anomaly. This explains why a large volume of work in this area has been dedicated to modeling the normal distribution of a given task followed by detecting deviations from it. This direction is however unsatisfying as it would require modeling the normal distribution of every task that comes along, which includes tedious data collection. In this paper, we report our work aiming to handle these issues. To deal with the intractability of abnormal distribution, we leverage Energy Based Models (EBMs). EBMs learn to associate low energies to correct values and higher energies to incorrect values. At its core, the EBM employs Langevin Dynamics (LD) in generating these incorrect samples based on an iterative optimization procedure, alleviating the intractable problem of modeling the world of anomalies. Then, in order to avoid training an anomaly detector for every task, we utilize an adaptive sparse coding layer. Our intention is to design a plug and play feature that can be used to quickly update what is normal during inference time. Lastly, to avoid tedious data collection, this mentioned update of the sparse coding layer needs to be achievable with just a few shots. Here, we employ a meta learning scheme that simulates such a few shot setting during training. We support our findings with strong empirical evidence. | Reject | This work tries to tackle the problem of anomaly detection across different tasks. To do so, the authors employ energy-based models (EBMs) and define an outlier score in terms of the EBM energies, having a shared sparse code for different tasks.
This pipeline is tested on some image and video anomaly datasets for industrial inspection.
The reviewers highlighted some concerns that need to be addressed before the paper is ready for publication.
First, a revision could benefit from a rewrite that clearly formalizes the learning problem from page 2 and then discusses the possible modeling options given i) the task at hand and ii) some efficiency requirements.
Second, concerning the modeling choices of the proposed pipeline, the motivation behind the choice of EBMs should be strengthened. For example, it is not clear why the proposed sparse coding could be used for any other latent variable probabilistic model. As observed by one reviewer, the pros of having energies instead of probabilities (or just reconstructions from a deterministic autoencoder) are not discussed sufficiently. Additionally, the heuristic of running Langevin dynamics for only 5 steps should be backed up by stronger empirical evidence, as it lacks theoretical justification, and it should be discussed how long the Markov chain should be run to obtain sensible negative samples.
Third, conclusions drawn from the experiments on the provided benchmarks seem preliminary. For instance, a new revision could benefit from adding a statistical significance analysis to the reported accuracies. I appreciate that the authors added further ablation studies, including experiments on contaminated data, in the latest revision. I suggest they extend the experimental suite to more benchmarks, including those commonly used for anomaly detection.
"moSgZxYTZvU",
"TPHDSwq9Az-",
"7Hd1ktE3Z8p",
"voX3fEbgx_5",
"iMwnXBIso3z",
"m2Xh_IqVlk",
"SZQ57lRpvsB",
"7JKIsZpFTL-",
"vTre9Owal6f",
"IfVjLa5G-W",
"bTGe3YpARgd",
"_gCmFqMziZD"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer **d5Wi** and reviewer **Mzn6**,\n\nThank you again for your time reviewing our paper. We have tried hard to respond to your comments. As the final deadline of the discussion period is approaching in one day, we would like to know if our responses have addressed your concerns. And please do not hesit... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8,
3,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
4,
3,
4
] | [
"iclr_2022_sS0dHmaH1I",
"iclr_2022_sS0dHmaH1I",
"IfVjLa5G-W",
"bTGe3YpARgd",
"_gCmFqMziZD",
"vTre9Owal6f",
"7JKIsZpFTL-",
"iclr_2022_sS0dHmaH1I",
"iclr_2022_sS0dHmaH1I",
"iclr_2022_sS0dHmaH1I",
"iclr_2022_sS0dHmaH1I",
"iclr_2022_sS0dHmaH1I"
] |
iclr_2022_gaYko_Y2_l | Weakly Supervised Graph Clustering | Graph Clustering, which clusters the nodes of a graph given its collection of node features and edge connections in an unsupervised manner, has long been researched in graph learning and is essential in certain applications. While this task is common, more complex cases arise in practice—can we cluster nodes better with some graph-level side information or in a weakly supervised manner as, for example, identifying potential fraud users in a social network given additional labels of fraud communities. This triggers an interesting problem which we define as Weakly Supervised Graph Clustering (WSGC). In this paper, we firstly discuss the various possible settings of WSGC, formally. Upon such discussion, we investigate a particular task of weakly supervised graph clustering by making use of the graph labels and node features, with the assistance of a hierarchical graph that further characterizes the connections between different graphs. To address this task, we propose Gaussian Mixture Graph Convolutional Network (GMGCN), a simple yet effective framework for learning node representations under the supervision of graph labels guided by a proposed consensus loss and then inferring the category of each node via a Gaussian Mixture Layer (GML). Extensive experiments are conducted to test the rationality of the formulation of weakly supervised graph clustering. The experimental results show that, with the assistance of graph labels, the weakly supervised graph clustering method has a great improvement over the traditional graph clustering method. | Reject | Existing methods for graph clustering usually use node/edge information, but ignore graph-level information. This paper proposes incorporating graph-level labels into graph clustering and formulating the new problem as weakly supervised graph clustering. The paper further proposes Gaussian Mixture Graph Convolutional Network (GMGCN) framework for the task. 
Experimental results on several datasets demonstrate the effectiveness of the method.
The authors are very active in answering questions by the reviewers. They have successfully addressed some of the issues. However, there are still questions that remain unaddressed. The submission is not of the quality of ICLR papers.
Strengths
* A new method is proposed.
* The proposed methods outperform baseline models on the given datasets and synthetic datasets.
Weaknesses
* The explanations are not clear enough. Although the authors provide detailed responses to the reviews, the problems indicated by the reviewers are still not well addressed.
* The proposed method seems to be too complicated.
* It is not clear why the proposed method works.
* The problem studied might not be realistic.
----
Here is a summary of the reviewers' final comments.
* Reviewer oDis slightly increased the score.
* Reviewer r2ym says “I read responses to my concerns and others, but except for some clarifying statements and notations, authors' responses are not convincing enough. Also, while I now understand the concept of proposed work better than before, I do not think that it is explained and presented well enough.”
* Reviewer inpd says “would like to keep my original score”. | train | [
"GFsrmiZCD9Y",
"Rhz3FvQICjh",
"oAi_eIb0yDd",
"lfV3dBYP6Aw",
"4_t0-xuGFit",
"4skd3Z89fy",
"oJWY7cdt-nu",
"lHAtNOzBOlN",
"Sg418OWar_C",
"NNPvXW2zYY",
"G_trsgJq-su",
"EpXPRfDvOlq",
"h6CBQlaPu3K",
"BEfaKqXidf0",
"4uU8Kss_YGI",
"IyAyuQqYCq5",
"ACK8YWiZHj4",
"Y2f1Xrr4Ko",
"pe6yLfLNJFG"... | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
... | [
" Dear reviewer PUKs,\n\nThank you for your comments and suggestions on our paper. We are still willing to know what you think we could further improve our paper although the deadline is coming. Any concrete suggestions would help us to do better. It is so upset to us that you downgraded the score without any reaso... | [
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
2
] | [
"h6CBQlaPu3K",
"lHAtNOzBOlN",
"iclr_2022_gaYko_Y2_l",
"9-ctT56j7m",
"h6CBQlaPu3K",
"9-ctT56j7m",
"WRzCjL491sE",
"oAi_eIb0yDd",
"iclr_2022_gaYko_Y2_l",
"G_trsgJq-su",
"sl6Zd2KGBcz",
"BEfaKqXidf0",
"iclr_2022_gaYko_Y2_l",
"h6CBQlaPu3K",
"iclr_2022_gaYko_Y2_l",
"h6CBQlaPu3K",
"9-ctT56j7... |
iclr_2022_Dy8gq-LuckD | Recognizing and overcoming the greedy nature of learning in multi-modal deep neural networks | We hypothesize that due to the greedy nature of learning in multi-modal deep neural networks (DNNs), these models tend to rely on just one modality while under-utilizing the other modalities. We observe empirically that such behavior hurts their overall generalization. We validate our hypothesis by estimating the gain on the accuracy when the model has access to an additional modality. We refer to this gain as the conditional utilization rate of the modality. In the experiments, we consistently observe an imbalance in conditional utilization rate between modalities, across multiple tasks and architectures. Since conditional utilization rate cannot be computed efficiently during training, we introduce an efficient proxy based on the pace at which a DNN learns from each modality, which we refer to as conditional learning speed. We thus propose a training algorithm, balanced multi-modal learning, and demonstrate that it indeed addresses the issue of greedy learning. The proposed algorithm is found to improve the model’s generalization on three datasets: Colored MNIST (Kim et al., 2019), Princeton ModelNet40 (Wu et al., 2015), and NVIDIA Dynamic Hand Gesture Dataset (Molchanov et al., 2016). | Reject | PAPER: This paper presents an analysis of cross-modal interactions in multimodal models and proposes a method to help balance the multimodal learning process. The cross-modal analysis is based on measures related to conditional utilization rate, and the proposed approach is related to conditional learning speed.
DISCUSSION: The reviewers showed support for this line of research as a way to better understand the learning process for multimodal models. The discussion helped identify points that needed to be clarified and surfaced concerns about the experimental results. The authors addressed many of these issues in their response. All reviewers took the time to read these responses as well as the other reviews. Several reviewers are still expressing concerns with the experimental results.
SUMMARY: This is an important line of research, and the authors should continue this research endeavor. While the paper presents some interesting research hypotheses about multimodal learning, it seems that more experiments are needed to properly address these hypotheses. In its current form, the paper may not yet be ready for publication. | train | [
"idTUktOofMb",
"MD9SD4t5d6g",
"HnXG7qyvuSI",
"uJV2MB1aDNV",
"hmrYwjtnqw3",
"NqOV9nRFxU6",
"Q4DnYrihIRw",
"h0W55DWVZk0",
"fESXc3cm6XQ",
"2M7IbAAvPJ",
"e1UZsfH5MZ",
"yF7uqZ7nwHX",
"5ojlh5AFRfy",
"-E1Z_fcQ7pY",
"HfdgvViCX0"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper focuses on the multi-modal interaction problem, that multi-modal models tend to rely on just one modality while under-utilizing the other modalities. Since conditional utilization rate cannot be computed efficiently during training, they introduce an efficient proxy based on the pace at which a DNN lear... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
8
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"iclr_2022_Dy8gq-LuckD",
"h0W55DWVZk0",
"yF7uqZ7nwHX",
"2M7IbAAvPJ",
"-E1Z_fcQ7pY",
"5ojlh5AFRfy",
"iclr_2022_Dy8gq-LuckD",
"idTUktOofMb",
"-E1Z_fcQ7pY",
"fESXc3cm6XQ",
"HfdgvViCX0",
"5ojlh5AFRfy",
"iclr_2022_Dy8gq-LuckD",
"iclr_2022_Dy8gq-LuckD",
"iclr_2022_Dy8gq-LuckD"
] |
iclr_2022_j97zf-nLhC | Zero-Shot Coordination via Semantic Relationships Between Actions and Observations | An unaddressed challenge in zero-shot coordination is to take advantage of the semantic relationship between the features of an action and the features of observations. Humans take advantage of these relationships in highly intuitive ways. For instance, in the absence of a shared language, we might point to the object we desire or hold up fingers to indicate how many objects we want. To address this challenge, we investigate the effect of network architecture on the propensity of learning algorithms to make use of these relationships in human-compatible ways. We find that attention-based architectures that jointly process a featurized representation of the observation and the action have a better inductive bias for exploiting semantic relationships for zero-shot coordination. Excitingly, in a set of diagnostic tasks, these agents produce highly human-compatible policies, without requiring the symmetry relationships of the problems to be hard-coded. | Reject | The author response addressed some reviewer concerns, and reviewers generally increased their scores. However, there are important, unanswered concerns about the generalization of the model. The discussion raised the concern that, despite the paper's claim of "a specific class of higher order reasoning" emerging, the results suggest relatively simple strategies. This might not be a limitation of the approach, but of the evaluation scenario. So, this either requires a more nuanced view of the findings or further empirical evidence to support the claim.
"QkgLBSQHmTR",
"mi1ro3GHjv2",
"cT-vPoqjf2P",
"TAzaSIQ8x_",
"67CoOh6Xp-",
"ezsQo8nHQZ",
"wmnMT82sLQH",
"bMufrs_GH2X",
"NNksS68YBIL",
"9ZH9ensmD3",
"xWSvucZFGaZ",
"Eq1RbW2KUIm",
"TOliz1VbG8b",
"TyCanh1zgcC",
"sdamCG6qqTz",
"E_wTT5ZR-GP",
"D4Kgpf4ZICF"
] | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
" Dear Reviewer,\n\nWe are grateful that the reviewer decided to raise their score, and we would like to follow up to see if the reviewer had any more questions/feedback. In particular, we would like to get feedback on:\n\n- Does the reviewer feel that their concerns have been addressed properly in the current vers... | [
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
6,
-1,
-1,
-1,
6,
-1,
-1,
-1,
6
] | [
-1,
-1,
-1,
-1,
-1,
2,
-1,
-1,
3,
-1,
-1,
-1,
4,
-1,
-1,
-1,
3
] | [
"TyCanh1zgcC",
"iclr_2022_j97zf-nLhC",
"wmnMT82sLQH",
"sdamCG6qqTz",
"bMufrs_GH2X",
"iclr_2022_j97zf-nLhC",
"Eq1RbW2KUIm",
"E_wTT5ZR-GP",
"iclr_2022_j97zf-nLhC",
"ezsQo8nHQZ",
"NNksS68YBIL",
"D4Kgpf4ZICF",
"iclr_2022_j97zf-nLhC",
"xWSvucZFGaZ",
"TOliz1VbG8b",
"9ZH9ensmD3",
"iclr_2022... |
iclr_2022_voEpzgY8gsT | Additive Poisson Process: Learning Intensity of Higher-Order Interaction in Poisson Processes | We present the Additive Poisson Process (APP), a novel framework that can model the higher-order interaction effects of the intensity functions in Poisson processes using projections into lower-dimensional space. Our model combines the techniques in information geometry to model higher-order interactions on a statistical manifold and in generalized additive models to use lower-dimensional projections to overcome the effects from the curse of dimensionality. Our approach solves a convex optimization problem by minimizing the KL divergence from a sample distribution in lower-dimensional projections to the distribution modeled by an intensity function in the Poisson process. Our empirical results show that our model is able to use samples observed in the lower dimensional space to estimate the higher-order intensity function with extremely sparse observations. | Reject | The paper proposes a novel approach for estimating the high-dimensional intensity function of a Poisson process. The proposed approach builds on generalized additive models, using lower-dimensional projections.
The reviewers noted that, although the paper is well written, the position of this paper compared to earlier related work is unclear, and the empirical evaluation of the method should be strengthened. The authors clarified some points in their response, but the paper would still require further modifications to be ready for publication. I therefore recommend that this paper be rejected.
"dnMc0UUQ5_",
"OvHCUdl-TMI",
"zhSZghxZ8-9",
"wv6y3x6vsch",
"vb2T7Xt1hZm",
"6rwXjlpOEqp",
"uFnYIw9Jlo7",
"CTmwmknS9xR",
"zbSN1U60wkv",
"pQJFFmHTfF9",
"PBv9iW33MM-",
"lwz1PDefVl0",
"c6_YoXx6uW-",
"3zryyCAccY",
"f5WK0PQt-R",
"CkG-ffERTN"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your valuable comments. Here is the response to your concerns.\n\n> The problem of selecting the M and h is not addressed. The synthetic experiments should adopt the same strategy as those will be used in real applications for an honest assessment of the proposed approach.\n\n\nOur technique to sele... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
5,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
4
] | [
"OvHCUdl-TMI",
"lwz1PDefVl0",
"wv6y3x6vsch",
"zbSN1U60wkv",
"f5WK0PQt-R",
"CkG-ffERTN",
"6rwXjlpOEqp",
"vb2T7Xt1hZm",
"pQJFFmHTfF9",
"3zryyCAccY",
"c6_YoXx6uW-",
"PBv9iW33MM-",
"iclr_2022_voEpzgY8gsT",
"iclr_2022_voEpzgY8gsT",
"iclr_2022_voEpzgY8gsT",
"iclr_2022_voEpzgY8gsT"
] |
iclr_2022_YgR1rRWETI | Connectivity Matters: Neural Network Pruning Through the Lens of Effective Sparsity | Neural network pruning is a fruitful area of research with surging interest in high sparsity regimes. Benchmarking in this domain heavily relies on faithful representation of the sparsity of subnetworks, which has been traditionally computed as the fraction of removed connections (direct sparsity). This definition, however, fails to recognize unpruned parameters that detached from input or output layers of underlying subnetworks, potentially underestimating actual effective sparsity: the fraction of inactivated connections. While this effect might be negligible for moderately pruned networks (up to $10\times-100\times$ compression rates), we find that it plays an increasing role for thinner subnetworks, greatly distorting comparison between different pruning algorithms. For example, we show that effective compression of a randomly pruned LeNet-300-100 can be orders of magnitude larger than its direct counterpart, while no discrepancy is ever observed when using SynFlow for pruning (Tanaka et al., 2020). In this work, we adopt the lens of effective sparsity to reevaluate several recent pruning algorithms on common benchmark architectures (e.g., LeNet-300-100, VGG-19, ResNet-18) and discover that their absolute and relative performance changes dramatically in this new, and as we argue, more appropriate framework. To aim for effective, rather than direct, sparsity, we develop a low-cost extension to most pruning algorithms. Further, equipped with effective sparsity as a reference frame, we partially reconfirm that random pruning with appropriate sparsity allocation across layers performs as well or better than more sophisticated algorithms for pruning at initialization (Su et al., 2020). 
In response to this observation, using a simple analogy of pressure distribution in coupled cylinders from thermodynamics, we design novel layerwise sparsity quotas that outperform all existing baselines in the context of random pruning. | Reject | ### Summary
This work investigates effective sparsity: an assessment of the sparsity of pruned networks that accounts for the fact that unpruned neurons can still be completely disconnected through pruning. Hence, the effective sparsity of a network may be much higher than otherwise reported.
### Discussion
#### Strengths
- The paper studies an important metric that deserves additional attention in the community, where a change in metric may guide either the theory or practice of pruning.
- The paper evaluates direct versus empirical sparsity for a healthy number of pruning techniques.
#### Weaknesses
- While this paper appears to be the most direct study of effective sparsity at the moment, it is not the first. Appendix M of [1] defines effective sparsity and shows that direct and effective sparsity are similar for contemporary pruning-at-initialization techniques. However, that work does not evaluate random pruning. The work here will need to revise its novelty claims to account for these results, as its characterization that [1] only considers direct sparsity is incorrect.
- "Computing effective sparsity:" the procedure in question is similar to that of Appendix M in [1], thus its relationship should be detailed.
- With the primary observations residing in the regime of extremely sparse neural networks, the elements of the response (and of the last paragraph of the paper) that claim this regime is productive for ensembling should make a more prominent appearance in the introduction of the work.
[1] Pruning Neural Networks at Initialization: Why Are We Missing the Mark?
Jonathan Frankle, Gintare Karolina Dziugaite, Daniel Roy, Michael Carbin. ICLR, 21
### Recommendation
I recommend Reject. Generally, the paper is well-written and the empirical characterization of direct versus effective sparsity is thorough (except for ResNet-50 results). However, the results and the language around these results need significant rescoping to account for novelty and the relation of the work to an area in which it is anticipated that these results will change theory, practice, or thinking (e.g., ensembling).
Though I cannot speak for future reviewers, IMO, an extension of the results here to ResNet-50+ImageNet should suffice to establish the extent of the discrepancy between direct and effective sparsity. However, to satisfy additional demands from reviewers for more practical relevance, I suggest an evaluation that demonstrates a consequential difference in behavior for a task that maps more closely to the anticipated area of impact (e.g., ensembling) | train | [
"8y4U9IXKdpV",
"ZYLm8zqegFR",
"1W7keC6B2WC",
"9Ae4yFqqjUX",
"DohfrekY5fH",
"-YY6U59fbS8",
"IDbpeiwbtbN",
"TBADJzRYY6C",
"OqNgZyV7Yd3",
"41j5viIyOu1",
"wH1dZtTylC",
"78D5thnfHKs",
"VtEAE5EP-vK"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for answering to my concerns and for clarifying the novel contributions of this paper. Indeed, I was thinking that this paper is the first one which formulates clearly the concept of \"effective sparsity\", but I wasn't completely sure. At the same time, even if the paper in its current state ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"DohfrekY5fH",
"9Ae4yFqqjUX",
"IDbpeiwbtbN",
"TBADJzRYY6C",
"78D5thnfHKs",
"wH1dZtTylC",
"VtEAE5EP-vK",
"41j5viIyOu1",
"-YY6U59fbS8",
"iclr_2022_YgR1rRWETI",
"iclr_2022_YgR1rRWETI",
"iclr_2022_YgR1rRWETI",
"iclr_2022_YgR1rRWETI"
] |
iclr_2022_qfaNCudAnji | Deep Q-Network with Proximal Iteration | We employ Proximal Iteration for value-function optimization in reinforcement learning. Proximal Iteration is a computationally efficient technique that enables us to bias the optimization procedure towards more desirable solutions. As a concrete application of Proximal Iteration in deep reinforcement learning, we endow the objective function of the Deep Q-Network (DQN) agent with a proximal term to ensure that the online-network component of DQN remains in the vicinity of the target network. The resultant agent, which we call DQN with Proximal Iteration, or DQNPro, exhibits significant improvements over the original DQN on the Atari benchmark. Our results accentuate the power of employing sound optimization techniques for deep reinforcement learning. | Reject | The paper applies proximal iteration to Q-learning, which significantly improves the performance of DQN. Reviewers agreed the paper is not ready for publication, for a couple reasons. DQN is quite far from current state-of-the-art. Improvements therefore need to be well-founded to be of broad interest. If the algorithm that is being improved is not competitive, there should be more general lessons that can be extracted from how and why the improvement works. Unfortunately, the reviewers felt that there was insufficient understanding of why proximal iteration helps. | train | [
"8A4OMcRbmZn",
"jxKaOVn90VR",
"zRLSTJruwXx",
"B__607zch3y",
"hUOiogh9t6",
"SvKTylavhrH",
"NcAo_yBhg1q",
"JKIXoOg0SQQ",
"bpoqdA2zTpg",
"NJOkuw-WkT_",
"LS-m1fRmr8p",
"_IRReKilhqb",
"U-ND4TyoI9T"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Like other reviewers, my biggest concern is the lack of a more in-depth analysis of the effectiveness of DQNPro. While it is nice to improve the performance of DQN by changing a few lines of code, I think it is more important to explain why and when it works. Therefore, I keep my original score.",
" Well, at a ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
5
] | [
"SvKTylavhrH",
"JKIXoOg0SQQ",
"hUOiogh9t6",
"NcAo_yBhg1q",
"_IRReKilhqb",
"LS-m1fRmr8p",
"U-ND4TyoI9T",
"NJOkuw-WkT_",
"iclr_2022_qfaNCudAnji",
"iclr_2022_qfaNCudAnji",
"iclr_2022_qfaNCudAnji",
"iclr_2022_qfaNCudAnji",
"iclr_2022_qfaNCudAnji"
] |
iclr_2022_LBv-JtAmm4P | Is Heterophily A Real Nightmare For Graph Neural Networks on Performing Node Classification? | Graph Neural Networks (GNNs) extend basic Neural Networks (NNs) by using the graph structures based on the relational inductive bias (homophily assumption). Though GNNs are believed to outperform NNs in real-world tasks, performance advantages of GNNs over graph-agnostic NNs seem not generally satisfactory. Heterophily has been considered as a main cause and numerous works have been put forward to address it. In this paper, we first show that not all cases of heterophily are harmful for GNNs with aggregation operation. Then, we propose new metrics based on a similarity matrix which considers the influence of both graph structure and input features on GNNs. The metrics demonstrate advantages over the commonly used homophily metrics in tests on synthetic graphs. From the metrics and the observations, we find that some cases of harmful heterophily can be addressed by diversification operation. By using this fact and knowledge of filterbanks, we propose the Adaptive Channel Mixing (ACM) framework to adaptively exploit aggregation, diversification and identity channels in each GNN layer, in order to address harmful heterophily. We validate the ACM-augmented baselines with 10 real-world node classification tasks. They consistently achieve significant performance gain and exceed the state-of-the-art GNNs on most of the tasks without incurring significant computational burden. | Reject | The reviewers agree that the paper is addressing an interesting problem. However, the authors analyze the effect of heterophily on GNN for node classification. The authors simplify the analysis by removing the nonlinearity in the GNN model and derive some theoretical results. However, the analysis is very specific to the simplified version of GNN, and the link to later proposed solution is also weak. 
Furthermore, a more significant improvement in the experiments would also make the paper more convincing.
"qwg8wouejV5",
"qxNLZ2bQQH0",
"dd7yvNN5dOM",
"NL2PnlONWFY",
"a2f7eZqG03",
"WR8X2tEzAzy",
"2oR-nTg1Bwq",
"4cb0pDglrXB",
"mV5l0km9_db",
"3w7-7xbWN1r",
"6qJ-FgP8UIB",
"4yUbO5C2UjK",
"Gydl9nHqoUZ",
"EXZsB2Ep7nN"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" \n\n\n**Q1**: I am wondering how you generate those curves (except for H_edge) with smooth points along the x-axis. \n\n**R1**:\nWe reorder the value of the metrics in ascend order for x-axis.\n\nHere is a simplified example. Suppose we generate graphs with $H_\\text{edge}=0.1,0.5,0.9$, the test accuracy of GCN o... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
8,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
3
] | [
"qxNLZ2bQQH0",
"4cb0pDglrXB",
"3w7-7xbWN1r",
"mV5l0km9_db",
"6qJ-FgP8UIB",
"4yUbO5C2UjK",
"EXZsB2Ep7nN",
"Gydl9nHqoUZ",
"iclr_2022_LBv-JtAmm4P",
"iclr_2022_LBv-JtAmm4P",
"iclr_2022_LBv-JtAmm4P",
"iclr_2022_LBv-JtAmm4P",
"iclr_2022_LBv-JtAmm4P",
"iclr_2022_LBv-JtAmm4P"
] |
iclr_2022_xbx7Hxjbd79 | COLA: Consistent Learning with Opponent-Learning Awareness | Optimization problems with multiple, interdependent losses, such as Generative Adversarial Networks (GANs) or multi-agent RL, are commonly formalized as differentiable games.
Learning with Opponent-Learning Awareness (LOLA) introduced opponent shaping to this setting. More specifically, LOLA introduced an augmented learning rule that accounts for the agent's influence on the anticipated learning step of the other agents. However, the original LOLA formulation is inconsistent because LOLA models other agents as naive learners rather than LOLA agents.
In previous work, this inconsistency was stated to be the root cause of LOLA's failure to preserve stable fixed points (SFPs). We provide a counterexample by investigating cases where Higher-Order LOLA (HOLA) converges.
Furthermore, we show that, contrary to claims made, Competitive Gradient Descent (CGD) does not solve the consistency problem.
Next, we propose a new method called Consistent LOLA (COLA), which learns update functions that are consistent under mutual opponent shaping. Lastly, we empirically compare the performance and consistency of HOLA, LOLA, and COLA on a set of general-sum learning games. | Reject | The main contribution of this paper is that it points out incorrect claims in the literature of multi-agent RL and provides new insight on the failure modes of current methods. Specifically, this paper investigates the inconsistency problem in LOLA (meaning it assumes the other agent as a naive learner, thus not converging to SFPs in some games). It then shows problems with two fixes in the literature: 1) HOLA addresses the inconsistency problem only when it converges; otherwise, HOLA does not resolve the issue. 2) GCD does not resolve the issue although it claims to do so. This paper then proposes a method COLA that fixes the inconsistency issue, which outperforms HOLA when it diverges. Reviewers generally agree that the insight from this work is interesting and important for the field. However, there were some concern on both the theory and the experiments. While the updated version addresses some of the concerns, it also made significant changes to both the theoretical and the empirical sections, and would benefit from another round of close review. Thus, I think the current version of this work is borderline. | train | [
"11jsGkdXa28",
"5N2qElPjS4c",
"eLz4N4hKHRu",
"vBeiiorUt7g",
"Robp4VzNPp",
"34hBLsS9cyF",
"UJDu4rDpHX",
"4orWoX3Oj0",
"d55vZX0FHJF",
"tAYJLe5kcD",
"7sxTsZ6h8pZ",
"1r4yJk_XN4j",
"115C384BGpz",
"chfWdb0JKBH",
"dWKHzTLFDo",
"BJbP-WrVW-w",
"d3byjJci3P",
"TXWFLU0PBU",
"0-UKeGkXdGo",
... | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your constructive feedback and for acknowledging the paper’s potential.\n \n> such as stronger theoretical properties (not limited to a particular game (e.g., Tandem))\n\nRegarding our use of particular games in proofs, we’d like to note that, e.g., our result in Proposition 4 shows that COLA does not ... | [
-1,
-1,
3,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
8
] | [
-1,
-1,
2,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"5N2qElPjS4c",
"UJDu4rDpHX",
"iclr_2022_xbx7Hxjbd79",
"d55vZX0FHJF",
"eLz4N4hKHRu",
"iclr_2022_xbx7Hxjbd79",
"4orWoX3Oj0",
"d55vZX0FHJF",
"d3byjJci3P",
"eLz4N4hKHRu",
"34hBLsS9cyF",
"0-UKeGkXdGo",
"iclr_2022_xbx7Hxjbd79",
"gPLgYsZ-2cD",
"eLz4N4hKHRu",
"34hBLsS9cyF",
"0-UKeGkXdGo",
... |
iclr_2022_sWbXSWzHPa | Invariant Learning with Partial Group Labels | Learning invariant representations is an important requirement in training machine learning models that are driven by spurious correlations in the datasets. These spurious correlations, between input samples and the target labels, wrongly direct the neural network predictions resulting in poor performance on certain groups, especially the minority groups. Robust training against these spurious correlations requires the knowledge of group membership for every sample. Such a requirement is impractical in situations where the data labelling efforts for minority or rare groups is significantly laborious or where the individuals comprising the dataset choose to conceal sensitive information. On the other hand, the presence of such data collection efforts result in datasets that contain partially labelled group information. Recent works have tackled the fully unsupervised scenario where no labels for groups are available. Thus, we aim to fill the missing gap in the literature by tackling a more realistic setting that can leverage partially available sensitive or group information during training. First, we construct a constraint set and derive a high probability bound for the group assignment to belong to the set. Second, we propose an algorithm that optimizes for the worst-off group assignments from the constraint set. Through experiments on image and tabular datasets, we show improvements in the minority group’s performance while preserving overall aggregate accuracy across groups. | Reject | This paper introduces a novel method for learning distributional robust machine learning models when only partial group labels are available to improve performance of learning algorithms on minority groups.
Pros: The paper is well motivated and well written. The ideas are interesting. Most work on distributionally robust optimization (DRO) is in unsupervised settings where group information is not available. The authors provide an approach for the semi-supervised setting through a constraint set.
Cons:
As pointed out by reviewers, the empirical results do not show better performance than unsupervised baselines.
The authors claim one of the benefits of their proposed approach is a one-stage approach, in contrast to competing models that require a two-stage approach; hence, allowing their approach to reduce compute time. It’ll be helpful to strengthen this point by showing time comparisons.
Missing labels in this case due to participants withholding sensitive information is not an MCAR case, but the proposed work makes an MCAR assumption. It’ll help to add a discussion and point out such limitations of the approach.
Summary: This paper has novel and interesting ideas, but still has several issues as pointed out by the reviewers before it is ready for publication. | val | [
"84K5VmlID_r",
"LwQOLFXrkT",
"pi-l9zEJx3",
"PzGJ0kTD6oI",
"d59F-gWubDC",
"WRlXHq6HOfj",
"PNeJJ_knLeI",
"ej7citQ7H4-",
"4knfdzwatfv",
"9K6UFTSVNeR",
"aPGBLObYyXk",
"oMW9x8P2NM",
"OdsVhC7glXU",
"qXpMWPP1Fp",
"xAirDfRhxjQ",
"0tV0SIRoUGj"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We are thankful to the reviewer for providing suggestions to include a discussion section on estimated/misspecified $\\hat p$. Further, it is a nice suggestion to devise a metric specific to safety-critical models.\n\nWith regards to two-stage models like JTT/GEORGE/EIIL, we provide a few additional points of com... | [
-1,
-1,
-1,
-1,
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
-1,
-1,
-1,
-1,
3,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4
] | [
"9K6UFTSVNeR",
"4knfdzwatfv",
"WRlXHq6HOfj",
"ej7citQ7H4-",
"iclr_2022_sWbXSWzHPa",
"oMW9x8P2NM",
"iclr_2022_sWbXSWzHPa",
"OdsVhC7glXU",
"qXpMWPP1Fp",
"aPGBLObYyXk",
"0tV0SIRoUGj",
"d59F-gWubDC",
"PNeJJ_knLeI",
"xAirDfRhxjQ",
"iclr_2022_sWbXSWzHPa",
"iclr_2022_sWbXSWzHPa"
] |
iclr_2022_SF9o3-yP1WR | Robust and Personalized Federated Learning with Spurious Features: an Adversarial Approach | A common approach for personalized federated learning is fine-tuning the global machine learning model to each local client. While this addresses some issues of statistical heterogeneity, we find that such personalization methods are often vulnerable to spurious features, leading to bias and diminished generalization performance. However, debiasing the personalized models under spurious features is difficult. To this end, we propose a strategy to mitigate the effect of spurious features based on our observation that the global model in the federated learning step has a low accuracy disparity due to statistical heterogeneity. Then, we estimate and mitigate the accuracy disparity of personalized models using the global model and adversarial transferability in the personalization step. Empirical results on MNIST, CelebA, and Coil20 datasets show that our method reduces the accuracy disparity of the personalized model on the bias-conflicting data samples from 15.12% to 2.15%, compared to existing personalization approaches, while preserving the benefit of enhanced average accuracy from fine-tuning. | Reject | The paper talks about a novel setting in Federated Learning and argues that personalization methods may cause the personalized models to overfit on spurious features, thereby increasing the accuracy disparity compared to the global model. To this end, the authors propose a debiasing strategy using a global model and adversarial transferability.
There were some positive opinions about the problem being interesting. However, reviewers had several concerns about the validity of the assumptions and the hand-wavy arguments used in the solutions for the existence of adversarial transferability. Overall, the settings and the need for removing personalization bias need to be validated more convincingly and rigorously, with concrete real scenarios and experiments. | train | [
"YWcD1YcRiBO",
"IdsqZpAvzOP",
"2h1OG9Htht",
"uI-pORWZ6lW",
"E_lXkRADJp",
"hJpD_niDyxu",
"qP-fv1VOQn3",
"teFxg2An8Iz"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank all the reviewers for their comments. The paper is revised, and the revision summary is listed below:\n\n- We provide additional theoretical analysis in Section 5 and the proofs in Appendix B, showing that the disparity and the transferability are connected.\n- The section of the weight regularization te... | [
-1,
-1,
-1,
-1,
-1,
3,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"iclr_2022_SF9o3-yP1WR",
"qP-fv1VOQn3",
"hJpD_niDyxu",
"teFxg2An8Iz",
"hJpD_niDyxu",
"iclr_2022_SF9o3-yP1WR",
"iclr_2022_SF9o3-yP1WR",
"iclr_2022_SF9o3-yP1WR"
] |
iclr_2022_wxVpa5z4DU1 | Accuracy-Privacy Trade-off in Deep Ensemble: A Membership Inference Perspective | Deep ensemble learning has been shown to improve accuracy by training multiple neural networks and fusing their outputs. Ensemble learning has also been used to defend against membership inference attacks that undermine privacy. In this paper, we empirically demonstrate a trade-off between these two goals, namely accuracy and privacy (in terms of membership inference attacks), in deep ensembles. Using a wide range of datasets and model architectures, we show that the effectiveness of membership inference attacks also increases when ensembling improves accuracy. To better understand this trade-off, we study the impact of various factors such as prediction confidence and agreement between models that constitute the ensemble. Finally, we evaluate defenses against membership inference attacks based on regularization and differential privacy. We show that while these defenses can mitigate the effectiveness of the membership inference attack, they simultaneously degrade ensemble accuracy. We illustrate similar trade-off in more advanced and state-of-the-art ensembling techniques, such as snapshot ensembles and diversified ensemble networks. The source code is available in supplementary materials. | Reject | The authors in the paper perform empirical studies to investigate the trade-off between accuracy and privacy (measured by membership inference attacks) in deep ensembles. They find out that the level of correct agreement among models is the most dominant factor that improves the performance of MI attacks in deep ensembles. They support their claim by visualizing the distribution shifts of correct agreement in train/test examples. They further implement a variety of existing defenses, such as differential privacy and regularizations, etc., to investigate the effects of existing defense mechanisms. 
Overall, the paper is well-written and the experiments are well conducted.
While these findings are interesting, they do not reveal anything particularly useful or surprising about deep ensemble learning. It is not clear what the contribution is to the membership inference attack literature or the private machine learning literature: the authors do not propose anything new to make the attacks or the defenses stronger. | train | [
"g2xmzyyaqbw",
"WwG0SoypFBZ",
"fYiqDOkqzkf",
"orK5qfteykh",
"V432_5m08Q7",
"edXwG4CQBej",
"59FesT4FbT1",
"vZnMjULjPz",
"R6GN8OoT6ao",
"hPvnbCpgko4",
"UL9N08GOa5n",
"E2m8f28g47j",
"i-v4iTO-Piz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This work analyzes the accuracy-privacy trade-off in ensemble learning by performing model inference attacks. The key finding of the paper is that the presence of an ensemble (that averages the predictions of individual learners) exacerbates the disparity between the confidence distribution of samples that were se... | [
5,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6
] | [
4,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2022_wxVpa5z4DU1",
"UL9N08GOa5n",
"R6GN8OoT6ao",
"59FesT4FbT1",
"iclr_2022_wxVpa5z4DU1",
"vZnMjULjPz",
"V432_5m08Q7",
"i-v4iTO-Piz",
"E2m8f28g47j",
"g2xmzyyaqbw",
"g2xmzyyaqbw",
"iclr_2022_wxVpa5z4DU1",
"iclr_2022_wxVpa5z4DU1"
] |
iclr_2022_syzTg1vyBtL | Congested bandits: Optimal routing via short-term resets | For traffic routing platforms, the choice of which route to recommend to a user depends on the congestion on these routes -- indeed, an individual's utility depends on the number of people using the recommended route at that instance. Motivated by this, we introduce the problem of Congested Bandits where each arm's reward is allowed to depend on the number of times it was played in the past $\Delta$ timesteps. This dependence on past history of actions leads to a dynamical system where an algorithm's present choices also affect its future pay-offs, and requires an algorithm to plan for this. We study the congestion aware formulation in the multi-armed bandit (MAB) setup and in the contextual bandit setup with linear rewards. For the multi-armed setup, we propose a UCB style algorithm and show that its policy regret scales as $\tilde{O}(\sqrt{K \Delta T})$. For the linear contextual bandit setup, our algorithm, based on an iterative least squares planner, achieves policy regret $\tilde{O}(\sqrt{dT} + \Delta)$. From an experimental standpoint, we corroborate the no-regret properties of our algorithms via a simulation study. | Reject | This paper introduces a new structured bandit problem called congested bandits, where the expected reward for an arm is a decaying function of how many times it has been played recently. This model aims to address problems such as route recommendation, in which recommended routes tend to get congested (hence yield lower rewards). Different from prior work on bandits with non-stationary reward distributions, the effect of congestion in this model resets after Delta time steps. The authors show that this problem can be formulated as a structured MDP and propose a variant of UCRL2 that learns to recommend the optimal arm for each congestion state. They show that the proposed algorithm achieves a policy regret bound that significantly improves upon UCRL2. 
They also propose a variant of their algorithm tailored for the linear stochastic contextual bandit setup with the associated analysis.
Unfortunately, this is a rather niche problem formulation, and it fails to truly capture congestion models for traffic routing platforms (or other practical routing problems), which serve as the main motivation for the paper. Moreover, the novelty is limited: the setting is very close to existing non-stationary bandit models, and the proposed algorithms are straightforward extensions of existing strategies. A possible way to support the novelty of the setting would be to improve its theoretical understanding through a lower-bound analysis, which is currently lacking from the paper. Although this paper contains interesting and well-articulated ideas, the contributions are not sufficient. | train | [
"WIV3AXqhush",
"OfgITDhTHd9",
"n24WH3yo7sn",
"n9N7qrKMIjH",
"JG3yMVwfIr",
"Y0E8yDus7r-",
"uQDOkLL4mfj",
"PL2xBUzuct9",
"QmYBhBCE5hc"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the response. My questions are mostly addressed. I will keep the score, but I would recommend the authors to further clarify the novelty of their algorithm design and the theoretical analysis. For example, regarding the way the proposed algorithm differs from UCRL2, is it a direct extension or it has s... | [
-1,
-1,
-1,
-1,
-1,
8,
3,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
3
] | [
"JG3yMVwfIr",
"QmYBhBCE5hc",
"PL2xBUzuct9",
"uQDOkLL4mfj",
"Y0E8yDus7r-",
"iclr_2022_syzTg1vyBtL",
"iclr_2022_syzTg1vyBtL",
"iclr_2022_syzTg1vyBtL",
"iclr_2022_syzTg1vyBtL"
] |
iclr_2022_9dn7CjyTFoS | One Thing to Fool them All: Generating Interpretable, Universal, and Physically-Realizable Adversarial Features | It is well understood that modern deep networks are vulnerable to adversarial attacks. However, conventional methods fail to produce adversarial perturbations that are intelligible to humans, and they pose limited threats in the physical world. To study feature-class associations in networks and better understand the real-world threats they face, we develop feature-level adversarial perturbations using deep image generators and a novel optimization objective. We term these feature-fool attacks. We show that they are versatile and use them to generate targeted feature-level attacks at the ImageNet scale that are simultaneously interpretable, universal to any source image, and physically-realizable. These attacks can also reveal spurious, semantically-describable feature/class associations, and we use them to guide the design of ``copy/paste'' adversaries in which one natural image is pasted into another to cause a targeted misclassification. | Reject | In this manuscript, the authors develop feature-fool attacks with feature-level adversarial perturbations using deep image generators and a novel optimization objective. They further show that the feature-fool attacks are versatile and can generate targeted feature-level attacks at the ImageNet scale that are simultaneously interpretable, universal to any source image, and physically-realizable.
The reviewers agree that the paper is well-motivated and the authors have addressed some concerns.
However, the reviewers were still not satisfied on some of their concerns and kept their initial scores.
In comparison with the other manuscripts I'm handling, I have to recommend rejecting it! | test | [
"lkWZwlfkbu6",
"okMoHL3fJJ_",
"Ojvq2qy1Bj",
"vbGP17naZiu",
"HNjevL5gYl0",
"V_ixAn723Lg",
"IdhQHPbHfMA",
"E3u1KMSRB_",
"nUW0ze8-nQO",
"FtqDILeSKm4",
"63kXUg7ciO",
"xvAfqeBsa1m",
"5xZRDaPDe8Q"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response. The motivations are now clearer. I've read other reviewers' responses and I agree with some of their concerns. Based on these, I keep my score.",
" Having read the other reviews, the response from the authors and the updated paper I do not feel as if my concerns were sufficiently ad... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
3,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
5,
4
] | [
"HNjevL5gYl0",
"63kXUg7ciO",
"IdhQHPbHfMA",
"iclr_2022_9dn7CjyTFoS",
"FtqDILeSKm4",
"E3u1KMSRB_",
"5xZRDaPDe8Q",
"xvAfqeBsa1m",
"63kXUg7ciO",
"iclr_2022_9dn7CjyTFoS",
"iclr_2022_9dn7CjyTFoS",
"iclr_2022_9dn7CjyTFoS",
"iclr_2022_9dn7CjyTFoS"
] |
iclr_2022_NrB52z3eOTY | Effective Uncertainty Estimation with Evidential Models for Open-World Recognition | Reliable uncertainty estimation is crucial when deploying a classifier in the wild. In this paper, we tackle the challenge of jointly quantifying in-distribution and out-of-distribution (OOD) uncertainties. To this end, we leverage the second-order uncertainty representation provided by evidential models and we introduce KLoS, a Kullback–Leibler divergence criterion defined on the class-probability simplex. By keeping the full distributional information, KLoS captures class confusion and lack of evidence in a single score. A crucial property of KLoS is to be a class-wise divergence measure built from in-distribution samples and to not require OOD training data, in contrast to current second-order uncertainty measures. We further design an auxiliary neural network, KLoSNet, to learn a refined criterion directly aligned with the evidential training objective. In the realistic context where no OOD data is available during training, our experiments show that KLoSNet outperforms first-order and second-order uncertainty measures to simultaneously detect misclassifications and OOD samples. When training with OOD samples, we also observe that existing measures are brittle to the choice of the OOD dataset, whereas KLoS remains more robust. | Reject | This submission received 4 diverging ratings: 6, 5, 5, 3. On the positive side, reviewers appreciated the central idea and a quality manuscript. At the same time, they have raised important concerns around unfair comparisons with baselines, experiments not fully supporting the claims and lack of comparisons with some prior methods. After discussions with the authors most reviewers stayed with their original ratings.
The AC agrees that the weaknesses in this case outweigh the strengths. The final recommendation is to reject. | train | [
"RSLcJS-S-32",
"xwS1Vfc-OXi",
"NtC_7E-UaH3",
"JQIbLWLj2ct",
"7qPoLEW89-",
"xdgBwLjbrvh",
"LpfswUoWUAu",
"jn0jDgjpq0M",
"ZBesicWqTe",
"Fe4ubzwoc4F",
"gwPDCStn88P",
"tPLgzcEsyOz",
"sRd3jCfrytN",
"UdGjsiWkLSf",
"oybDINJSp_"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer rode, we thank you for your feedback.\n\n- Regarding resources comparison, we respectfully disagree with your comment: during confidence training, computational resources are used to learn the confidence network, which is separate from the classifier. In addition, the ablation study shows that the c... | [
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
3
] | [
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
3
] | [
"xwS1Vfc-OXi",
"jn0jDgjpq0M",
"JQIbLWLj2ct",
"gwPDCStn88P",
"iclr_2022_NrB52z3eOTY",
"tPLgzcEsyOz",
"iclr_2022_NrB52z3eOTY",
"ZBesicWqTe",
"oybDINJSp_",
"UdGjsiWkLSf",
"sRd3jCfrytN",
"7qPoLEW89-",
"iclr_2022_NrB52z3eOTY",
"iclr_2022_NrB52z3eOTY",
"iclr_2022_NrB52z3eOTY"
] |
iclr_2022_BduNVoPyXBK | Task-driven Discovery of Perceptual Schemas for Generalization in Reinforcement Learning | Deep reinforcement learning (Deep RL) has recently seen significant progress in developing algorithms for generalization. However, most algorithms target a single type of generalization setting. In this work, we study generalization across three disparate task structures: (a) tasks composed of spatial and temporal compositions of regularly occurring object motions; (b) tasks composed of active perception of and navigation towards regularly occurring 3D objects; and (c) tasks composed of navigating through sequences of regularly occurring object-configurations. These diverse task structures all share an underlying idea of compositionality: task completion always involves combining reoccurring segments of task-oriented perception and behavior. We hypothesize that an agent can generalize within a task structure if it can discover representations that capture these reoccurring task-segments. For our tasks, this corresponds to representations for recognizing individual object motions, for navigation towards 3D objects, and for navigating through object-configurations. Taking inspiration from cognitive science, we term representations for reoccurring segments of an agent's experience, "perceptual schemas". We propose Composable Perceptual Schemas (CPS), which learns a composable state representation where perceptual schemas are distributed across multiple, relatively small recurrent "subschema" modules. Our main technical novelty is an expressive attention function that enables subschemas to dynamically attend to features shared across all positions in the agent's observation. Our experiments indicate our feature-attention mechanism enables CPS to generalize better than recurrent architectures that attend to observations with spatial attention. 
| Reject | This paper develops a mechanism for learning modular state representations in RL that organize recurring patterns into composable schemas. The approach combines modular RNNs as in RIMs (Goyal et al., 2020) with a dynamic feature attention mechanism. There were a variety of concerns in the initial reviews that were addressed by the authors through a set of clarifications and improved empirical analysis, substantially improving the paper. However, there still remain some issues in clarity of presentation and inconsistent empirical results, especially in the form of clear take-aways from the empirical analysis and broader insights from the paper, as detailed in the individual reviews. The authors are encouraged to take these aspects into consideration in revising their manuscript. | train | [
"s-2A3pDP_DM",
"8UyWkCiNedG",
"lDR6JDpj8pO",
"kZrBdFL57ZB",
"yJZ9s_76FIV",
"Y0m3Zz8Mgh",
"ON_Kpmuyxi4",
"KBwYyGxZMZL",
"HwYiVkm4S1y",
"UETR3nBBigR",
"eO8RmYUhz9c",
"rBErP-5maA",
"nFbV9eFkyg-"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper proposes Composable Perceptual Schemas (CPS), a modular state representation learning architecture for reinforcement learning that combines modular RNNs as in RIMs (Goyal et al., 2020) with a dynamic feature attention mechanism. It is hypothesized that this lets CPS exhibit multiple different kinds of g... | [
6,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2022_BduNVoPyXBK",
"KBwYyGxZMZL",
"iclr_2022_BduNVoPyXBK",
"UETR3nBBigR",
"ON_Kpmuyxi4",
"iclr_2022_BduNVoPyXBK",
"nFbV9eFkyg-",
"HwYiVkm4S1y",
"s-2A3pDP_DM",
"eO8RmYUhz9c",
"lDR6JDpj8pO",
"Y0m3Zz8Mgh",
"iclr_2022_BduNVoPyXBK"
] |
iclr_2022_9BIN1yr5Gp | Parallel Deep Neural Networks Have Zero Duality Gap | Training deep neural networks is a well-known highly non-convex problem. In recent works, it is shown that there is no duality gap for regularized two-layer neural networks with ReLU activation, which enables global optimization via convex programs. For multi-layer linear networks with vector outputs, we formulate convex dual problems and demonstrate that the duality gap is non-zero for depth three and deeper networks. However, by modifying the deep networks to more powerful parallel architectures, we show that the duality gap is exactly zero. Therefore, strong convex duality holds, and hence there exist equivalent convex programs that enable training deep networks to global optimality. We also demonstrate that the weight decay regularization in the parameters explicitly encourages low-rank solutions via closed-form expressions. For three-layer non-parallel ReLU networks, we show that strong duality holds for rank-1 data matrices, however, the duality gap is non-zero for whitened data matrices. Similarly, by transforming the neural network architecture into a corresponding parallel version, the duality gap vanishes. | Reject | This is an interesting paper which further extends the duality theory of deep networks. Unfortunately, reviewers had many concerns about presentation, technical details, and missing prior work. I will add that a large volume of relevant implicit bias work (e.g., in the setting of deep linear networks, mirroring Proposition 2) is completely uncited (e.g., works by Arora et al., Soudry et al., Ji et al.), despite being earlier than many of the works which are currently cited. As such, I urge the authors to continue in their valuable line of work, taking into consideration all of these points and also the reviewer comments.
Separately, I note that there is a violation of the blind policy in the current revision: grant information was included. The PC decided this was a minor violation that should not affect the review process; however, their decision could easily have been otherwise. I urge the authors to be exceptionally careful with such issues in the future. | train | [
"5dx_rLrWgZ",
"KC3pc6dDOs",
"ozb-fEji3rI",
"kZujXZ67Dms",
"xV8bXKmyffr",
"TETRMATpAU9",
"sxQyg69xD8p",
"FHXWAl5FSOS",
"jVFzKXK7pQm",
"CzZjHPVImN5"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" I have checked the revised version of the paper. Thanks for completing the proof of Proposition 4 and 6, and correcting typos. \nI believe that a discussion about the connection with mean field regime will be helpful in increasing the importance of the paper because this regime can explain the feature learning as... | [
-1,
5,
-1,
5,
-1,
-1,
-1,
-1,
5,
6
] | [
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
4,
3
] | [
"sxQyg69xD8p",
"iclr_2022_9BIN1yr5Gp",
"xV8bXKmyffr",
"iclr_2022_9BIN1yr5Gp",
"kZujXZ67Dms",
"CzZjHPVImN5",
"KC3pc6dDOs",
"jVFzKXK7pQm",
"iclr_2022_9BIN1yr5Gp",
"iclr_2022_9BIN1yr5Gp"
] |
iclr_2022_J9_7t9m8xRj | Diverse and Consistent Multi-view Networks for Semi-supervised Regression | Label collection is costly in many applications, which poses the need for label-efficient learning. In this work, we present Diverse and Consistent Multi-view Networks (DiCoM) — a novel semi-supervised regression technique based on a multi-view learning framework. DiCoM combines diversity with consistency — two seemingly opposing yet complementary principles of multi-view learning - based on underlying probabilistic graphical assumptions. Given multiple deep views of the same input, DiCoM encourages a negative correlation among the views' predictions on labeled data, while simultaneously enforces their agreement on unlabeled data. DiCoM can utilize either multi-network or multi-branch architectures to make a trade-off between computational cost and modeling performance. Under realistic evaluation setups, DiCoM outperforms competing methods on tabular and image data. Our ablation studies confirm the importance of having both consistency and diversity. | Reject | The work proposed a multi-view learning framework that combines diversity and consistency objectives for semi-supervised learning. While reviewers appreciated the simplicity of the proposed method, they raised concerns about the limited contribution on top of the original Bayesian Co-Training work. Although the authors provided detailed rebuttals that addressed some of the reviewers' concerns, and one reviewer did raise their score, the other reviewers' scores remained unchanged. Given the work is closely based on the BCT work, I would like to see more detailed analyses of the importance of the changes brought in this work, such as changing the base learners and the introduction of diversity objectives, as pointed out by the authors. | val | [
"No8kVKQOk5c",
"6mygeKE0qGq",
"PRCYjGsiLg-",
"VqusWT5vNct",
"k5inrXet9L",
"ETcRjs4X4kk",
"3gbVMXzeRus",
"a84z8X07qd",
"DlaKVVKNfo",
"5upfcvs_qIK",
"sfsRBI5CKJk",
"pEAZ6Y-_fnX",
"JGx2sg7PWW",
"aB1ZG_-a85x",
"-HTTkYf-7QV",
"QeTv1lC_K2Y",
"RjnylkcTvTB"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for carefully reading all our responses and for giving us very helpful comments. \n\nRegarding the following suggestion\n>It is nice that DiCoM can be used for high-dimensional data (Section 4.2) and it would be great if the author can further investigate it and highlight this contribution.\... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
5
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"PRCYjGsiLg-",
"iclr_2022_J9_7t9m8xRj",
"ETcRjs4X4kk",
"k5inrXet9L",
"iclr_2022_J9_7t9m8xRj",
"6mygeKE0qGq",
"6mygeKE0qGq",
"RjnylkcTvTB",
"RjnylkcTvTB",
"RjnylkcTvTB",
"QeTv1lC_K2Y",
"QeTv1lC_K2Y",
"QeTv1lC_K2Y",
"-HTTkYf-7QV",
"iclr_2022_J9_7t9m8xRj",
"iclr_2022_J9_7t9m8xRj",
"iclr... |
iclr_2022_bilHNPhT6- | On Multi-objective Policy Optimization as a Tool for Reinforcement Learning: Case Studies in Offline RL and Finetuning | Many advances that have improved the robustness and efficiency of deep reinforcement learning (RL) algorithms can, in one way or another, be understood as introducing additional objectives or constraints in the policy optimization step. This includes ideas as far ranging as exploration bonuses, entropy regularization, and regularization toward teachers or data priors. Often, the task reward and auxiliary objectives are in conflict, and in this paper we argue that this makes it natural to treat these cases as instances of multi-objective (MO) optimization problems. We demonstrate how this perspective allows us to develop novel and more effective RL algorithms. In particular, we focus on offline RL and finetuning as case studies, and show that existing approaches can be understood as MO algorithms relying on linear scalarization. We hypothesize that replacing linear scalarization with a better algorithm can improve performance. We introduce Distillation of a Mixture of Experts (DiME), a new MORL algorithm that outperforms linear scalarization and can be applied to these non-standard MO problems. We demonstrate that for offline RL, DiME leads to a simple new algorithm that outperforms state-of-the-art. For finetuning, we derive new algorithms that learn to outperform the teacher policy. | Reject | The reviewers in general did not seem to be strongly impressed by the contribution of the paper. As the authors noted, some reviewers seemed to misinterpret the claims of the paper --- the paper is not to design new MORL algorithms that are significantly better on standard MORL benchmarks but is to apply MORL on offline RL and fine-tuning. 
On the other hand, the AC suspects that the paper's exposition could be more centered around the applications, e.g., arguing why offline RL can be benefited from better MO training, and why the challenge of offline RL is to balance some given notions of risk and return computationally (instead of, e.g., developing the right notion/formula for quantifying the risk and return.) Moreover, I think the paper would be stronger if the evaluation for offline RL setting can be made stronger, e.g., including more tasks and algorithms on the D4RL dataset. If the paper's claim is that MORL is a great tool for offline RL, perhaps it's useful to demonstrate that MORL can achieve SOTA reliably when used on top of existing offline RL algorithms (which almost always have two parts in the objective). In summary, in the AC's opinion, the paper has a valuable contribution to the community but is somewhat boardline for ICLR in the current form, and the AC encourages the authors to resubmit to a top venue conference after addressing some of the reviewers' comments. | train | [
"5mMxsYppfVo",
"wtBd9EtkMDM",
"mIw84Zk1rIQ",
"FpAiAxa1ob",
"nTWGgeunr3",
"j2nC903ZeYe",
"Z1-hlgf-Cc",
"zwtqTroIjYL",
"wvgSdzZSSf8",
"wQZZk68MdhA"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would appreciate hearing back from the reviewers on whether their concerns have been addressed by our responses and updated paper. We are happy to answer any remaining questions. Thank you.",
" **Computational costs**\n\nWe agree that computational cost is very important to consider. In fact, the computation... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
3,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"iclr_2022_bilHNPhT6-",
"wQZZk68MdhA",
"wvgSdzZSSf8",
"zwtqTroIjYL",
"Z1-hlgf-Cc",
"iclr_2022_bilHNPhT6-",
"iclr_2022_bilHNPhT6-",
"iclr_2022_bilHNPhT6-",
"iclr_2022_bilHNPhT6-",
"iclr_2022_bilHNPhT6-"
] |
iclr_2022_H4J8FGHOhx_ | A Principled Permutation Invariant Approach to Mean-Field Multi-Agent Reinforcement Learning | Multi-agent reinforcement learning (MARL) becomes more challenging in the presence of more agents, as the capacity of the joint state and action spaces grows exponentially in the number of agents. To address such a challenge of scale, we identify a class of cooperative MARL problems with permutation invariance, and formulate it as mean-field Markov decision processes (MDP). To exploit the permutation invariance therein, we propose the mean-field proximal policy optimization (MF-PPO) algorithm, at the core of which is a permutation- invariant actor-critic neural architecture. We prove that MF-PPO attains the globally optimal policy at a sublinear rate of convergence. Moreover, its sample complexity is independent of the number of agents. We validate the theoretical advantages of MF-PPO with numerical experiments in the multi-agent particle environment (MPE). In particular, we show that the inductive bias introduced by the permutation-invariant neural architecture enables MF-PPO to outperform existing competitors with a smaller number of model parameters, which is the key to its generalization performance.
| Reject | This paper proposes a new multi-agent RL algorithm, based on the PPO algorithm, that uses a mean-field approximation, which results in a a permutation- invariant actor-critic neural architecture. The paper includes a detailed theoretical analysis that shows that the algorithm finds a globally optimal policy at a sub-linear rate of convergence, and that its sample complexity is independent of the number of agents. The paper include some experiments that validate the proposed algorithm.
The reviews of this paper are mixed. Most of the reviewers appreciate the theoretical analysis, but one reviewer does not find the theoretical justification of the mean-field approximation clear. The reviewer also points out to the absence of comparisons to relevant competing algorithms. These concerns are addressed by the authors in their rebuttal. A key issue with this work is the weakness of the empirical evaluation. The proposed method is tested on only two simple tasks, and the results on the second task do not show a considerable advantage of the proposed algorithm. This paper can be strengthened by adding experiments that clearly indicate the advantage of the proposed technique. | train | [
"j8nDUNE5GC",
"lbYYGoBAy0t",
"MK-F_qRfvYd",
"vo2rHlF_i6M",
"vbJCgGxbtH7",
"z0kmQhC8J7O",
"QdGW7vOIMHg"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We deeply appreciate your positive feedbacks and constructive suggestions on improving the paper. Below we provide our responses to your comments in detail.\n\n1. We thank the reviewer for pointing out the distinction between MFG and MFC and the pointer to the related reference. \nWe have made separate discussion... | [
-1,
-1,
-1,
-1,
3,
5,
8
] | [
-1,
-1,
-1,
-1,
4,
2,
5
] | [
"QdGW7vOIMHg",
"MK-F_qRfvYd",
"z0kmQhC8J7O",
"vbJCgGxbtH7",
"iclr_2022_H4J8FGHOhx_",
"iclr_2022_H4J8FGHOhx_",
"iclr_2022_H4J8FGHOhx_"
] |
iclr_2022_jJis-v9Pzhj | Positive-Unlabeled Learning with Uncertainty-aware Pseudo-label Selection | Positive-unlabeled (PU) learning aims at learning a binary classifier from only positive and unlabeled training data. Recent approaches address this problem via cost-sensitive learning by developing unbiased loss functions or via iterative pseudo-labeling solutions to further improve performance. However, two-steps procedures are vulnerable to incorrectly estimated pseudo-labels, as errors are propagated in later iterations when a new model is trained on erroneous predictions. To mitigate this issue we propose \textit{PUUPL}, a new loss-agnostic training procedure for PU learning that incorporates epistemic uncertainty in pseudo-labeling. Using an ensemble of neural networks and assigning pseudo-labels based on high confidence predictions improves the reliability of pseudo-labels, increasing the predictive performance of our method and leads to new state-of-the-art results in PU learning. With extensive experiments, we show the effectiveness of our method over different datasets, modalities, and learning tasks, as well as improved robustness over mispecifications of hyper-parameters and biased positive data. The source code of the method and all the experiments are available in the supplementary material. | Reject | This paper proposes a new loss-agnostic PU learning method based on uncertainty-aware pseudo-label selection.
I would like to thank the authors for their feedback to the initial reviews, which clarified many uncertain issues and improved our understanding of the current paper.
Nevertheless, even if the pseudo-labeling technique was applied to PU learning for the first time, given that it is a common practice in many weakly supervised learning tasks, the technical novelty is rather limited.
Therefore I cannot recommend acceptance of this paper. | train | [
"aiGt_Wzc8E",
"sq8UAlNOg",
"migx_57__ns",
"wF-zk6WXaB7",
"I17LsgkTGEv",
"nEqVLQTmeie",
"xTHNlJWlhR5",
"hC7L0LW_c6E",
"F4WcPosi6x2",
"Zuv4883i5Eu",
"jLOT6KAblWu",
"NcCwVgCauke",
"H-euD9RxY5H",
"uPKG6qtkV4O"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" > This is early stopping, on the labeled test set.\n\nThank you for pointing it out. If so, it might be different from clean **validation** dataset. \n\n---\n\n> We reported such results at the end of section 4.3, in the paragraph titled \"early stopping\".\n\nThank you, but I wrote\n> ... it is better to report ... | [
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
3
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"sq8UAlNOg",
"wF-zk6WXaB7",
"iclr_2022_jJis-v9Pzhj",
"xTHNlJWlhR5",
"nEqVLQTmeie",
"uPKG6qtkV4O",
"hC7L0LW_c6E",
"migx_57__ns",
"H-euD9RxY5H",
"jLOT6KAblWu",
"NcCwVgCauke",
"iclr_2022_jJis-v9Pzhj",
"iclr_2022_jJis-v9Pzhj",
"iclr_2022_jJis-v9Pzhj"
] |
iclr_2022_Jt8FYFnyTLR | On the Safety of Interpretable Machine Learning: A Maximum Deviation Approach | Interpretable and explainable machine learning has seen a recent surge of interest. We posit that safety is a key reason behind the demand for explainability. To explore this relationship, we propose a mathematical formulation for assessing the safety of supervised learning models based on their maximum deviation over a certification set. We then show that for interpretable models including decision trees, rule lists, generalized linear and additive models, the maximum deviation can be computed exactly and efficiently. For tree ensembles, which are not regarded as interpretable, discrete optimization techniques can still provide informative bounds. For a broader class of piecewise Lipschitz functions, we repurpose results from the multi-armed bandit literature to show that interpretability produces tighter (regret) bounds on the maximum deviation compared with black box functions. We perform experiments that quantify the dependence of the maximum deviation on model smoothness and certification set size. The experiments also illustrate how the solutions that maximize deviation can suggest safety risks. | Reject | This paper proposes a metric for the safety and interpretability of supervised learning models based on the maximum deviation from interpretable white-box models. The safety and interpretability of black-box models is an important topic, and many reviewers agree that the approach proposed by the authors is interesting. However, the maximum deviation from popular models such as decision trees, generalized linear and additive models have been intensively studied in the context of robust statistics/learning. Without explicit discussion on the connections with these existing studies, the novelty of the proposed approach cannot be properly evaluated. We thus have to conclude that the paper cannot be accepted in its current form. | val | [
"kCKDL0KAfo",
"un6BejmIIF",
"-ULzbhxNGRl",
"znqxSuGDbl",
"vJOPtgdcetu",
"a1LYmWYo5ny",
"u7FAEmiv0_B",
"dE-EgR6xgO1",
"BBjnPOT4fA",
"Lma6HLSzof",
"B9150YN8216",
"cZWS61-giCp",
"lv9ehSyXt0a",
"2Mheb98qpkv",
"utwr18CNY4",
"K3syuuzwbB",
"x161TIVMff9",
"mxWctve3z-7",
"U9aZggqD-Cg"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"**Summary:**\n* The paper puts forth methodology for (efficiently?) computing the worst-case deviation between two fitted models (one is a \"candidate\" model, the other is a \"reference\" model), over some feasible region $\\mathcal C$ (that need not be convex).\n* The takeaway message is that these kind of compu... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
5
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
4
] | [
"iclr_2022_Jt8FYFnyTLR",
"-ULzbhxNGRl",
"znqxSuGDbl",
"2Mheb98qpkv",
"u7FAEmiv0_B",
"2Mheb98qpkv",
"utwr18CNY4",
"2Mheb98qpkv",
"U9aZggqD-Cg",
"B9150YN8216",
"mxWctve3z-7",
"lv9ehSyXt0a",
"x161TIVMff9",
"utwr18CNY4",
"kCKDL0KAfo",
"iclr_2022_Jt8FYFnyTLR",
"iclr_2022_Jt8FYFnyTLR",
"... |
iclr_2022_QevkqHTK3DJ | Compressing Transformer-Based Sequence to Sequence Models With Pre-trained Autoencoders for Text Summarization | We proposed a technique to reduce the decoder’s number of parameters in a sequence to sequence (seq2seq) architecture for automatic text summarization. This approach uses a pre-trained AutoEncoder (AE) trained on top of a pre-trained encoder to reduce the encoder’s output dimension and allow to significantly reduce the size of the decoder. The ROUGE score is used to measure the effectiveness of this method by comparing four different latent space dimensionality reductions: 96%, 66%, 50%, 44%. A few well-known frozen pre-trained encoders (BART, BERT, and DistilBERT) have been tested, paired with the respective frozen pre-trained AEs to test the reduced dimension latent space’s ability to train a 3-layer transformer decoder. We also repeated the same experiments on a small transformer model that has been trained for text summarization. This study shows an increase of the R-1 score by 5% while reducing the model size by 44% using the DistilBERT encoder, and competitive scores for all the other models associated to important size reduction. | Reject | The paper proposes to incorporate an autoencoder to transformer-based summarization models in order to compress the model while preserving the quality of summarization. The strengths of the paper, as identified by reviewers, are in extensive experiments presented in the paper and in a relatively clear write-up. However, the reviewers identify several weaknesses, including missing state-of-the-art summarization baselines and missing relevant compression/knowledge distillation baselines. Although the author response have addressed some of reviewers' concerns, all the reviewers agree that the draft is not yet ready for publication. | train | [
"590ILX4-5OL",
"4P0ulAowfQE",
"FBxab1DF_ci",
"bxvgo3PS63v",
"xFXyTlvjy2a",
"TGTFlOq_SDG",
"2RRapThRckD",
"gD93-pF793",
"fn2Udw6u9pU"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their responses to my questions which have clarified some of the confusions. However, I remain of the opinion that there is insufficient novelty and practical significance in this paper to warrant acceptance.",
"The paper proposes a new autoencoder-based seq2seq model for text summariza... | [
-1,
5,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
-1,
3,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"FBxab1DF_ci",
"iclr_2022_QevkqHTK3DJ",
"fn2Udw6u9pU",
"gD93-pF793",
"4P0ulAowfQE",
"2RRapThRckD",
"iclr_2022_QevkqHTK3DJ",
"iclr_2022_QevkqHTK3DJ",
"iclr_2022_QevkqHTK3DJ"
] |
iclr_2022_4lLyoISm9M | Range-Net: A High Precision Neural SVD | For Big Data applications, computing a rank-$r$ Singular Value Decomposition (SVD) is restrictive due to the main memory requirements. Recently introduced streaming Randomized SVD schemes work under the restrictive assumption that the singular value spectrum of the data has an exponential decay. This is seldom true for any practical data. Further, the approximation errors in the singular vectors and values are high due to the randomized projection. We present Range-Net as a low memory alternative to rank-$r$ SVD that satisfies the lower bound on tail-energy given by Eckart-Young-Mirsky (EYM) theorem at machine precision. Range-Net is a deterministic two-stage neural optimization approach with random initialization, where the memory requirement depends explicitly on the feature dimension and desired rank, independent of the sample dimension. The data samples are read in a streaming manner with the network minimization problem converging to the desired rank-$r$ approximation. Range-Net is fully interpretable where all the network outputs and weights have a specific meaning. We provide theoretical guarantees that Range-Net extracted SVD factors satisfy EYM tail-energy lower bound with numerical experiments on real datasets at various scales that confirm these bounds. A comparison against the state-of-the-art streaming Randomized SVD shows that Range-Net is six orders of magnitude more accurate in terms of tail energy while correctly extracting the singular values and vectors. | Reject | The reviewers were fairly consistent in agreeing that this is a reasonable paper with an interesting idea. However, the use-case is fairly narrow, as the main benefit is less intermediate storage (and only significant for very rectangular matrices) but compared to alternatives it require many passes over the data (usually 5 or so). 
So it's a narrow use-case and many of the comparisons are not apples-to-apples since the accuracy, time, space-complexity and number of passes differ from algorithm to algorithm.
So while acknowledging the potential benefits of the method, there are downsides too, and thus a clear presentation is very essential. The reviewers mention that presentation (listing the algorithm, clear experiments) could be improved.
On my own reading, I noted that the choice of SketchySVD as the dominant baseline is misleading. SketchySVD is for streaming data (more restrictive than single-pass) so this is an unfair comparison. The appendix does a better job of including other baselines (block Lanczos), though it mischaracterizes them (it says "BlockLanczos requires persistence presence of the data matrix X in memory", but this is not true, the method could easily be implemented in a matrix-free fashion). Another method to compare with is the single-pass algorithm randSVD in Yu et al., who show how to implement one of the Halko et al. 2011 2-pass methods in just one-pass. Other reviewers mention baseline algorithm issues too. I do acknowledge the improved accuracy of your method over all these baselines for some matrices, in terms of the Frobenius norm (or tail error); however, I'm not sure the differences in spectral norm are as great, and see Remark 2.1 in Martinsson and Tropp '20 for arguments about why Frobenius norm guarantees are often not as desirable as spectral norm guarantees.
Another issue is related to the left vs right singular vectors. A reviewer noted: "It is not fair to compare RangeNet with SketchSVD, RangeNet just produces the right singular vectors while SketchSVD produces both left and right singular vectors." The authors respond "Range-Net computes both left and right singular vectors but does not consume main memory to store left singular vectors at run-time". However, if we allow another pass over the matrix to find the left singular vectors, this post-processing can be applied to *any* technique that approximates the singular values and right singular vectors, hence PCA methods are applicable, including deterministic methods like the "Frequent Directions" method (Ghashami et al. '16).
In summary, this method is high-accuracy and low-memory, yet it also has downsides compared to other methods, and the paper could use some improvement. I don't think the paper is ready at this time for acceptance, but given the advantages of the method, I encourage the authors to make changes and resubmit an improved version to ICLR next year or other similar venue.
References:
Yu, Gu, Li, Liu, Li, "Single-Pass PCA of Large High-Dimensional Data". IJCAI '17, https://doi.org/10.24963/ijcai.2017/468
Ghashami, Liberty, Phillips, Woodruff, "Frequent directions: Simple and deterministic matrix sketching". SIAM Journal on Computing. 2016;45(5):1762-92.
Martinsson, Tropp. "Randomized numerical linear algebra: Foundations and algorithms". Acta Numerica. 2020 May;29:403-572. | test | [
"EfHrRdeW1jF",
"eALpDfNO_fG",
"tNW0OffURbC",
"Vt884VNBgj0",
"zUftsjFx59",
"dBolENmtWKk",
"uqI0tG0k71z",
"Jo85rzPf6yC",
"FYIW6T9VypT",
"QNKhN1aoe3",
"0_c7_jKwVPz",
"4e9I4B_dFUv",
"Opci_iG2oNf",
"9rO8SADFYKX"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We understand the score remaining unchanged. Irrespective of the score we hope the reviewer is convinced (given the code for a number of state of the art randomized methods) that irreducible errors are introduced due to sketching.",
" Thanks to the authors for their response; I would like to acknowledge reading... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
2,
3,
4
] | [
"eALpDfNO_fG",
"Jo85rzPf6yC",
"Vt884VNBgj0",
"zUftsjFx59",
"iclr_2022_4lLyoISm9M",
"9rO8SADFYKX",
"0_c7_jKwVPz",
"Opci_iG2oNf",
"4e9I4B_dFUv",
"iclr_2022_4lLyoISm9M",
"iclr_2022_4lLyoISm9M",
"iclr_2022_4lLyoISm9M",
"iclr_2022_4lLyoISm9M",
"iclr_2022_4lLyoISm9M"
] |
iclr_2022_JLbXkHkLCG6 | Imitation Learning from Pixel Observations for Continuous Control | We study imitation learning using only visual observations for controlling dynamical systems with continuous states and actions. This setting is attractive due to the large amount of video data available from which agents could learn from. However, it is challenging due to $i)$ not observing the actions and $ii)$ the high-dimensional visual space. In this setting, we explore recipes for imitation learning based on adversarial learning and optimal transport. A key feature of our methods is to use representations from the RL encoder to compute imitation rewards. These recipes enable us to scale these methods to attain expert-level performance on visual continuous control tasks in the DeepMind control suite. We investigate the tradeoffs of these approaches and present a comprehensive evaluation of the key design choices. To encourage reproducible research in this area, we provide an easy-to-use implementation for benchmarking visual imitation learning, including our methods. | Reject | Learning policies from video demonstrations alone without paired action data is a promising paradigm for scaling up Imitation Learning. As such the paper is well-motivated. Two approaches P-SIL and P-DAC train rewards for RL training, based on learning Sinkhorn distances between trajectory embeddings and an adversarial approach. The reviews brought up lack of clarity in presentation and experimental results and ablation studies falling short of convincingly demonstrating value of distance functions used and other design tradeoffs. As such the paper does not meet the bar for acceptance at ICLR. | test | [
"JHI96UmRGN",
"iuGEjZC2qfK",
"YX2vaaSBwHV",
"tQBhdikbons",
"YslgWMbyQ3O",
"o0kC1JXxmKU",
"ZXPFJlMzMov",
"47vDdBnaWzC",
"h2dVcFySEQ7",
"td4_wyzkQg",
"3qeqsG2No2A",
"NVHWRgqtvW",
"iBIn18G2O5"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I appreciate the detailed answer. However, I do not feel that the contribution is clear on the paper. For this reason I am not changing my score.",
" I thank the authors for providing detailed answers to my questions. Even though the rebuttal addressed some misunderstandings, but I am still concerned about the ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"YX2vaaSBwHV",
"47vDdBnaWzC",
"YslgWMbyQ3O",
"o0kC1JXxmKU",
"ZXPFJlMzMov",
"3qeqsG2No2A",
"NVHWRgqtvW",
"iBIn18G2O5",
"td4_wyzkQg",
"iclr_2022_JLbXkHkLCG6",
"iclr_2022_JLbXkHkLCG6",
"iclr_2022_JLbXkHkLCG6",
"iclr_2022_JLbXkHkLCG6"
] |
iclr_2022_XeqjsCVLk1m | Tell me why!—Explanations support learning relational and causal structure | Explanations play a considerable role in human learning, especially in areas that remain major challenges for AI—forming abstractions, and learning about the relational and causal structure of the world. Here, we explore whether machine learning models might likewise benefit from explanations. We outline a family of relational tasks that involve selecting an object that is the odd one out in a set (i.e., unique along one of many possible feature dimensions). Odd-one-out tasks require agents to reason over multi-dimensional relationships among a set of objects. We show that agents do not learn these tasks well from reward alone, but achieve >90% performance when they are also trained to generate language explaining object properties or why a choice is correct or incorrect. In further experiments, we show how predicting explanations enables agents to generalize appropriately from ambiguous, causally-confounded training, and even to meta-learn to perform experimental interventions to identify causal structure. We show that explanations help overcome the tendency of agents to fixate on simple features, and explore which aspects of explanations make them most beneficial. Our results suggest that learning from explanations is a powerful principle that could offer a promising path towards training more robust and general machine learning systems. | Reject | This paper studies the use of natural language explanations during the training of an agent for odd-one-out tasks. Experiment results show that using quality explanation as abstract information about object properties helps with the agent performance, as compared with the vanilla method.
Strengths:
- Experiment results are conducted thoroughly to support the major claims made by the paper
- The problem is well motivated and has an important implication
Weaknesses:
- There has been extensive discussion about whether the paper lacks a more formal and rigorous definition of "explanation" as considered in the scope of this paper.
- Concerns are raised regarding the gaps between the broad claims in the paper and the restricted experiment settings | train | [
"X5NTUl-4Jat",
"hUw6sBjIr9",
"lL7nzhKWo7P",
"V99j2cJd9-",
"IX6rYVDd5Jn",
"u-CcJyDpki3",
"pTCLvqB5qwF",
"SElsvTNYe6w",
"pzp1aSGbwTM",
"xQvXDb-5xom",
"vqEnBmX2MgP",
"djnxs8tSKUY"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for taking the time to engage with our response, and we are glad that the reviewer finds the paper improved. We wanted to follow up on the reviewer's remaining concerns:\n\nFirst, we do explicitly address the issue of language vs. other features in the discussion: \"While we focused on langu... | [
-1,
-1,
3,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
-1,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4
] | [
"hUw6sBjIr9",
"pTCLvqB5qwF",
"iclr_2022_XeqjsCVLk1m",
"iclr_2022_XeqjsCVLk1m",
"djnxs8tSKUY",
"vqEnBmX2MgP",
"SElsvTNYe6w",
"lL7nzhKWo7P",
"V99j2cJd9-",
"iclr_2022_XeqjsCVLk1m",
"iclr_2022_XeqjsCVLk1m",
"iclr_2022_XeqjsCVLk1m"
] |
iclr_2022_gKWxifgJVP | Fact-driven Logical Reasoning | Recent years have witnessed an increasing interest in training machines with reasoning ability, which deeply relies on accurate, clearly presented clue forms that are usually modeled as entity-like knowledge in existing studies. However, in real hierarchical reasoning motivated machine reading comprehension, such one-sided modeling is insufficient for those indispensable local complete facts or events when only "global" knowledge is really paid attention to. Thus, in view of language being a complete knowledge/clue carrier, we propose a general formalism to support representing logic units by extracting backbone constituents of the sentence such as the subject-verb-object formed "facts", covering both global and local knowledge pieces that are necessary as the basis for logical reasoning. Beyond building the ad-hoc graphs, we propose a more general and convenient fact-driven approach to construct a supergraph on top of our newly defined fact units, benefiting from both sides of the connections between facts and internal knowledge such as concepts or actions inside a fact. Experiments on two challenging logical reasoning benchmarks show that our proposed model, \textsc{Focal Reasoner}, outperforms the baseline models dramatically and achieves state-of-the-art results. | Reject | Strengths:
* Well-written paper
* Strong empirical results on three benchmarks
* Interesting approach of producing semantically augmented LMs using dependency parses to extract svo triples, and finding coreferences between them across multiple sentences
Weaknesses:
* None of the reviewers seem particularly excited about the paper
* Stronger baseline comparisons would have improved the paper
* Authors re-define a lot of terminology, but the novelty of the method is more from the type of graph used to initialize their method, which seems to be a function of OpenIE triplets | train | [
"XI2KdCd9Agq",
"kLSbpoosES-",
"kkXt0oJFGW1",
"9XP9DgaUy9V",
"3PY7HnfJVbB",
"WimvtK5gGc",
"vG0SB_Dr6Pj",
"swfjZpjnLnH",
"jU-UPMETpcw",
"JTAlexPz0XG",
"TMCk0VB4YEQ",
"S4HKSHugtx",
"JlkfYvEkxxU",
"ZGjQZkEeo5U",
"X6K-zkrc3Oo",
"tAlLfwyWzOx",
"k6RJOHdDJnG",
"ZwfEsJzN7re"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
" Thanks for your recognition of our response and for raising the score.\n\nWe managed to finish the experiment using DeBERTa model without \"logical fact regularization\" on ReClor dataset and get the test scores from leaderboard submission. The results are shown in the following.\n\nModel|Dev|Test|Test-E|Test-H\n... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"kLSbpoosES-",
"tAlLfwyWzOx",
"iclr_2022_gKWxifgJVP",
"tAlLfwyWzOx",
"TMCk0VB4YEQ",
"swfjZpjnLnH",
"X6K-zkrc3Oo",
"iclr_2022_gKWxifgJVP",
"WimvtK5gGc",
"iclr_2022_gKWxifgJVP",
"JlkfYvEkxxU",
"iclr_2022_gKWxifgJVP",
"ZGjQZkEeo5U",
"k6RJOHdDJnG",
"ZwfEsJzN7re",
"kkXt0oJFGW1",
"JTAlexPz... |
iclr_2022_e8JI3SBZKa4 | Online approximate factorization of a kernel matrix by a Hebbian neural network | We derive an online algorithm for unsupervised learning based on representing every input $\mathbf{x}_t$ by a high dimensional vector $\mathbf{y}_t$ with pairwise inner products that approximately match input similarities as measured by a kernel function: $\mathbf{y}_s \cdot \mathbf{y}_{t} \approx f(\mathbf{x}_s, \mathbf{x}_{t})$. The approximation is formulated using the objective function for classical multidimensional scaling. We derive an upper bound for this objective which only involves correlations between output vectors and nonlinear functions of input vectors. Minimizing this upper bound leads to a minimax optimization, which can be solved via stochastic gradient descent-ascent. This online algorithm can be interpreted as a recurrent neural network with Hebbian and anti-Hebbian connections, generalizing previous work on linear similarity matching. Through numerical experiments with two datasets, we demonstrate that unsupervised learning can be aided by the nonlinearity inherent in our kernel method. We also show that heavy-tailed representation vectors emerge from the learning even though no sparseness prior is used, lending further biological plausibility to the model. Our upper bound employs a rank-one Nystrom approximation to the kernel function, with the novelty of leading to an online algorithm that optimizes landmark placement. | Reject | This paper studies the problem of kernel similarity matching using Hebbian neural networks. Specifically, the authors propose to compute the approximate feature map for the kernel using the least square loss function, Legendre transformation, and Hebbian parameter update rules.
Reviewers generally agree that the proposed method is interesting. However, there are major issues with the current manuscript, both theoretically and empirically. Theoretically, there is no guarantee for the convergence of the method. In fact, the non-convergence of the approximation error, when the dimensionality (number of features) increases, indicates that the proposed method is not consistent, in contrast with other methods such as Kernel PCA or Nystrom approximation-based methods. As observed by the authors, this may be related to the unstable convergence of the stochastic gradient descent ascent optimization procedure. For consistency,
the approximation error needs to approach zero as the number of features becomes large.
Overall, empirical results compared with existing methods are not satisfactory. As the authors themselves point out in their discussion, "this method does not provide the same sorts of theoretically guarantees or empirically observed robustness of sampling based methods".
The authors are encouraged to take reviewers' comments and suggestions into account to improve their current work. | train | [
"e6U0t_ynKsr",
"rzSrbW3lHP2",
"qWxIieldZMO",
"e7MxAT2iDLr",
"Y9XURE2vfhQ",
"yScII1GA5P2",
"lXpQ2K5gAQ",
"GB_FcNF_Ux",
"KVBuWRagz28",
"6lniLb9_Rf",
"mnw0o-nbghs",
"XvH1RM63KF3",
"p9MjtIIDDym",
"8XUOlpuc1P0",
"ZhjJLgNHRl9"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer"
] | [
" With an increasing number of neurons perhaps the small value of λ actually causes more artificial correlation than a larger value (since the matrix is being inverted). This wouldn't be a problem with the pseudo-inverse in the Nyström method.\n\nAdditionally, the paper uses a heuristic of two times the number of i... | [
-1,
5,
8,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
2,
5,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"yScII1GA5P2",
"iclr_2022_e8JI3SBZKa4",
"iclr_2022_e8JI3SBZKa4",
"p9MjtIIDDym",
"iclr_2022_e8JI3SBZKa4",
"6lniLb9_Rf",
"iclr_2022_e8JI3SBZKa4",
"XvH1RM63KF3",
"Y9XURE2vfhQ",
"ZhjJLgNHRl9",
"rzSrbW3lHP2",
"8XUOlpuc1P0",
"qWxIieldZMO",
"iclr_2022_e8JI3SBZKa4",
"iclr_2022_e8JI3SBZKa4"
] |
iclr_2022_VCD05OEn7r | CAGE: Probing Causal Relationships in Deep Generative Models | Deep generative models excel at generating complex, high-dimensional data, often exhibiting impressive generalization beyond the training distribution. The learning principle for these models is however purely based on statistical objectives and it is unclear to what extent such models have internalized the causal relationships present in the training data, if at all. With increasing real-world deployments, such a causal understanding of generative models is essential for interpreting and controlling their use in high-stake applications that require synthetic data generation. We propose CAGE, a framework for inferring the cause-effect relationships governing deep generative models. CAGE employs careful geometrical manipulations within the latent space of a generative model for generating counterfactuals and estimating unit-level generative causal effects. CAGE does not require any modifications to the training procedure and can be used with any existing pretrained latent variable model. Moreover, the pretraining can be completely unsupervised and does not require any treatment or outcome labels. Empirically, we demonstrate the use of CAGE for: (a) inferring cause-effect relationships within a deep generative model trained on both synthetic and high resolution images, and (b) guiding data augmentations for robust classification where CAGE achieves improvements over current default approaches on image datasets. | Reject | This paper proposes the framework CAGE (causal probing of deep generative models) for estimating counterfactuals and unit-level causal effects in deep generative models. CAGE employs geometrical manipulations within the latent space of a generative model to estimate the counterfactual quantities. 
The estimator is written in potential outcome language and assumes unconfoundedness, positivity, stable unit treatment value assumption (SUTVA), and linear separability in semantic attributes of the latent space. Furthermore, the framework considers only the case of binary treatments.
One major concern raised by reviewers TgM5 and xP5d is that the method is based on a trained generative model, which may not be the true data-generating model. In this case, the paper appears to address statistical dependencies instead of the actual causal relationships in the real world. The authors claim to empirically show that their framework can probe unit-level (individual) causal effects. However, the reviewers are concerned that no theoretical support for the correctness of the method is provided. In other words, the problem is assumed away once a probabilistic model is assumed to be equal to the true generative model, which is almost never the case in practice and is well-known in the field. We want to encourage the authors to provide a more detailed theoretical justification, perhaps with proofs and/or references, that the proposed method can infer causal and counterfactual relationships given the underlying assumptions.
After all, reviewers were interested but somewhat skeptical about the method's ability to learn causal and counterfactual relationships. Unfortunately, the paper is not ready for publication yet. Still, we would like to encourage the authors to take the reviews seriously and try to improve the manuscript accordingly. | train | [
"bWgUAoVmLYg",
"0EiVi7-FRgl",
"HOXGEslemds",
"nLZn3LV_OHC",
"dvipz4M5cxX",
"fXsK150HM_B",
"VG43w6fg6Yt",
"a319euExfYm",
"lwkNEX3sPMp",
"MtnlxJ0MVF",
"PJ1fqDq1-H5",
"csMmTsJQ95U",
"U7mI2QfMOdI",
"p7DzKxdn0NJ",
"5uWFf9wUO5C",
"5gpBtrxJdnT",
"5hR4QqHkK7E",
"l0ICo2N2Pla",
"IQrKFuX5DR... | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Furthermore, it is important to note that simply because a model is not designed to infer causal structure does not mean it will not implicitly encode causal structure present in the data. A clear example of this is in the work of [10], who fit linear ICA models to observational data and study the properties of t... | [
-1,
-1,
3,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"0EiVi7-FRgl",
"HOXGEslemds",
"iclr_2022_VCD05OEn7r",
"dvipz4M5cxX",
"iclr_2022_VCD05OEn7r",
"dvipz4M5cxX",
"HOXGEslemds",
"Zv525_OalF2",
"IQrKFuX5DR",
"iclr_2022_VCD05OEn7r",
"dvipz4M5cxX",
"PJ1fqDq1-H5",
"p7DzKxdn0NJ",
"HOXGEslemds",
"5gpBtrxJdnT",
"Zv525_OalF2",
"l0ICo2N2Pla",
"... |
iclr_2022_CyKQiiCPBEv | Stepping Back to SMILES Transformers for Fast Molecular Representation Inference | In the intersection of molecular science and deep learning, tasks like virtual screening have driven the need for a high-throughput molecular representation generator on large chemical databases. However, as SMILES strings are the most common storage format for molecules, using deep graph models to extract molecular feature from raw SMILES data requires an SMILES-to-graph conversion, which significantly decelerates the whole process. Directly deriving molecular representations from SMILES is feasible, yet there exists a large performance gap between the existing SMILES-based models and graph-based models at benchmark results. To address this issue, we propose ST-KD, an end-to-end SMILES Transformer for molecular representation learning boosted by Knowledge Distillation. In order to conduct knowledge transfer from graph Transformers to ST-KD, we have redesigned the attention layers and introduced a pre-transformation step to tokenize the SMILES strings and inject structure-based positional embeddings. ST-KD shows competitive results on latest standard molecular datasets PCQM4M-LSC and QM9, with $3\text{-}14\times$ inference speed compared with existing graph models. | Reject | The paper builds fast and high-quality SMILES-based molecular embeddings by distilling state-of-the-art graph-based models teachers.
This has the advantage of speeding up inference time w.r.t. graph-based methods.
The reviews were split regarding the motivation of the work, in the sense of why not train directly on SMILES instead of distilling graph-based methods that are in some tasks behind the SMILES transformer. The authors provided clarifications in the rebuttal showing that knowledge distillation from graph models surpasses SMILES-only model training.
I think given the experimental nature of the paper the main motivation of the paper should be better clarified and supported with more experimentation and downstream tasks. | val | [
"rexcn90-Uy",
"Ja3BFhp40zm",
"iN9LSnkwJl",
"R0LCERrU8PRa",
"Wn3Gdb0yt8",
"4OyH6AKf7T5",
"qC1cCC6cNDJ",
"nwoyXLRbx4I",
"NWCu4qf8j-L",
"k8S-xbmmbKe",
"PSZx0GDX3rZ",
"PgG_6XK3X0x"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your comments. We would still like to emphasize some facts corresponding to your first two concerns:\n\n1. Are SMILES-to-graph conversion and featurization not a bottleneck? They surely are. Experiments show that **SMILES-to-graph conversion takes 5 times longer than inference in GCNs.**\n\n2. Can prep... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
8,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
3
] | [
"iN9LSnkwJl",
"Wn3Gdb0yt8",
"R0LCERrU8PRa",
"PgG_6XK3X0x",
"PSZx0GDX3rZ",
"k8S-xbmmbKe",
"NWCu4qf8j-L",
"iclr_2022_CyKQiiCPBEv",
"iclr_2022_CyKQiiCPBEv",
"iclr_2022_CyKQiiCPBEv",
"iclr_2022_CyKQiiCPBEv",
"iclr_2022_CyKQiiCPBEv"
] |
iclr_2022_UYDtmk6BMf5 | Decomposing Texture and Semantics for Out-of-distribution Detection | Out-of-distribution (OOD) detection has made significant progress in recent years because the distribution mismatch between the training and testing can severely deteriorate the reliability of a machine learning system. Nevertheless, the lack of precise interpretation of the in-distribution limits the application of OOD detection methods to real-world system pipelines. To tackle this issue, we decompose the definition of the in-distribution into texture and semantics, motivated by real-world scenarios. In addition, we design new benchmarks to measure the robustness that OOD detection methods should have. To achieve a good balance between the OOD detection performance and robustness, our method takes a divide-and-conquer approach. That is, the model first tackles each component of the texture and semantics separately, and then combines them later. Such design philosophy is empirically proven by a series of benchmarks including not only ours but also the conventional counterpart. | Reject | This paper proposes a categorization of out-of-distribution examples by texture and semantics, and proposes a model that extracts the texture and semantic information separately before combining them via a normalizing flow-based method to obtain good results. While the categorization provides some interesting perspectives, most reviewers found the assumptions too strong, and there are some issues with the derivation. Reviewers have some positive feedback on the proposed algorithm for OOD, but also expressed concerns about the fair comparison with more recent baselines. The paper, in its current form, is not ready for publication, but the authors are encouraged to improve the paper with reviewers' suggestions and resubmit. | train | [
"lnSrxczsOX2",
"3XTjj94gidt",
"eoDEScPU0h",
"wENIhAbIkE9",
"lQSHsacT95t",
"YN4R6bXozdi",
"JBrvVPLcmgu",
"Mz2E7ExUKMp",
"M2sn4D24imC",
"nnVDC94YZ2",
"w7BSbYxb_V"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the update.\n\nThere might be some misunderstanding with [1]: I believe the point made in [1] is that the word “semantic” need not refer to object-identity all the time, it could also refer to texture, or other things, depending on context. It is only in an object classifier that the object-identity is... | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"lQSHsacT95t",
"iclr_2022_UYDtmk6BMf5",
"wENIhAbIkE9",
"3XTjj94gidt",
"M2sn4D24imC",
"nnVDC94YZ2",
"w7BSbYxb_V",
"iclr_2022_UYDtmk6BMf5",
"iclr_2022_UYDtmk6BMf5",
"iclr_2022_UYDtmk6BMf5",
"iclr_2022_UYDtmk6BMf5"
] |
iclr_2022_VQyHD2R3Aq | SPIDE: A Purely Spike-based Method for Training Feedback Spiking Neural Networks | Spiking neural networks (SNNs) with event-based computation are promising brain-inspired models for energy-efficient applications on neuromorphic hardware. However, most supervised SNN training methods require complex computation or impractical neuron models, which hinders them from spike-based energy-efficient training. Among them, the recently proposed method, implicit differentiation on the equilibrium state (IDE), for training feedback SNNs is a promising way that is possible for generalization to locally spike-based learning with flexible network structures. In this paper, we study spike-based implicit differentiation on the equilibrium state (SPIDE) that extends the IDE method for supervised local learning with spikes, which could be possible for energy-efficient training on neuromorphic hardware. Specifically, we first introduce ternary spiking neuron couples that can realize ternary outputs with the common neuron model, and we prove that implicit differentiation can be solved by spikes based on this design. With this approach, the whole training procedure can be made as event-driven spike computation and weights are updated locally with two-stage average firing rates. Then to reduce the approximation error of spikes due to the finite simulation time steps, we propose to modify the resting membrane potential. Based on it, the average firing rate, when viewed as a stochastic estimator, achieves an unbiased estimation of iterative solution for implicit differentiation and the variance of this estimator is reduced. With these key components, we can train SNNs with either feedback or feedforward structures in a small number of time steps. Further, the firing sparsity during training demonstrates the great potential for energy efficiency. 
Meanwhile, even with these constraints, our trained models could still achieve competitive results on MNIST, CIFAR-10 and CIFAR-100. Our proposed method demonstrates the great potential for energy-efficient training of SNNs on neuromorphic hardware. | Reject | The paper introduces a purely spike-based method for training spiking neural networks with recurrence, by extending the recently published "implicit differentiation on the equilibrium state (IDE)" technique. As a purely spike-based method for both the forward pass and the gradient computation, the proposed technique potentially represents an important advance.
Based on the original submission, the reviewers had difficulties understanding the paper's contributions and verify its claims. I commend the authors for engaging with the reviewers by answering their questions and updating the paper to better explain the contributions. However, even after considerable back and forth, the most positive reviewer still expressed major concerns [[1](https://openreview.net/forum?id=VQyHD2R3Aq¬eId=Ubrdawfds5)], and the other reviewers appeared unmoved, based on their scores.
The reviewers' principal concerns were a little hard to distill. It is possible that their initial difficulties with understanding and validating the paper's contributions made it hard to fully appreciate the paper. One reviewer is unable to verify that the algorithm is purely spike based, and that the energy costs are appropriately calculated. They are also unsure if the method will scale to more complex settings [[1](https://openreview.net/forum?id=VQyHD2R3Aq¬eId=Ubrdawfds5)]. A second reviewer was also initially unable to verify the same claims, was unsure if the theoretical improvements were sufficiently significant, and whether the method could apply to non-IF models of spiking neural networks [[2](https://openreview.net/forum?id=VQyHD2R3Aq¬eId=cc2tMpSst0t)]. The authors addressed this in their response by performing additional experiments with LIF neurons, but it wasn't clear if the main concerns were sufficiently addressed, since the reviewer did not change their score.
Based on the largely negative appraisal by all reviewers, I recommend that the paper be rejected. However, I strongly encourage the authors to revise and resubmit their paper to a future conference, focusing on making sure that their central claims can be more easily understood and verified. | train | [
"LgGfJDd-O2I",
"h0YWtlZTz-L",
"3TokK7blOcN",
"hTEabYSquO",
"sWOmsjAZFXT",
"cc2tMpSst0t",
"Ubrdawfds5",
"Lgxt_73JbH5",
"g2KN-wQrQ1z",
"IsXJi_IiOL",
"KJnIEGTfds2",
"S-DWsIFTWmY",
"l1n4HcmpQhW",
"WmZQM3sdx4n",
"i5HmMk2fHGc",
"yOwzU1Qnyfm",
"jryABWKWk92",
"qD1eqvHx2Bv",
"HqPFXYw8mRu"... | [
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Thank you very much for your valuable comments. We try our best to address your concern as follows.\n\n1. Our method can be applied to LIF model with weighted average firing rate coding which leverages the temporal information as well. The original IDE method (Xiao et al., 2021) has derived the equilibrium state ... | [
-1,
-1,
-1,
-1,
-1,
5,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
-1,
-1,
-1,
-1,
-1,
2,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"cc2tMpSst0t",
"3TokK7blOcN",
"hTEabYSquO",
"LgGfJDd-O2I",
"Lgxt_73JbH5",
"iclr_2022_VQyHD2R3Aq",
"iclr_2022_VQyHD2R3Aq",
"g2KN-wQrQ1z",
"Ubrdawfds5",
"OJVvBzgjjR3",
"cc2tMpSst0t",
"Ubrdawfds5",
"iclr_2022_VQyHD2R3Aq",
"i5HmMk2fHGc",
"OJVvBzgjjR3",
"jryABWKWk92",
"cc2tMpSst0t",
"Hq... |
iclr_2022_0bXmbOt1oq | Towards Learning to Speak and Hear Through Multi-Agent Communication over a Continuous Acoustic Channel | While multi-agent reinforcement learning has been used as an effective means to study emergent communication between agents, existing work has focused almost exclusively on communication with discrete symbols. Human communication often takes place (and emerged) over a continuous acoustic channel; human infants acquire language in large part through continuous signalling with their caregivers. We therefore ask: Are we able to observe emergent language between agents with a continuous communication channel trained through reinforcement learning? And if so, what is the impact of channel characteristics on the emerging language? We propose an environment and training methodology to serve as a means to carry out an initial exploration of these questions. We use a simple messaging environment where a "speaker" agent needs to convey a concept to a "listener". The Speaker is equipped with a vocoder that maps symbols to a continuous waveform, this is passed over a lossy continuous channel, and the Listener needs to map the continuous signal to the concept. Using deep Q-learning, we show that basic compositionality emerges in the learned language representations. We find that noise is essential in the communication channel when conveying unseen concept combinations. And we show that we can ground the emergent communication by introducing a caregiver predisposed to "hearing" or "speaking" English. Finally, we describe how our platform serves as a starting point for future work that uses a combination of deep reinforcement learning and multi-agent systems to study our questions of continuous signalling in language learning and emergence. | Reject | While a lot of previous work on emergent communications studies discrete protocols, this work explores a continuous and audio-based channel for learning multi-agent communication. 
Reviewers have commented positively on the novelty of the topic. At the same time, a number of concerns were raised with respect to experimental design and implementation (6auy) and the general approach to the topic which, as reviewers t576 and 42Xh point out, doesn't really go deep into the analysis and understanding of the particular experimental setup and findings. So, unfortunately, as the paper stands I cannot recommend acceptance at this time. However, given that continuous communication in emergent communication is a somewhat overlooked topic, I would encourage the authors to use the reviewers' feedback and strengthen their manuscript. | train | [
"jusANKgQMQi",
"3Q4-hjSsH9",
"uBvnkOd-T3E",
"UTQ8c5J5LCD",
"wT1j6kERKJs",
"j5cjlcDWWqV",
"8HLi6cVe7wr",
"wXOgsX1QUFz",
"RM0Pk26sf55",
"0zftHPv0ri",
"H0sINJzJodV",
"k6abGf5DQz",
"zcZ8axtzxfq"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes to move in the direction of emerging oral communication, instead of written communication. They create an architecture where the sender agent first outputs a string of symbols, which map statically to a fixed table of phonems. The sender uses an off-the-shelf pre-trained text-to-speech synthesiz... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"iclr_2022_0bXmbOt1oq",
"8HLi6cVe7wr",
"0zftHPv0ri",
"wXOgsX1QUFz",
"RM0Pk26sf55",
"iclr_2022_0bXmbOt1oq",
"jusANKgQMQi",
"zcZ8axtzxfq",
"k6abGf5DQz",
"H0sINJzJodV",
"iclr_2022_0bXmbOt1oq",
"iclr_2022_0bXmbOt1oq",
"iclr_2022_0bXmbOt1oq"
] |
iclr_2022_66miN107dRS | Contrastive Attraction and Contrastive Repulsion for Representation Learning | Contrastive learning (CL) methods effectively learn data representations without label supervision, where the encoder needs to contrast each positive sample over multiple negative samples via a one-vs-many softmax cross-entropy loss. By leveraging large amounts of unlabeled image data, recent CL methods have achieved promising results when pretrained on ImageNet, a well-curated dataset with balanced image classes. However, they tend to yield worse performance when pretrained on images in the wild. In this paper, to further improve the performance of CL and enhance its robustness on uncurated datasets, we propose a doubly CL strategy that contrasts positive samples and negative ones within themselves separately. We realize this strategy with contrastive attraction and contrastive repulsion (CACR), which makes the query not only exert a greater force to attract more distant positive samples but also do so to repel closer negative samples. Theoretical analysis reveals that CACR generalizes CL's behavior by positive attraction and negative repulsion. It further considers the intra-contrastive relation within the positive and negative pairs to narrow the gap between the sampled and true distribution, which is important when datasets are less curated. Extensive large-scale experiments on standard vision tasks show that CACR not only consistently outperforms existing CL methods on benchmark datasets in representation learning, but also shows better robustness when generalized to pretrain on wild large image datasets. | Reject | The submission receives mixed ratings initially. Reviewer WK7k stays positive, edUG and qbmC are borderline, and MXU1 is negative. They raise several issues including limited technical contribution, insufficient sota experimental comparison, and detailed theoretical proof. The authors have responded to these issues in the rebuttal. 
The final ratings of these reviewers did not change.
After checking the revised manuscript, all the reviews and responses, the AC feels the review from MXU1 lacks sufficient detail and that its claim should be better supported with evidence. On the other hand, the issue of the technical contribution raised by edUG still exists. The modification upon the original CL is marginal. Also, the experimental validation, although tested in several configurations, only includes ResNet-50 as the encoder backbone. Sufficient validation across different scales of CNNs is a common strategy used in sota SSL methods (e.g., MoCo, BYOL, SimCLR). Without a thorough evaluation, the effectiveness of CACR on different scales of CNN backbones is in doubt. Meanwhile, the experimental comparison with sota methods (e.g., BYOL, SwAV+multi crop) does not clearly show the performance advantages of CACR. The small-scale dataset utilized for the ablation study (i.e., CIFAR) is not as convincing as ImageNet.
Overall, the AC feels that the marginal technical contribution is acceptable, but it shall be equipped with thorough experimental validation and thus serves as a fundamental baseline (i.e., potentially benefits the SSL community). Based on the current form, the authors are suggested to further improve the current submission and are welcome to submit for the next venue. | train | [
"YeiVTQiySa",
"fvYWtUEtENl",
"xZbmKfDuivS",
"Q-tn1RPMo4",
"hLuaHtA4kYq",
"ICCz8BLwaTh",
"cj_IRZMXv5t",
"BIJKLp1Ual",
"nUfa16rzZBA",
"daEEqQ9RJMv",
"AbDZFz06E_2",
"s1Mv9WtdGO-",
"axuv7-0pFA"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks to the authors for their response. After reading the response from the authors as well as comments from other reviewers, most of my concerns are addressed while the novelty is still a bit limited. As a result, I would keep my original rating.",
" The authors' responses have addressed my concerns. \nIt's ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
3,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
5
] | [
"ICCz8BLwaTh",
"BIJKLp1Ual",
"hLuaHtA4kYq",
"AbDZFz06E_2",
"axuv7-0pFA",
"s1Mv9WtdGO-",
"AbDZFz06E_2",
"daEEqQ9RJMv",
"iclr_2022_66miN107dRS",
"iclr_2022_66miN107dRS",
"iclr_2022_66miN107dRS",
"iclr_2022_66miN107dRS",
"iclr_2022_66miN107dRS"
] |
iclr_2022_tlkHrUlNTiL | Disentangling deep neural networks with rectified linear units using duality | Despite their success deep neural networks (DNNs) are still largely considered as black boxes. The main issue is that the linear and non-linear operations are entangled in every layer, making it hard to interpret the hidden layer outputs. In this paper, we look at DNNs with rectified linear units (ReLUs), and focus on the gating property (‘on/off’ states) of the ReLUs. We extend the recently developed dual view in which the computation is broken path-wise to show that learning in the gates is more crucial, and learning the weights given the gates is characterised analytically via the so called neural path kernel (NPK) which depends on inputs and gates. In this paper, we present novel results to show that convolution with global pooling and skip connection provide respectively rotational invariance and ensemble structure to the NPK. To address ‘black box’-ness, we propose a novel interpretable counterpart of DNNs with ReLUs namely deep linearly gated networks (DLGN): the pre- activations to the gates are generated by a deep linear network, and the gates are then applied as external masks to learn the weights in a different network. The DLGN is not an alternative architecture per se, but a disentanglement and an interpretable re-arrangement of the computations in a DNN with ReLUs. The DLGN disentangles the computations into two ‘mathematically’ interpretable linearities (i) the ‘primal’ linearity between the input and the pre-activations in the gating network and (ii) the ‘dual’ linearity in the path space in the weights network characterised by the NPK. 
We compare the performance of DNN, DGN and DLGN on CIFAR-10 and CIFAR-100 to show that, the DLGN recovers more than 83.5% of the performance of state-of-the-art DNNs, i.e., while entanglement in the DNNs enable their improved performance, the ‘disentangled and interpretable’ computations in the DLGN recovers most part of the performance. This brings us to an interesting question: ‘Is DLGN a universal spectral approximator?’ | Reject | The paper examines a sum-over-paths representation of ReLU networks, for which learning can be broken into two parts: learning the gates, and learning the weights given the gates, the latter of which being described by the Neural Path Kernel. The paper introduces a dual architecture, Deep Linear Gated Networks (DLGN) that parameterizes these two processes separately. The DLGN is argued to aid in interpretability of ReLU networks, with a main conclusion being that the neural network is learned path-by-path instead of layer-by-layer.
The reviewers generally found strength in the motivation and perspective and thought that the DLGN could serve as a useful architecture for aiding interpretability. Some reviewers found the presentation hard to follow, and others were not entirely convinced by the ultimate conclusions. Overall, the reviewers opinions were mixed.
I believe the ICLR community would generally find interest in the DLGN and the interpretations it might afford to deep ReLU networks. However, the number and strength of the conclusions obtained in the current analysis are rather weak. The conclusion that networks learn path-by-path instead of layer-by-layer was emphasized but the implications were not highlighted, and it remains unclear to me and at least some reviewers what the concrete significance of this observation actually is. Another major claim is that the DLGN recovers more than 83.5% of the performance of state-of-the-art DNNs, but a priori it is not obvious what this number means, or if it is even good or bad performance. A more detailed analysis with additional common baselines, ablations, etc., would really help readers understand the significance of the performance gap.
Overall, this is an interesting direction with significant potential, but for the above reasons I cannot recommend the current version for acceptance. | train | [
"WbajdMKSSb3",
"Kuba9LTxkxB",
"1gUss92feb",
"eYn0rWF69sJ",
"w_y3cybGOp",
"aWDrL-oYTyK",
"cryxVGyHA9S",
"KMB7wCaE-4v",
"JWLoSoXJVW1",
"4eRCi_kuSU7",
"LTag2NckmTT",
"R-KMaqS0MB2",
"H8J-ZgTqRSE",
"SGTblUja42e",
"sVvXWXRJsaG",
"DuNu3gUgBn",
"e8Zxxe7_StB",
"JPBQAKtau1s",
"d7JHe4uthF6"... | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We have added a detailed discussion at the end of the appendix in the recently updated version of our paper. We mention the important points here. \n\n$\\bullet$ **Non-Comparable Aspects**: There are many non-comparable aspects between the DLGN in our work and the Gated Linear Network (GLN) of Veness et al. (2019... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
2,
3
] | [
"DuNu3gUgBn",
"DuNu3gUgBn",
"DuNu3gUgBn",
"e8Zxxe7_StB",
"d7JHe4uthF6",
"e8Zxxe7_StB",
"DuNu3gUgBn",
"JPBQAKtau1s",
"e8Zxxe7_StB",
"e8Zxxe7_StB",
"e8Zxxe7_StB",
"e8Zxxe7_StB",
"e8Zxxe7_StB",
"DuNu3gUgBn",
"DuNu3gUgBn",
"iclr_2022_tlkHrUlNTiL",
"iclr_2022_tlkHrUlNTiL",
"iclr_2022_tl... |
iclr_2022_swbAS4OpXW | One-Shot Generative Domain Adaptation | This work aims at transferring a Generative Adversarial Network (GAN) pre-trained on one image domain to a new domain $\textit{referring to as few as just one target image}$. The main challenge is that, under limited supervision, it is extremely difficult to synthesize photo-realistic and highly diverse images, while acquiring representative characters of the target. Different from existing approaches that adopt the vanilla fine-tuning strategy, we import two lightweight modules to the generator and the discriminator respectively. Concretely, we introduce an $\textit{attribute adaptor}$ into the generator yet freeze its original parameters, through which it can reuse the prior knowledge to the most extent and hence maintain the synthesis quality and diversity. We then equip the well-learned discriminator backbone with an $\textit{attribute classifier}$ to ensure that the generator captures the appropriate characters from the reference. Furthermore, considering the poor diversity of the training data ($\textit{i.e.}$, as few as only one image), we propose to also constrain the diversity of the generative domain in the training process, alleviating the optimization difficulty. Our approach brings appealing results under various settings, $\textit{substantially}$ surpassing state-of-the-art alternatives, especially in terms of synthesis diversity. Noticeably, our method works well even with large domain gaps, and robustly converges $\textit{within a few minutes}$ for each experiment. | Reject | This work was the subject of significant back and forth (between authors and reviewers, but also between reviewers & myself) due to the wide range of opinions. Two of the reviewers have found this work below the bar: they have provided multiple reasonings that I would rather not repeat here. The third reviewer found this work more compelling and argued for its acceptance. 
My attempts at reaching a consensus have yielded the following conclusions:
* There's agreement that one-shot generation is indeed a challenging task
* Some of the results are indeed impressive, but many results are not compelling.
* The rebuttal addressed some of the concerns (e.g. visualization of latents), but some issues are unaddressed (e.g. more motivation, explanation of why the proposed method works better)
* One of the reviewers has argued rather forcefully that the work doesn't quite do domain adaptation in the typically understood sense. Moving beyond definitions of domain adaptation, the same reviewer was not very convinced by the quality of the results themselves.
* The reviewer most positive about this work agrees that this work only explores a limited form of domain transfer. They argued that some of the potential applications of this work do make the submission interesting.
Fundamentally, the discussion did not necessarily resolve the differences in opinion one way or another. Ultimately, all 3 reviewers believe that it would be fine if this work was not accepted to ICLR at this time, despite some of the interesting results and promise. Given the discussion and this mildest consensus, I am inclined to recommend rejection too. I do think there's a substantial amount of constructive feedback in the reviews that would make a subsequent revision of this work quite a bit better. | train | [
"fdfZGoWiDF",
"xSlBXZFrEM4",
"tw2BGM7DjAh",
"MWr8jRRrQf0",
"IeqP2g_DxCG",
"So8YaGHLTbR",
"IP-EogrQOSP",
"6YOKJjlXVpk",
"UlJY9uHKGjr",
"Qtig8hmhfzy",
"yxzJygyCdpd",
"pRs605hnEAQ",
"-qi5_DhBLJm",
"vJUSofAoFj",
"UDWt3rkNOzJ",
"TEMHNNCDG0B"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" To be more specific, all results where the target domain is given by a painting are not good. The method successfully takes on the colors of the target image, but fails to take on the distinct characteristics of a painting, such as brush strokes. The results look scary. One can simply look at the results in the ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
8,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5
] | [
"xSlBXZFrEM4",
"tw2BGM7DjAh",
"MWr8jRRrQf0",
"IeqP2g_DxCG",
"UlJY9uHKGjr",
"IP-EogrQOSP",
"6YOKJjlXVpk",
"yxzJygyCdpd",
"-qi5_DhBLJm",
"vJUSofAoFj",
"TEMHNNCDG0B",
"UDWt3rkNOzJ",
"Qtig8hmhfzy",
"iclr_2022_swbAS4OpXW",
"iclr_2022_swbAS4OpXW",
"iclr_2022_swbAS4OpXW"
] |
iclr_2022_X2V7RW3Sul | Improving Hyperparameter Optimization by Planning Ahead | Hyperparameter optimization (HPO) is generally treated as a bi-level optimization problem that involves fitting a (probabilistic) surrogate model to a set of observed hyperparameter responses, e.g. validation loss, and consequently maximizing an acquisition function using a surrogate model to identify good hyperparameter candidates for evaluation. The choice of a surrogate and/or acquisition function can be further improved via knowledge transfer across related tasks. In this paper, we propose a novel transfer learning approach, defined within the context of model-based reinforcement learning, where we represent the surrogate as an ensemble of probabilistic models that allows trajectory sampling. We further propose a new variant of model predictive control which employs a simple look-ahead strategy as a policy that optimizes a sequence of actions, representing hyperparameter candidates to expedite HPO. Our experiments on three meta-datasets comparing to state-of-the-art HPO algorithms including a model-free reinforcement learning approach show that the proposed method can outperform all baselines by exploiting a simple planning-based policy. | Reject | This paper proposes an algorithm for hyperparameters optimization that exploits a formulation as an MDP and thus makes use of a model-based reinforcement learning approach.
The formulation of HPO as an MDP, although not novel (Jomaa et al. are not the only ones to have considered this case, and the connection between the two was already known in the community), is indeed an interesting topic that could be impactful for the community. Unfortunately, the current manuscript does not provide much new insight into the topic.
After carefully reading the paper, I agree with Reviewers cKwe and 8eE4 that the current manuscript has several points of concern:
1) the formulation as a sequential decision-making problem is not fully elaborated
2) lacking comparison to look-ahead (i.e., non-myopic) HPO algorithms (there is plenty of literature on Bayesian Optimization for doing this). This also makes it difficult to understand if the performance benefits come from the look-ahead or from the MDP formulation
3) the writing is generally understandable, but some of the important design choices and details of the algorithms are not easy to find in the manuscript -- improving the clarity of the text would be very beneficial.
I encourage the authors to incorporate the feedback from the reviewer and to polish this paper into the shiny gem that it deserves to be.
Suggestions:
- The MDP formulation for HPO might actually prove very beneficial for hyperparameter control (i.e., dynamically adjusting parameters during the learning process), where there is a real transition function, rather than for hyperparameter optimization. It might be worth reading https://arxiv.org/abs/2102.13651, which attempts to do hyperparameter control in the context of MBRL.
- Adding better visuals to explain formulation and algorithm might go a long way.
- Tables 1 and 2 could be replaced by learning curves for a more intuitive way of visualizing the results. | train | [
"Rb76HnqYGgr",
"ubj_HaNhj6u",
"ByfEx-DKiK5",
"8OiGC4ecqOc",
"Um2FkgZbmu6",
"HUo49Gqog3Y",
"eRn2jAe6T5",
"dAYbos_vgZx",
"-SjcW1AwDIN",
"4QU-VS57A4H",
"z4j1idT_Ffn",
"tqdUBEiFZR9"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Considering also the comments made by reviewers yo76 and cKwe, it would be better to extend section 2 instead and make clear the distinction between your work and the related work. ",
" We thank all the reviewers for their constructive criticism of this paper. Aside from the individual rebuttals, we present her... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
5,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
4,
4
] | [
"8OiGC4ecqOc",
"iclr_2022_X2V7RW3Sul",
"4QU-VS57A4H",
"-SjcW1AwDIN",
"HUo49Gqog3Y",
"tqdUBEiFZR9",
"z4j1idT_Ffn",
"eRn2jAe6T5",
"iclr_2022_X2V7RW3Sul",
"iclr_2022_X2V7RW3Sul",
"iclr_2022_X2V7RW3Sul",
"iclr_2022_X2V7RW3Sul"
] |
iclr_2022_IR-V6-aP-mv | Batch size-invariance for policy optimization | We say an algorithm is batch size-invariant if changes to the batch size can largely be compensated for by changes to other hyperparameters. Stochastic gradient descent is well-known to have this property at small batch sizes, via the learning rate. However, some policy optimization algorithms (such as PPO) do not have this property, because of how they control the size of policy updates. In this work we show how to make these algorithms batch size-invariant. Our key insight is to decouple the proximal policy (used for controlling policy updates) from the behavior policy (used for off-policy corrections). Our experiments help explain why these algorithms work, and additionally show how they can make more efficient use of stale data. | Reject | This paper studies the method to achieve the batch size-invariant for policy gradient algorithms (PPO, PPG). The paper achieves this by decoupling the proximal policy from the behavior policy. Empirical results show that the methods are somewhat effective at providing batch size invariance.
After reading the authors' feedback, the reviewers discussed the paper and did not reach a consensus. On the one hand, the rebuttal changed the minds of some reviewers, who appreciated the explanations provided by the authors and the new figure that better highlights the batch size-invariance property.
On the other hand, some reviewers think that there is still significant work to be done to get this paper ready for publication. In particular, it is necessary to improve the theoretical analysis and the evaluation of the empirical results.
I encourage the authors to follow the reviewers' suggestions as they update their paper for a new submission. | test | [
"nFkZxmr4RSP",
"s1iAdPNt_00",
"YGxuJBRoigN",
"NyMtDZSbmLK",
"pQJg50SpRJ3",
"iWiNoqxVcIp",
"llb0NYl48rl",
"Ws-xDn44AKp",
"Xx-TQMSocQz",
"om2LbWxXWsD",
"SHGAT2k2TJA",
"UWFyBiiqIlC",
"Nzac0vb8zya"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your explanation.\n\nI believe the result can demonstrate that your method can be batch-invariant (and I never suspect it both in my initial review and my response). I appreciate the additional results in Figure 3 and Appendix F, which did improve my confidence.\n\nHowever, I believe there is still sp... | [
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
5,
1,
5
] | [
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"s1iAdPNt_00",
"YGxuJBRoigN",
"llb0NYl48rl",
"Ws-xDn44AKp",
"iclr_2022_IR-V6-aP-mv",
"iclr_2022_IR-V6-aP-mv",
"Nzac0vb8zya",
"pQJg50SpRJ3",
"UWFyBiiqIlC",
"SHGAT2k2TJA",
"iclr_2022_IR-V6-aP-mv",
"iclr_2022_IR-V6-aP-mv",
"iclr_2022_IR-V6-aP-mv"
] |
iclr_2022_9u5E8AFudRx | Help Me Explore: Minimal Social Interventions for Graph-Based Autotelic Agents | In the quest for autonomous agents learning open-ended repertoires of skills, most works take a Piagetian perspective: learning trajectories are the results of interactions between developmental agents and their physical environment. The Vygotskian perspective, on the other hand, emphasizes the centrality of the socio-cultural environment: higher cognitive functions emerge from transmissions of socio-cultural processes internalized by the agent. This paper acknowledges these two perspectives and presents GANGSTR, a hybrid agent engaging in both individual and social goal-directed exploration. In a 5-block manipulation domain, GANGSTR discovers and learns to master tens of thousands of configurations. In individual phases, the agent engages in autotelic learning; it generates, pursues and makes progress towards its own goals. To this end, it builds a graph to represent the network of discovered configuration and to navigate between them. In social phases, a simulated social partner suggests goal configurations at the frontier of the agent’s current capabilities. This paper makes two contributions: 1) a minimal social interaction protocol called Help Me Explore (HME); 2) GANGSTR, a graph-based autotelic agent. As this paper shows, coupling individual and social exploration enables the GANGSTR agent to discover and master the most complex configurations (e.g. stacks of 5 blocks) with only minimal intervention. | Reject | The paper introduces GANGSTR, an agent that performs goal-directed exploration both individually and "socially", with suggestions from a partner. It builds a graph of different configurations of a 5-block manipulation domain, and navigates this graph. The theoretical motivations for this algorithm are solid, and the direction is interesting. However, the results are less than convincing. 
In particular, as was mentioned in the discussion, it is not clear how this algorithm would generalize beyond the very simple 5-block manipulation domain. While having a simple benchmark has the advantage that you can explore it in depth, it also might obscure problems with the algorithm, unless complemented by a set of other benchmarks. It therefore seems that the paper is not ready for publication yet. | train | [
"CuOIOprmsKs",
"KULHofpxl6",
"9d4TKn7pM6",
"ZBGEmddf5ks",
"yJbxMonss7-",
"3GCaDlzaZlL",
"nazHy_hMwP",
"iUADN30jlJE",
"wjxZHJhPIU3",
"5o9Q_cJWTVx",
"ag3p0JAGR06",
"78AKhVQ_dkh",
"lL8gneIvBVZ",
"dJCDhM8Tlu"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The authors would like to thank the reviewer for taking the time to submit an additional comment. We understand the general feeling expressed by the reviewer, but stated as is, the review does not provide much guidance on how to improve our work.\n\nFirst, the reviewer questions \"actual novelty\" without pointin... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
3,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"KULHofpxl6",
"ag3p0JAGR06",
"iclr_2022_9u5E8AFudRx",
"dJCDhM8Tlu",
"dJCDhM8Tlu",
"lL8gneIvBVZ",
"78AKhVQ_dkh",
"iclr_2022_9u5E8AFudRx",
"iclr_2022_9u5E8AFudRx",
"iclr_2022_9u5E8AFudRx",
"lL8gneIvBVZ",
"iclr_2022_9u5E8AFudRx",
"iclr_2022_9u5E8AFudRx",
"iclr_2022_9u5E8AFudRx"
] |
iclr_2022_JEoDctbwCmP | Enforcing physics-based algebraic constraints for inference of PDE models on unstructured grids | Data-driven neural network models have recently shown great success in modelling and learning complex PDE systems. Several works have proposed approaches to include specific physics-based constraints to avoid unrealistic modelling outcomes. While previous works focused on specific constraints and uniform spatial grids, we propose a novel approach for enforcing general pointwise, differential and integral constraints on unstructured spatial grids. The method is based on representing a black-box PDE model's output in terms of a function approximation and enforcing constraints directly on that function. We demonstrate applicability of our approach in learning PDE-driven systems and generating spatial fields with GANs, both on free-form spatial and temporal domains, and show how both kinds of models benefit from incorporation of physics-based constraints. | Reject | The paper introduces a framework for enforcing constraints into deep NNs used for modeling spatio-temporal dynamics characterizing physical systems. The authors consider different types of constraints (pointwise, differential and integral). They start from a formulation approximating PDEs as set of ODEs (method of lines). Their main idea is to approximate the solution of the equations using an interpolant between observations and imposing the constraints on this approximation function. The interpolant is built using basis functions located at observation points. The formalism considers irregular spatial grids and both soft and hard constraints. The main claim is then the introduction of a general formalism for considering different types of constraints on irregular grids. Experiments illustrate the behavior of the proposed method on different types of evolution equations and constraints.
The reviewers agree that the proposed approach is interesting and that some of the ideas are original. However, they also consider that the paper is not convincing enough to demonstrate the interest and novelty of the approach, compared to alternative methods. The experimental section mainly considers (except for one application) regular grids and constraints that could be handled by other methods as well. The authors should present cases where their method provides a clear advantage, distinct from existing solutions. The authors provided a well-argued rebuttal, clarifying several points. However, all reviewers retained their original scores and encourage the authors to further develop the experimental analysis to present a stronger paper. In addition, the presentation could be improved, and some technical aspects better explained (e.g., description of interpolation methods, and some advice on which interpolant to choose for a given problem). | train | [
"JZFOmCA4uyQ",
"jzUFARNebcU",
"-jdrgZlw0i",
"xTAVFZleDZQ",
"QZYux8g-ny0",
"dF-37FTsekZ",
"LwZ2tZRRjPX",
"JuB_z7XqBNq",
"2gdraeowcg",
"KkK3FS3Lpcp",
"xpLYBzUdrny",
"ONyan1L_77d",
"EWBrd7tiXmj",
"lYqtJ_tZqF",
"lwUITMyUF2m",
"EAE5BXl-7BX",
"9-rItcnq9zo"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" My thanks to the authors and other reviewers for their engagement in this process.\n\nI believe the kernel of the idea presented is interesting, effective, and novel within the presented context, but I also stand by my original review, and that of the other reviewers, that the empirical demonstration of this meth... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"jzUFARNebcU",
"-jdrgZlw0i",
"ONyan1L_77d",
"KkK3FS3Lpcp",
"JuB_z7XqBNq",
"LwZ2tZRRjPX",
"2gdraeowcg",
"9-rItcnq9zo",
"EAE5BXl-7BX",
"xpLYBzUdrny",
"lwUITMyUF2m",
"EWBrd7tiXmj",
"lYqtJ_tZqF",
"iclr_2022_JEoDctbwCmP",
"iclr_2022_JEoDctbwCmP",
"iclr_2022_JEoDctbwCmP",
"iclr_2022_JEoDct... |
iclr_2022_EG5Pgd7-MY | Privacy Auditing of Machine Learning using Membership Inference Attacks | Membership inference attacks determine if a given data point is used for training a target model. Thus, this attack could be used as an auditing tool to quantify the private information that a model leaks about the individual data points in its training set. In the last five years, a variety of membership inference attacks against machine learning models are proposed, where each attack exploits a slightly different clue. Also, the attacks are designed under different implicit assumptions about the uncertainties that an attacker has to resolve. Thus attack success rates do not precisely capture the information leakage of models about their data, as they also reflect other uncertainties that the attack algorithm has (for example, about data distribution or characteristics of the target model). In this paper, we present a framework that can explain the implicit assumptions and also the simplifications made in the prior work. We also derive new attack algorithms from our framework that can achieve a high AUC score while also highlighting the different factors that affect their performance. Thus, our algorithms can be used as a tool to perform an accurate and informed estimation of privacy risk in machine learning models. We provide a thorough empirical evaluation of our attack strategies on various machine learning tasks trained on benchmark datasets. | Reject | This work provides a formal framework for discussing membership inference attacks (MIA). It then examines existing attacks and proposes some new ones. The attacks are evaluated on several datasets. The framework mostly formalizes the types of information and error types that an attack may use and is presented as the main contribution of this work. However the presented formalizations do not appear to contribute significantly beyond the existing work on MIAs. 
The new attacks may be of interest and, according to the presented experiments, (mildly) improve on some of the existing MIAs. At the same time, as presented, the discussion of the benefits of the new attacks is relatively short and reviewers did not find the results to be sufficiently convincing. Therefore I cannot recommend acceptance for this work in its current form. | train | [
"E1CpHgx62_n",
"hsIr6wVyI2T",
"RdvuWpcVyHF",
"3VgDGf895vl",
"wnI-mXQpBFK",
"JlHAEnxpm9R",
"UWACr6hso-5",
"Cve9EdKbR8y",
"j328HeLXsYy",
"4N1S_k8XRFp",
"XiU7RoPY7Lp",
"lqDQYaN0TRv",
"hmQIK_jZQR"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" > I do not think that my concern raised in Q3 has been properly addressed (e.g., formally addressed).\n\nThe concern for Q3 is addressed in the revised paper, during the construction of the LRT strategy, above equation (6). We explained the applicability of this Bayesian posterior sampling equation in (Bayesian) ... | [
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"hsIr6wVyI2T",
"4N1S_k8XRFp",
"wnI-mXQpBFK",
"iclr_2022_EG5Pgd7-MY",
"j328HeLXsYy",
"j328HeLXsYy",
"hmQIK_jZQR",
"JlHAEnxpm9R",
"3VgDGf895vl",
"XiU7RoPY7Lp",
"lqDQYaN0TRv",
"iclr_2022_EG5Pgd7-MY",
"iclr_2022_EG5Pgd7-MY"
] |
iclr_2022_tCx6AefvuPf | Node-Level Differentially Private Graph Neural Networks | Graph neural networks (GNNs) are a popular technique for modelling graph-structured data that compute node-level predictions via aggregation of information from the local neighborhood of each node. However, this aggregation implies increased risk of revealing sensitive information, as a node can participate in the inference for multiple nodes. This implies that standard privacy preserving machine learning techniques like differentially private stochastic gradient descent (DP-SGD) – which are designed for situations where each node/data point participate in inference of one point only – either do not apply or lead to inaccurate solutions. In this work, we formally define the problem of learning 1-layer GNNs with node-level privacy, and provide a method for the problem with a strong differential privacy guarantee. Even though each node can be involved in the inference for multiple nodes, by employing a careful sensitivity analysis and a non-trivial extension of the privacy-by-amplification technique, our method is able to provide accurate solutions with solid privacy parameters. Empirical evaluation on standard benchmarks demonstrates that our method is indeed able to learn accurate privacy preserving GNNs, while still outperforming standard non-private methods that completely ignore graph information. | Reject | This paper received some additional discussion between the reviewers and the area chair. The reviewers were largely unswayed by the author responses. One concern was the level of technical novelty, feeling that this was largely a straightforward adaptation of DPSGD (as, admittedly, most works in the DP ML setting are). The primary technical contribution may be the sampling amplification theorem, which one reviewer felt was also straightforward from previous work. 
Other criticisms were that the privacy parameter epsilon is rather large, and that results are restricted to 1-layer GNNs. Generally, the work did not feel very novel to reviewers from either the privacy or the GNN community. However, they felt that the paper could benefit substantially from exploration and implementation of the comments made in the responses, so the authors are encouraged to pursue those directions. Some of the many suggestions from reviewer Xcpu may help the authors make the paper appeal more to the GNN community. | train | [
"jBO1ucnicFv",
"hrkggH9wjNh",
"8YKMFXtDyUQ",
"BzmSBOKc_F",
"B8YZljJhrk",
"m1JLh97qOvS",
"yIlyIYqLv7",
"z50Gt3fir68",
"jYy8xu6-Rsu",
"uECMSqppjIC",
"b6Qjhw-OjT_",
"7S4o-o12u0P",
"Fn-gp8pj2SO"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This work studies the problem of ensuring node-level privacy when training GNNs. This privacy setting is much stronger than edge-level privacy because it considers the impact a single node can have on training, which may be connected to several other nodes, whereas edge-level privacy considers only the impact of ... | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
3
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"iclr_2022_tCx6AefvuPf",
"Fn-gp8pj2SO",
"7S4o-o12u0P",
"jBO1ucnicFv",
"b6Qjhw-OjT_",
"yIlyIYqLv7",
"Fn-gp8pj2SO",
"7S4o-o12u0P",
"jBO1ucnicFv",
"b6Qjhw-OjT_",
"iclr_2022_tCx6AefvuPf",
"iclr_2022_tCx6AefvuPf",
"iclr_2022_tCx6AefvuPf"
] |
iclr_2022_rvost-n5X4G | SPP-RL: State Planning Policy Reinforcement Learning | We introduce an algorithm for reinforcement learning, in which the actor plans for the next state provided the current state. To communicate the actor output to the environment we incorporate an inverse dynamics control model and train it using supervised learning.
We train the RL agent using off-policy state-of-the-art reinforcement learning algorithms: DDPG, TD3, and SAC. To guarantee that the target states are physically relevant, the overall learning procedure is formulated as a constrained optimization problem, solved via the classical Lagrangian optimization method. We benchmark the state planning RL approach using a varied set of continuous environments, including standard MuJoCo tasks, safety-gym level 0 environments, and AntPush. In the SPP approach, the optimal policy is searched for in the space of state-state mappings, a considerably larger space than the traditional space of state-action mappings. We report that, quite surprisingly, SPP implementations attain superior performance to vanilla state-of-the-art off-policy RL algorithms in the tested environments. | Reject | The paper presents an RL planning algorithm where a policy selects a reachable state. The empirical evaluation shows promising results in some environments. While all the reviewers agreed that state planning RL is a relevant and promising direction, the reviewers expressed concerns with the rigor, significance of the results, and incremental novelty.
To improve the paper, the authors should:
- Bring the theoretical foundation in the main text, and add more rigorous analysis, including the limitations of the method.
- The readability of the figures needs to be improved. The legends on the figures are too small and the colors are too similar, rendering the figures unreadable and confusing.
- If the authors' goal is to develop a method for interpretable RL, then some results and analysis need to address the interpretability of the method. | test | [
"SI7CnkXOb3B",
"GP2NaPL2Klw",
"_AYmIsSv1N8",
"iC6Wb-fBLZM",
"rYmCws09fDY",
"a7s1t0dN3t4",
"vt1zOXhECao",
"JZg4TesDaZ",
"P9ZgdR3--lr",
"nP3aX0c4rKb"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I have read all the reviews and the authors response, and I still think this paper is worth publishing. In particular, it opens an interesting discussion about the possible advantages between state-state and state-action policies (although they are not properly addressed in the paper).",
" Let us take this oppo... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"rYmCws09fDY",
"_AYmIsSv1N8",
"a7s1t0dN3t4",
"JZg4TesDaZ",
"nP3aX0c4rKb",
"P9ZgdR3--lr",
"P9ZgdR3--lr",
"iclr_2022_rvost-n5X4G",
"iclr_2022_rvost-n5X4G",
"iclr_2022_rvost-n5X4G"
] |
iclr_2022_WDBo7y8lcJm | Teacher's pet: understanding and mitigating biases in distillation | Knowledge distillation is widely used as a means of improving the performance of a relatively simple “student” model using the predictions from a complex “teacher” model. Several works have shown that distillation significantly boosts the student’s overall performance; however, are these gains uniform across all data sub-groups? In this paper, we show that distillation can harm performance on certain subgroups, e.g., classes with few associated samples, compared to the vanilla student trained using the one-hot labels. We trace this behavior to errors made by the teacher distribution being transferred to and amplified by the student model. To mitigate this problem, we present techniques which soften the teacher influence for subgroups where it is less reliable. Experiments on several image classification benchmarks show that these modifications of distillation maintain boost in overall accuracy, while additionally ensuring improvement in subgroup performance. | Reject | This paper studies knowledge distillation and explores why distillation gains are not uniform. Reviewers consistently find this paper an interesting read, but had common concerns on generalizability and limited improvements/contributions.
In general, reviewers mostly gave scores below the acceptance threshold, or otherwise expressed concerns. Summing these up, we conclude this paper is of interest to the ICLR audience, but its current form is not yet ready for acceptance.
Summary Of Reasons To Publish:
interesting analysis of the causes of non-uniform gains in distillation
Summary Of Suggested Revisions:
(1) the improvements are marginal and (2) the contribution of AdaMargin is limited, (3) generalizability to other KDs | train | [
"5J-GoPR6Q__",
"IYpJ6uYV_N",
"9RP5eXHOvIS",
"Zi-De0QpWE",
"FfKyRJsAwb0",
"bE8KMaAnZx",
"PkJVsR0vBQ7",
"bAVjZbrN-6Z",
"p-MCCX7q8eD",
"t2OsGUaL9WA",
"by4ZkepEc-a",
"iP8aWwIN-w6",
"jSLocArpyi",
"BvD7s37pht"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We agree that studying multiple KD methods would be interesting. However, space is finite, and even for a single KD method, there are many things to study: does it harm under cross-architecture, small # of classes, label imbalance, ... Adding another KD method means adding yet another dimension to the study. We t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5,
4
] | [
"IYpJ6uYV_N",
"9RP5eXHOvIS",
"Zi-De0QpWE",
"bAVjZbrN-6Z",
"p-MCCX7q8eD",
"PkJVsR0vBQ7",
"BvD7s37pht",
"jSLocArpyi",
"iP8aWwIN-w6",
"by4ZkepEc-a",
"iclr_2022_WDBo7y8lcJm",
"iclr_2022_WDBo7y8lcJm",
"iclr_2022_WDBo7y8lcJm",
"iclr_2022_WDBo7y8lcJm"
] |
iclr_2022_vr39r4Rjt3z | Designing Less Forgetful Networks for Continual Learning | Neural networks usually excel in learning a single task. Their weights are plastic and help them to learn quickly, but these weights are also known to be unstable. Hence, they may experience catastrophic forgetting and lose the ability to solve past tasks when assimilating information to solve a new task. Existing methods have mostly attempted to address this problem through external constraints. Replay shows the backbone network externally stored memories; regularisation imposes additional learning objectives; and dynamic architecture often introduces more parameters to host new knowledge. In contrast, we look for internal means to create less forgetful networks. This paper demonstrates that two simple architectural modifications -- Masked Highway Connection and Layer-Wise Normalisation -- can drastically reduce the forgetfulness in a backbone network. When naively employed to sequentially learn over multiple tasks, our modified backbones were as competitive as those unmodified backbones with continual learning techniques applied. Furthermore, our proposed architectural modifications were compatible with most if not all continual learning archetypes and therefore helped those respective techniques in achieving new state of the art. | Reject | Unfortunately, I feel the paper is not quite ready for ICLR, even if the reviews seem in general quite positive (though of low confidence).
After reading the reviews and rebuttal, and going over the paper I have to make the following comments:
* The paper makes two modifications to the backbone architecture that have an impact on the ability of these systems to continually learn; these changes are adding layer normalization and a mask
* The paper is mostly empirical in nature; while there are some intuitions presented clearly about these ideas, their efficiency is proved empirically, which is completely fine
However:
* The empirical validation seems not sufficient; the main results are permuted MNIST, incremental CIFAR 10, incremental CUB200; the results on permuted MNIST in terms of final accuracy seem surprisingly low (particularly when involving CL solutions, like EWC, ER, HAT .. see table 2; e.g. FA1 < 80% seems very surprising). This seems strange to me and adds a bit of shade on the results
* The proposed methods are simple; There is a strong message behind them, namely that the choice of the backbone (architecture size, normalization layers) has a huge impact on learning. But being a purely empirical result, this really needs to be backed up with analysis and an attempt at understanding of what is going on. E.g. looking at the masks over time .. do they converge to be task specific? Anything that would give a bit of depth to the results. Discussing the Figures (e.g. I'm looking at Fig 3 and I was grep-ing the text to see a discussion of how one would interpret those results). Why is FashionMNIST used to produce Fig 3, and why is not something like this done for one of the CL benchmarks considered. Providing additional typical measures for CL (e.g. showing learning curves).
* Just overall does not seem that the work provides sufficient insight, or analysis.
I do think there is something really interesting in this work, and I do hope the authors will resubmit this work after some modification. And I do agree that there are many aspects of the backbone or architecture that have big impacts on CL and this is an understudied and not well understood topic. So in that sense I think the idea of this work is good. But I just feel it falls short in terms of results and analysis. I feel that in its current format, the work will not have the impact it deserves.
"uom2dnu-SvA",
"SuFhTgCoLrb",
"UOeU0cNpuMF",
"p8Wpy6Hy3tz",
"TVYjorZSWyw",
"hWEXIprMKRt",
"28Dbck0HVYN",
"jyK1tiM9pFM",
"0a9MG3tLNur",
"e1gCtnVnjj5",
"w3IeJDgb0bH",
"gGZ3gQVHX3",
"EW73UuWSqvU",
"MCItTa782gw",
"w_hikljPmKx",
"uNb8j_cbJeT",
"-TjQZBdAlpz",
"YgRUmTJt5w3",
"ze7SySDLmy... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official... | [
" The authors clarified the relationship between vanishing gradient and catastrophic forgetting in their equations. I think it's a good paper and raise my score.",
"The paper proposes modifications to the highway connection network (HCN) architecture, which exhibits some inherent resistance to catastrophic forge... | [
-1,
8,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
8
] | [
-1,
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
3
] | [
"hWEXIprMKRt",
"iclr_2022_vr39r4Rjt3z",
"gGZ3gQVHX3",
"iclr_2022_vr39r4Rjt3z",
"EW73UuWSqvU",
"SuFhTgCoLrb",
"SuFhTgCoLrb",
"SuFhTgCoLrb",
"p8Wpy6Hy3tz",
"p8Wpy6Hy3tz",
"p8Wpy6Hy3tz",
"p8Wpy6Hy3tz",
"Wv3nHOvOzKy",
"TXvYNcyZEHW",
"07fRQmLHJ9Q",
"07fRQmLHJ9Q",
"07fRQmLHJ9Q",
"07fRQmL... |
iclr_2022_-9uy3c7b_ks | Learning Controllable Elements Oriented Representations for Reinforcement Learning | Deep Reinforcement Learning (deep RL) has been successfully applied to solve various decision-making problems in recent years. However, the observations in many real-world tasks are often high dimensional and include much task-irrelevant information, limiting the applications of RL algorithms. To tackle this problem, we propose LCER, a representation learning method that aims to provide RL algorithms with compact and sufficient descriptions of the original observations. Specifically, LCER trains representations to retain the controllable elements of the environment, which can reflect the action-related environment dynamics and thus are likely to be task-relevant. We demonstrate the strength of LCER on the DMControl Suite, proving that it can achieve state-of-the-art performance. To the best of our knowledge, LCER is the first representation learning algorithm that enables the pixel-based SAC to outperform state-based SAC on the DMControl 100K benchmark, showing that the obtained representations can match the oracle descriptions ($i.e.$ the physical states) of the environment. | Reject | This paper introduces an objective for representation learning that captures "controllable elements" in the environment (i.e., things that are affected by the agent's actions). In their reviews and discussion, the reviewers agreed this idea was intuitive, well-motivated, and the paper well written. However, multiple reviewers raised concerns about the evaluation and the extent to which LCER is truly an improvement over PI-SAC. Although many of the reviewer's concerns were addressed in the rebuttal period, at the end of the discussion they were still unconvinced or confused about how much LCER really helps over PI-SAC. 
Based on this, my assessment is that this paper is a promising piece of work, and that with some more controlled comparisons (see suggestion below) it would be a useful contribution to the literature. However, given that the claims are not fully supported as it currently stands, I recommend rejection.
Specific suggestion to improve the paper: based on reading the paper and the discussion, it seems to me (as per the authors' own statement in response to Reviewer uWv6) that the most valid/controlled comparison between LCER and PI-SAC is in Figure 4, where LCER w/ $\beta=0.1$ "can be seen as a variant of PI-SAC with the same embedding choices as LCER" (author's words). However, when taking into account the error bars of the training curves, other values of $\beta$ are only clearly better than $\beta=0.1$ in 1/3 environments (D.walker-walk). This does not make for a particularly convincing result that LCER is better than PI-SAC. To improve the paper, I'd encourage the authors to run further well-controlled comparisons such as this in a larger number of environments. If they can show via such controlled comparisons that LCER is generally better than PI-SAC (i.e. LCER w/ $\beta=0.1$) then that would be a much more compelling demonstration of LCER's superiority. | train | [
"6sD0179z2NM",
"DEv1NbaM6av",
"-AvOE-yX-RU",
"7LZxqH_6Hmy",
"Lj73UvC5xAr",
"dfKcPUebqBB",
"qKI4ab0uBMF",
"lLGcQf_x3zJ",
"qeOKToHXOfK",
"qlbrsoBqqQV",
"i78IxfGz0e3",
"EfkXkz_C08d",
"KOuMH4SWCA",
"CGkod7dFIz4",
"naO6eSGOW1b",
"xtZ9z7kyl0V"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your response!\n\nAlthough the calculations of $\\psi_{t+k}, k=1,2..$ are sequential (and thus cannot be parallelized), they are actually carried out in a small latent space (about 50-dim). Therefore, they are not as time-consuming as the forward passes of the encoder (i.e. the calculation of $\\phi(S_... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
4
] | [
"DEv1NbaM6av",
"qlbrsoBqqQV",
"7LZxqH_6Hmy",
"lLGcQf_x3zJ",
"iclr_2022_-9uy3c7b_ks",
"qKI4ab0uBMF",
"KOuMH4SWCA",
"xtZ9z7kyl0V",
"qlbrsoBqqQV",
"naO6eSGOW1b",
"CGkod7dFIz4",
"naO6eSGOW1b",
"iclr_2022_-9uy3c7b_ks",
"iclr_2022_-9uy3c7b_ks",
"iclr_2022_-9uy3c7b_ks",
"iclr_2022_-9uy3c7b_ks... |
iclr_2022_u7UxOTefG2 | Uncertainty-based out-of-distribution detection requires suitable function space priors | The need to avoid confident predictions on unfamiliar data has sparked interest in out-of-distribution (OOD) detection. It is widely assumed that Bayesian neural networks (BNNs) are well suited for this task, as the endowed epistemic uncertainty should lead to disagreement in predictions on outliers. In this paper, we question this assumption and show that proper Bayesian inference with function space priors induced by neural networks does not necessarily lead to good OOD detection. To circumvent the use of approximate inference, we start by studying the infinite-width case, where Bayesian inference can be exact due to the correspondence with Gaussian processes. Strikingly, the kernels induced under common architectural choices lead to uncertainties that do not reflect the underlying data generating process and are therefore unsuited for OOD detection. Importantly, we find this OOD behavior to be consistent with the corresponding finite-width networks. Desirable function space properties can be encoded in the prior in weight space, however, this currently only applies to a specified subset of the domain and thus does not inherently extend to OOD data. Finally, we argue that a trade-off between generalization and OOD capabilities might render the application of BNNs for OOD detection undesirable in practice. Overall, our study discloses fundamental problems when naively using BNNs for OOD detection and opens interesting avenues for future research. | Reject | The authors question the assumption that the epistemic uncertainty provided by Bayesian neural networks should be useful for out of distribution detection. They start their analysis in the infinite width limit so as to be able to understand how the induced kernels in a Gaussian process behave. The paper also discusses the potential tradeoffs between generalization and detection. 
Overall, the paper presents some facts that, while not surprising, (Reviewer fGuy), are helpful in questioning the default assumption. Overall, though, the combination of the lack of surprise with the multi-part, somewhat loosely connected message reduces the quality of the submission. | train | [
"ryM9GZq3NOt",
"rC-nDwsfAdb",
"H5EC909-mTI",
"xY8YJQqF_Xu",
"z_dkzTKIR0n",
"CBhTxsP-zfm",
"a2Rg_dCl7-",
"8b1Xh34caF",
"RDkhz1i7QkQ",
"-RvYenxKY7Z",
"n8Octggu8wW",
"jyTS4bLW-7H",
"0Kr0hBDISZv",
"krvYjTD_bUA",
"3Q0B10GpMeN"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" **More theoretical analysis:** The reviewer's response made us reflect on how to more convincingly convey our arguments using theoretical insights. We currently study the infinite-width limit to make conjectures about the finite-width limit due to intractability. However, there is the known special case of linea... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
2,
4
] | [
"z_dkzTKIR0n",
"H5EC909-mTI",
"ryM9GZq3NOt",
"CBhTxsP-zfm",
"a2Rg_dCl7-",
"RDkhz1i7QkQ",
"jyTS4bLW-7H",
"iclr_2022_u7UxOTefG2",
"n8Octggu8wW",
"krvYjTD_bUA",
"8b1Xh34caF",
"3Q0B10GpMeN",
"iclr_2022_u7UxOTefG2",
"iclr_2022_u7UxOTefG2",
"iclr_2022_u7UxOTefG2"
] |
iclr_2022_JmU7lyDxTpc | Multi-scale Feature Learning Dynamics: Insights for Double Descent | A key challenge in building theoretical foundations for deep learning is the complex optimization dynamics of neural networks, resulting from the high-dimensional interactions between the large number of network parameters. Such non-trivial interactions lead to intriguing model behaviors such as the phenomenon of "double descent" of the generalization error. The more commonly studied aspect of this phenomenon corresponds to model-wise double descent where the test error exhibits a second descent with increasing model complexity, beyond the classical U-shaped error curve. In this work, we investigate the origins of the less studied epoch-wise double descent in which the test error undergoes two non-monotonous transitions, or descents as the training time increases. We study a linear teacher-student setup exhibiting epoch-wise double descent similar to that in deep neural networks. In this setting, we derive closed-form analytical expressions for the evolution of generalization error over training. We find that double descent can be attributed to distinct features being learned at different scales: as fast-learning features overfit, slower-learning features start to fit, resulting in a second descent in test error. We validate our findings through numerical experiments where our theory accurately predicts empirical findings and remains consistent with observations in deep neural networks. | Reject | This paper examines the time-dependent generalization behavior of high-dimensional student-teacher linear regression models. It introduces a simple two-scale covariance model and examines the exact solutions for the dynamics, finding a tradeoff between the fast- and slow-learning features, leading to epoch-wise double descent. Qualitative comparisons are made with the SGD dynamics of ResNe18 on CIFAR-10.
The reviewers offer split opinions on this work, with most reviewers finding strength in exhibiting the complex behavior of epoch-wise double descent in a simple and analytically-tractable setting. Weaknesses highlighted in the discussion include clarity, discussion of prior work, and rigor of the analyses.
I believe a clear demonstration and analytical explanation for epoch-wise double-descent would certainly be of interest to the ICLR community, and I concur with the reviewers who emphasize these strengths of the paper. However, as one reviewer mentioned, this paper is primarily a theoretical work, and as such, the main theoretical advancements over prior work should be clear, and the novel results should be sufficiently rigorous. In this regard, the paper is lacking, as detailed below.
First of all, the discussion of SGD is imprecise, with no explicit definition of the optimization method that is actually being performed. What is the batch size? How is the sampling performed? What is the learning rate/schedule? The formulas in Secs. 2.1-2.2 suggest that full-batch gradient descent is being performed. In Sec. 2.3, stochasticity from SGD is induced via a Gibbs distribution. However, contrary to the discussion, I don't think that this is a "well-known" **result** (though of course it is a well-known **model**), and in high-dimensions I am not sure it is even correct (see e.g. [1]).
Second of all, even assuming the Gibbs distribution, the substitution on line (23) is only justified in words, whereas the cited results from Ali et al., 2020, only provide a bound. What is meant by "$\approx$"? Some discussion is given about this step of the derivation, but more precise statements would really help make the argument convincing.
Finally, the derivations seem to rely on the replica method from statistical physics, which is not rigorous. While I am generally supportive of such methods for technically challenging problems that do not readily admit alternative analyses, given the simplicity of the linear model setup here, I believe a more rigorous approach would not be prohibitively difficult. At the very least, some acknowledgement should be given about the lack of rigor in the derivation.
Overall, this paper presents a simple and analytically tractable model that sheds light on the important phenomenon of epoch-wise double descent. Unfortunately, the presentation is not sufficiently clear and the derivations not sufficiently rigorous to merit publication at this time.
[1] Paquette, Courtney et al. “SGD in the Large: Average-case Analysis, Asymptotics, and Stepsize Criticality.” COLT (2021). | train | [
"xzP8vG4x_Gq",
"RN1AQ8TjHaR",
"VEUVe_hRtI",
"yUM1xqK3znN",
"S2Fbg3MTXRN",
"Lq-YAxyjl8X",
"Q4aUSuzPm6v",
"m56-iMH90o",
"c4CPdt6WLov",
"iTpZvfrPOV",
"jOjSsixKh6j",
"1FND8aDfwA7",
"I939fR65Pdt",
"zThpMQ1w39"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Denny,\n\nThank you for your interest and sorry for the late reply!\\\nGradient descent dynamics admit an exact solution as in Eqs. 9, 10, however, as you correctly pointed out, it is not convenient as it requires computation of the eigenvalues of $X^TX$.\\\nAs a result, we also study the special case in Sec... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"jOjSsixKh6j",
"iclr_2022_JmU7lyDxTpc",
"iclr_2022_JmU7lyDxTpc",
"S2Fbg3MTXRN",
"Lq-YAxyjl8X",
"Q4aUSuzPm6v",
"zThpMQ1w39",
"I939fR65Pdt",
"iTpZvfrPOV",
"1FND8aDfwA7",
"iclr_2022_JmU7lyDxTpc",
"iclr_2022_JmU7lyDxTpc",
"iclr_2022_JmU7lyDxTpc",
"iclr_2022_JmU7lyDxTpc"
] |
iclr_2022_TXqemS7XEH | M6-10T: A Sharing-Delinking Paradigm for Efficient Multi-Trillion Parameter Pretraining | Recent expeditious developments in deep learning algorithms, distributed training, and even hardware design for large models have enabled training extreme-scale models, say GPT-3 and Switch Transformer possessing hundreds of billions or even trillions of parameters. However, under limited resources, extreme-scale model training that requires enormous amounts of computes and memory footprint suffers from frustratingly low efficiency in model convergence. In this paper, we propose a simple training strategy called “Pseudo-to-Real” for high-memory-footprint-required large models. Pseudo-to-Real is compatible with large models with architecture of sequential layers. We demonstrate a practice of pretraining unprecedented 10-trillion-parameter model, an order of magnitude larger than the state-of-the-art, on solely 512 GPUs within 10 days. Besides demonstrating the application of Pseudo-to-Real, we also provide a technique, Granular CPU offloading, to manage CPU memory for training large model and maintain high GPU utilities. Fast training of extreme-scale models on a decent amount of resources can bring much smaller carbon footprint and contribute to greener AI. | Reject | This paper proposed a weight sharing method to speed up the pretraining of large language models. Basically, during the training, it first share weights across all the layers with the same architecture, and then untie the shared weights at some point later. The main advantage of weight sharing is that it can reduce the memory load. Our reviewers have many concerns on this work. The method is not well motivated or explained, and many experimental details are missing. In particular, there is no downstream task result presented for the so-called 10T parameter model. The claim highlighted in the title remains unsupported. 
In addition, one of our reviewers pointed out that the proposed method is fairly similar to the method in a previous ICLR submission: https://openreview.net/forum?id=jz7tDvX6XYR. | train | [
"eu_YrWTnfma",
"5-T4oGAbYu",
"T-9aYDPa2qU",
"ZJuwxcsFLb9",
"TKpcfwAbwj_",
"AKVMp0-eZ6j",
"8owfgJA9naR",
"BM-CWmNh_Ox",
"0Xwo6-HxA__",
"TzmfrPl9P6"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your helpful review! Below we address your concerns:\n1. About the downstream performance of M6-10T: Due to the limitation of computation resources and time budget, currently we could not provide downstream results and thus we evaluated the model by log perplexity. According to the experience of pre... | [
-1,
-1,
-1,
-1,
-1,
3,
5,
3,
3,
6
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
5,
4
] | [
"0Xwo6-HxA__",
"BM-CWmNh_Ox",
"8owfgJA9naR",
"TzmfrPl9P6",
"AKVMp0-eZ6j",
"iclr_2022_TXqemS7XEH",
"iclr_2022_TXqemS7XEH",
"iclr_2022_TXqemS7XEH",
"iclr_2022_TXqemS7XEH",
"iclr_2022_TXqemS7XEH"
] |
iclr_2022_AFH3FnBksHT | Model Fusion of Heterogeneous Neural Networks via Cross-Layer Alignment | Layer-wise model fusion via optimal transport, named OTFusion, applies soft neuron association for unifying different pre-trained networks to save computational resources. While enjoying its success, OTFusion requires the input networks to have the same number of layers. To address this issue, we propose a novel model fusion framework, named CLAFusion, to fuse neural networks with a different number of layers, which we refer to as heterogeneous neural networks, via cross-layer alignment. The cross-layer alignment problem, which is an unbalanced assignment problem, can be solved efficiently using dynamic programming. Based on the cross-layer alignment, our framework balances the number of layers of neural networks before applying layer-wise model fusion. Our synthetic experiments indicate that the fused network from CLAFusion achieves a more favorable performance compared to the individual networks trained on heterogeneous data without the need for any retraining. With an extra finetuning process, it improves the accuracy of residual networks on the CIFAR10 dataset. Finally, we explore its application for model compression and knowledge distillation when applying to the teacher-student setting. | Reject | While fusing multiple heterogeneous neural networks into a single network looks like an interesting exploration, there are many major concerns raised by the reviewers:
1) The motivation why the proposed method works is not convincing. In other words, under what conditions the proposed would work or would not work is not clear.
2) The authors failed to provide either theoretical analysis or convincing empirical studies of the proposed method. In the rebuttal, the authors did not address the critical issues raised by the reviewers.
3) There are many other detailed problems about the proposed method as well as the experimental setup.
Therefore, by considering the above concerns, this submission does not meet the standard of publication at ICLR. | test | [
"bz6qh-OiLyy",
"d-8UWhZ8maB",
"438xDCffGZI",
"Mvy9tf2IO3",
"JzcY9BlQSR5",
"Rc7ODGhHb6A",
"dl7nAe-cyrd",
"RJccPWqmq3",
"S-pn7seLB6w",
"Y6bk12A0GPK",
"PVya3-GN19j",
"meXKqLVtfrE"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" **Q1**: First, I am thinking of the practical value of this algorithm, as well as the OTFusion algorithm. If the cost of training multiple to-be-fused networks is acceptable, the baseline that simply uses all these costs to train the target network should be compared. For example, if training a ResNet34 needs 10 ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
5,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
5,
4
] | [
"meXKqLVtfrE",
"Y6bk12A0GPK",
"Y6bk12A0GPK",
"PVya3-GN19j",
"PVya3-GN19j",
"PVya3-GN19j",
"S-pn7seLB6w",
"S-pn7seLB6w",
"iclr_2022_AFH3FnBksHT",
"iclr_2022_AFH3FnBksHT",
"iclr_2022_AFH3FnBksHT",
"iclr_2022_AFH3FnBksHT"
] |
iclr_2022_zFyCvjXof60 | Hypergraph Convolutional Networks via Equivalency between Hypergraphs and Undirected Graphs | As a powerful tool for modeling the complex relationships, hypergraphs are gaining popularity from the graph learning community. However, commonly used algorithms in deep hypergraph learning were not specifically designed for hypergraphs with edge-dependent vertex weights (EDVWs). To fill this gap, we build the equivalency condition between EDVW-hypergraphs and undirected simple graphs, which enables utilizing existing undirected graph neural networks as subroutines to learn high-order interactions induced by EDVWs of hypergraphs. Specifically, we define a generalized hypergraph with vertex weights by proposing a unified random walk framework, under which we present the equivalency condition between generalized hypergraphs and undigraphs. Guided by the equivalency results, we propose a Generalized Hypergraph Convolutional Network (GHCN) architecture for deep hypergraph learning. Furthermore, to improve the long-range interactions and alleviate the over-smoothing issue, we further propose the Simple Hypergraph Spectral Convolution (SHSC) model by constructing the Discounted Markov Diffusion Kernel from our random walk framework. Extensive experiments from various domains including social network analysis, visual objective classification, and protein fold classification demonstrate that the proposed approaches outperform state-of-the-art spectral methods with a large margin. | Reject | Standard algorithms for deep hypergraph learning have not been designed for hypergraphs with edge-dependent vertex weights (EDVWs), where the weight of a vertex can depend on the edge of which it isa member. This paper develops a connection between EDVW-hypergraphs and undirected simple graphs, thus enabling the use of existing undirected-graph neural networks as subroutines. This is done via a unified random-walk framework.
(Two typos: ``equivalency" should be ``equivalence", and ``undigraphs" should be ``undirected graphs".)
The theory of equivalence between EDVW-hypergraphs and undirected graphs via random walks is a good contribution. The experimentation across different domains is laudable.
However, there are concerns over the lack of key baselines in the experiments. The author rebuttal has presented additional results with some baselines: sensitive hyperparameters (e.g., learning rate) are not tuned for the baselines. The clarity of the paper is mixed. The map from hypergraphs to graphs is not injective, so there could be ambiguity issues (different hypergraphs mapped to the same graph, thus having the same representations).
Also, the contributions of Section 3 (designed for simple undirected graphs alone) do not appear significantly novel. | train | [
"O_FbxbQtyGF",
"MgCTB8wDWaF",
"AGnLYrSSV_l",
"y-g7eLE8RHS",
"M9-TJJy1AlC",
"AK6dqP_Wvx",
"jtWlPoTnhGP",
"gFo4STdsB0W",
"RZrZ0Fl3J-",
"hdTOIaHg7wn",
"e5Clw02YxSx",
"sYQaHmH5dXm",
"ym34fE8bNbI",
"kR_DHUZFTfb",
"eEfBiAaSFT",
"1c2wmy7NWH",
"JUQ0pNeJspi",
"wBzqdfagSLz",
"_glI7cZIML",
... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
... | [
" Thanks for your attention again!\n\nHowever, the statement \"GCN on clique expanded graph when the task belongs to EIVW-hypergraph learning tasks\" is confusing. We would like to discuss under the literal understanding:\n\n1. The \"expanded clique graph\" means expanding each hyperedge to a clique **without cons... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
5
] | [
"MgCTB8wDWaF",
"AGnLYrSSV_l",
"y-g7eLE8RHS",
"M9-TJJy1AlC",
"AK6dqP_Wvx",
"gFo4STdsB0W",
"hdTOIaHg7wn",
"RZrZ0Fl3J-",
"e5Clw02YxSx",
"Kc-Sr8SPTIC",
"sYQaHmH5dXm",
"wBzqdfagSLz",
"kR_DHUZFTfb",
"1c2wmy7NWH",
"JUQ0pNeJspi",
"eEfBiAaSFT",
"FMo020EonVA",
"_glI7cZIML",
"4VBRStplgB7",
... |
iclr_2022_2RYOwBOFesi | An Empirical Study of Pre-trained Models on Out-of-distribution Generalization | Generalizing to out-of-distribution (OOD) data -- that is, data from domains unseen during training -- is a key challenge in modern machine learning, which has only recently received much attention. Some existing approaches propose leveraging larger models and pre-training on larger datasets. In this paper, we provide new insights in applying these approaches. Concretely, we show that larger models and larger datasets need to be simultaneously leveraged to improve OOD performance. Moreover, we show that using smaller learning rates during fine-tuning is critical to achieving good results, contrary to popular intuition that larger learning rates generalize better when training from scratch. We show that strategies that improve in-distribution accuracy may, counter-intuitively, lead to poor OOD performance despite strong in-distribution performance. Our insights culminate to a method that achieves state-of-the-art results on a number of OOD generalization benchmark tasks, often by a significant margin. | Reject | The paper presented an empirical study of pre-trained models on the Out-of-distribution Generalization problem.
The authors evaluated various factors (such as model sizes, datasets, learning rate, etc) and claim some major findings: 1) larger models have better OOD generalization, and combining both larger models and larger datasets is critical; 2) a smaller learning rate during fine-tuning is critical; 3) strategies improving in-distribution accuracy may hurt OOD. Overall, this paper is a well-written empirical study with some useful insights, but the new findings from the empirical studies are generally not surprising and the overall contribution is not significant enough for acceptance.
"_3dJ3wceLX0",
"E9eGzRxwh6Z",
"SDTFtJkIds5",
"Er9EDrSEiVQ",
"UBeUmDGlLd9",
"5UQbGM5j5I",
"IpcbB_t6-eB",
"x_yiJhdaftO",
"1Z7SjiDe8qe",
"wRzNxRVMxZf",
"gDdB7nNEdN7",
"l_i3bpY1iN",
"m_syxD6G3u9",
"WQt0cxkNl8W"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We genuinely thank you for the constructive suggestions to make our work better! We will include more empirical analysis based on other reviewers' feedback and update those results in our camera-ready version.",
"This paper provides a thorough and relatively in-depth analysis of how pre-trained models affect OO... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
3
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
5
] | [
"SDTFtJkIds5",
"iclr_2022_2RYOwBOFesi",
"IpcbB_t6-eB",
"UBeUmDGlLd9",
"wRzNxRVMxZf",
"iclr_2022_2RYOwBOFesi",
"E9eGzRxwh6Z",
"WQt0cxkNl8W",
"x_yiJhdaftO",
"m_syxD6G3u9",
"l_i3bpY1iN",
"iclr_2022_2RYOwBOFesi",
"iclr_2022_2RYOwBOFesi",
"iclr_2022_2RYOwBOFesi"
] |
iclr_2022_9vsRT9mc7U | Generative Adversarial Training for Neural Combinatorial Optimization Models | Recent studies show that deep neural networks can be trained to learn good heuristics for various Combinatorial Optimization Problems (COPs). However, it remains a great challenge for the trained deep optimization models to generalize to distributions different from the training one. To address this issue, we propose a general framework, Generative Adversarial Neural Combinatorial Optimization (GANCO) which is equipped with another deep model to generate training instances for the optimization model, so as to improve its generalization ability. The two models are trained alternatively in an adversarial way, where the generation model is trained by reinforcement learning to find instance distributions hard for the optimization model. We apply the GANCO framework to two recent deep combinatorial optimization models, i.e., Attention Model (AM) and Policy Optimization with Multiple Optima (POMO). Extensive experiments on various problems such as Traveling Salesman Problem, Capacitated Vehicle Routing Problem, and 0-1 Knapsack Problem show that GANCO can significantly improve the generalization ability of optimization models on various instance distributions, with little sacrifice of performance on the original training distribution. | Reject | ## A Brief Summary
Recent works in deep learning have shown that it is possible to solve combinatorial optimization problems (COP) with neural networks. However, generalization beyond the examples seen in the training set is still challenging, e.g., generalizing to TSP with more cities than the ones seen in the training set. This paper proposes the GANCO approach, where a separate generative neural network based on GAN generates new hard-to-solve training instances for the optimizer. The optimizer and the generative network are trained in an alternating fashion. The authors have run experiments with the attention model (AM) and POMO with their GAN-based data augmentation approach. The authors provide experimental results on several well-known COPs, such as the traveling salesman problem.
## Reviewers' Feedback
Below, I will summarize some reviewers' feedback and would like the authors to address the cons noted below.
### Reviewer sEuD
*Pros:*
- Paper is well-written.
- Task is important and well-motivated.
- Good experimental results.
*Cons:*
- The paper's core contribution on the necessity of adversarial entities is not well-motivated.
- Missing baselines:
- RL agent trained on all target distributions. To figure out how far GANCO is from the optimal policy.
- The performance of an agent trained on a curriculum.
- Figure 2 is unnecessary/redundant in the paper.
### Reviewer tjCH
*Cons:*
- The paper is reasonably written. However, it would be much easier to follow with a few changes: for example, section 3.1 could explain the architecture more clearly, and the related work section could give a more comprehensive overview of methods for improving the robustness of RL methods.
- It is widely known that data augmentation helps in deep learning. The paper's claims would be more convincing if it provided some crucial baselines, such as comparing different data augmentation methods and carefully ablating them.
### Reviewer N945
*Pros:*
- Well-written
- Good evaluation
- Simple model with good results
*Cons:*
- Missing citation to the PAIRED paper.
- How important are the adversarial entities generated? Is it possible to achieve similar results by just training on more samples?
- Missing baselines: Instead of training in stages, alternate optimizer and generator network per step basis.
### Reviewer mumN
*Pros:*
- The proposed approach is novel.
- Comprehensive and extensive experiments.
- Figure 1 provides a good summary of the approach.
*Cons:*
- The motivation for GANCO is not very convincing.
- Concerns about the capacity of the neural nets used in the paper.
- Concerns on forgetting the original distribution.
- Concerns about experimental evaluation protocol.
- Including experiments on routing problems to show the generality of the proposed approach.
- Request for improvements in the writing and the formatting of the paper.
## Key Takeaways and Thoughts
I think this paper attacks an interesting problem. As far as I am aware, the approach is novel. However, generative adversarial networks have been used in the machine learning literature for data augmentation and in RL for augmenting the environment (see the PAIRED paper). GAN-type approaches have not been used to improve the generalization of deep learning approaches for COP. The results look promising. However, as pointed out by Reviewers mumN and tjCH, this paper would benefit from further ablations, particularly on the necessity of the adversarial generation part, to make the arguments more convincing. As it stands now, it is not clear where exactly the improvements are coming from. Reviewer mumN also raised some concerns about the poorly configured LKH3 baseline in the discussion period. Furthermore, I agree with Reviewers mumN and tjCH that this paper would benefit from restructuring to make it flow better. I do think that this paper needs another round of reviews. I would recommend the authors go over the feedback provided here and address it in a future submission. | test | [
"vwFCuE1IXJ3",
"5x-kIOonUi2",
"hK-P92fVKY",
"9Ieqp_9SOd7",
"QfFQ625jRf7",
"yEpCCDhyZc",
"YIcbgNxDvBF",
"EUozBx-4FP",
"ObuQlRpzzbB",
"1NkJ1cejYzA",
"nz79BT7Cbx-",
"MlbHb-yhiH",
"a2vy7NMXKkv",
"2SN8EcUzp03",
"IAU3TKOjCJh",
"jsEGfSjIyby",
"SB8HALcZva7",
"9yWy_SPf-5Z",
"abu3aVwNIUx"
... | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_re... | [
" While I agree with the concern raised that comparing a fully GPU batch vs batchsize 1 on CPU, I think the proper comparison would be to observe the scaling behaviour on some representative compute node (say amazon P3 and C5 instances) with the same dollar budget and then going across various batch size settings. ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
4
] | [
"5x-kIOonUi2",
"iclr_2022_9vsRT9mc7U",
"yEpCCDhyZc",
"YIcbgNxDvBF",
"YIcbgNxDvBF",
"jsEGfSjIyby",
"IAU3TKOjCJh",
"ObuQlRpzzbB",
"a2vy7NMXKkv",
"jsEGfSjIyby",
"jsEGfSjIyby",
"SB8HALcZva7",
"9yWy_SPf-5Z",
"abu3aVwNIUx",
"abu3aVwNIUx",
"iclr_2022_9vsRT9mc7U",
"iclr_2022_9vsRT9mc7U",
"... |
iclr_2022_7vcKot39bsv | Adaptive Inertia: Disentangling the Effects of Adaptive Learning Rate and Momentum | Adaptive Momentum Estimation (Adam), which combines Adaptive Learning Rate and Momentum, would be the most popular stochastic optimizer for accelerating the training of deep neural networks. However, empirically Adam often generalizes worse than Stochastic Gradient Descent (SGD). We unveil the mystery of this behavior in the diffusion theoretical framework. Specifically, we disentangle the effects of Adaptive Learning Rate and Momentum of the Adam dynamics on saddle-point escaping and flat minima selection. We prove that Adaptive Learning Rate can escape saddle points efficiently, but cannot select flat minima as SGD does. In contrast, Momentum provides a drift effect to help the training process pass through saddle points, and almost does not affect flat minima selection. This partly explains why SGD (with Momentum) generalizes better, while Adam generalizes worse but converges faster. Furthermore, motivated by the analysis, we design a novel adaptive optimization framework named Adaptive Inertia, which uses parameter-wise adaptive inertia to accelerate the training and provably favors flat minima as well as SGD. Our extensive experiments demonstrate that the proposed adaptive inertia method can generalize significantly better than SGD and conventional adaptive gradient methods. | Reject | The paper is aimed at explaining the perceived lack of generalization results for Adam as compared to SGD. To this end, the paper decouples the effect of the adaptive per-parameter learning rate and the momentum aspect of Adam. The paper shows that while adaptive rates help escape saddle points faster, they are worse when considering the flatness of the minima being selected. Further, momentum has no effect on the flatness of minima but again leads to better optimization by providing a drift leading to saddle point evasion. 
They also provide a new algorithm Adai (based on inertia) targeted at better generalization of adaptive methods.
The paper definitely provides an interesting perspective: the approach of decoupling the effects of momentum and the adaptive LR and studying their efficacy in escaping saddle points and in the flatness of minima seems very useful. The primary reason for my recommendation is the presentation of the paper in terms of the rigor of the assumptions used to establish the results. These aspects have been highlighted by the reviewers in detail. I suggest the authors carefully revisit the paper and improve the presentation of the assumptions, adding rigor to the presentation as well as justifications where appropriate, especially in light of the non-standardness of these assumptions in the optimization literature. | train | [
"BqaeF55BjqW",
"JJudAwu9mEd",
"AxCv3W5e-K8",
"os5LWsvsUVk",
"H9_BADa0VzV",
"AaAITgnI3dF",
"tn5IG0s2b0H",
"J0ncbZt-44n",
"d-Q1Z2AFM_J",
"QhNhnA62L8P",
"lwcvMabPi1",
"Ak5Uw9oI5c2",
"36DZfnqI2q7",
"4JRxRkpvNOp"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We appreciate Reviewer 1sdV’s updated review. \n\nWe kindly argue that your main concern may be addressed.\n\nWe would like to present two more responses as follows. \n\nQ1: The assumptions (bounded gradients and bounded variance) for convergence analysis of Adai is strong. Recent papers that focus on convergence... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"JJudAwu9mEd",
"QhNhnA62L8P",
"iclr_2022_7vcKot39bsv",
"Ak5Uw9oI5c2",
"36DZfnqI2q7",
"4JRxRkpvNOp",
"4JRxRkpvNOp",
"4JRxRkpvNOp",
"lwcvMabPi1",
"lwcvMabPi1",
"iclr_2022_7vcKot39bsv",
"iclr_2022_7vcKot39bsv",
"iclr_2022_7vcKot39bsv",
"iclr_2022_7vcKot39bsv"
] |
iclr_2022_R0AzpCND-M_ | Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness | The vulnerability of deep neural networks to adversarial examples has motivated an increasing number of defense strategies for promoting model robustness. However, the progress is usually hampered by insufficient robustness evaluations. As the de facto standard to evaluate adversarial robustness, adversarial attacks typically solve an optimization problem of crafting adversarial examples with an iterative process. In this work, we propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically. Our method learns the optimizer in adversarial attacks parameterized by a recurrent neural network, which is trained over a class of data samples and defenses to produce effective update directions during adversarial example generation. Furthermore, we develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses. Our approach can be flexibly incorporated with various attacks and consistently improves the performance with little extra computational cost. Extensive experiments demonstrate the effectiveness of the learned attacks by MAMA compared to the state-of-the-art attacks on different defenses, leading to a more reliable evaluation of adversarial robustness. | Reject | This paper introduces a "model agnostic" attack to evaluate the robustness
of machine learning models, specifically focusing on adversarial training
style defenses. The attack uses a RNN optimizer and meta-learning to achieve
this goal.
The reviews are mixed. The two more positive reviewers appreciated the
technical ideas of the attack, found the paper well written, and noted
the results were stronger than prior attacks.
However the more negative reviewer raises valid concerns around (a) the
magnitude of the contribution compared to prior attacks, and (b) to what
extent this attack will be useful more generally.
Starting with the first point (also raised by the other reviewers) the
total contribution of this paper is to improve attack success rates by
~0.1% compared to the best prior attacks. This is a fairly limited total
gain, especially because this technique requires much more sophisticated
attack techniques.
More fundamental is the question if this attack is useful to the community.
I tend to agree with reviewer 8AXD here that this contribution is rather
limited for two reasons:
1. As the authors acknowledge, the attack is most effective for adversarial
training techniques, or others that don't make the gradient hard to
optimize. This limits the attack to a smaller subset of defenses.
2. Complexity complicates attack evaluations. It's hard enough to get an
attack working in the first place, and this paper has to use some fairly
sophisticated tools to just get the attack marginally better than
prior attacks that are (much) simpler. It's not clear that we would
expect authors of future defenses to be able to get as good results,
when prior methods are much simpler to apply.
And so on the whole, this paper's main contribution is improving attack
success rate by a small amount, with a fairly complex method, that only
applies to a class of defenses that don't make gradient descent difficult.
So while there is nothing outright wrong with this paper, in its current
form it seems to be adding unnecessary complexity to achieve something
that can already be done with existing techniques. | train | [
"eEwItM1MAgc",
"BGaWaDDkJe7",
"L-FGBRzJrnN",
"NoK3R777S85",
"zsSagR9EFXN",
"XXKn4DmLNfy",
"5EYqw6qFmx3",
"lqzrlYoUe6u",
"0DOXR15aCn",
"ka-4FibZDkr",
"NRrxLoJFNcq",
"fGh0UrXYryE",
"aucQpW2WtDV"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Table R1 further shows the results of different attacks averaged over the 12 defenses in Table 4. It can be clearly seen that the performance gap between MT, ODI, and AutoAttack is less than 0.1\\%. Compare with these attacks, we obtain a 0.1\\% improvement, which is **still meaningful** due to the essential requ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
2
] | [
"BGaWaDDkJe7",
"NoK3R777S85",
"zsSagR9EFXN",
"XXKn4DmLNfy",
"ka-4FibZDkr",
"0DOXR15aCn",
"aucQpW2WtDV",
"fGh0UrXYryE",
"NRrxLoJFNcq",
"NRrxLoJFNcq",
"iclr_2022_R0AzpCND-M_",
"iclr_2022_R0AzpCND-M_",
"iclr_2022_R0AzpCND-M_"
] |
iclr_2022_9gz8qakpyhG | Test-time Batch Statistics Calibration for Covariate Shift | Deep neural networks have a clear degradation when applying to the unseen environment due to the covariate shift. Conventional approaches like domain adaptation requires the pre-collected target data for iterative training, which is impractical in real-world applications. In this paper, we propose to adapt the deep models to the novel environment during inference. An previous solution is test time normalization, which substitutes the source statistics in BN layers with the target batch statistics. However, we show that test time normalization may potentially deteriorate the discriminative structures due to the mismatch between target batch statistics and source parameters. To this end, we present a general formulation $\alpha$-BN to calibrate the batch statistics by mixing up the source and target statistics for both alleviating the domain shift and preserving the discriminative structures. Based on $\alpha$-BN, we further present a novel loss function to form a unified test time adaptation framework Core, which performs the pairwise class correlation online optimization. Extensive experiments show that our approaches achieve the state-of-the-art performance on total twelve datasets from three topics, including model robustness to corruptions, domain generalization on image classification and semantic segmentation. Particularly, our $\alpha$-BN improves 28.4\% to 43.9\% on GTA5 $\rightarrow$ Cityscapes without any training, even outperforms the latest source-free domain adaptation method. | Reject | The paper builds on ideas in test-time adaptation and test-time normalization to improve performance under covariate shift. Concretely, the paper proposes (i) alpha-BN, a method to calibrate batch statistics by mixing source and target statistics and (ii) test-time adaptation using the CORE loss (which was proposed by Jin et al., 2020). 
The authors compare the proposed approach to existing approaches on multiple benchmarks.
The reviewers found the idea interesting and appreciated the additional ablations. The main concerns were around novelty (as the idea is closely related to prior work in test-time adaptation and normalization) and hyperparameter selection (e.g. how to choose alpha in practice). Overall, the reviewers and I felt that the current version falls slightly below the acceptance threshold. I encourage the authors to revise and resubmit to another venue.
Minor comment about Appendix C (this didn't affect the score, just a suggestion for future revisions):
I think it might be interesting to include other alternatives to cross-entropy that downweight easy examples, cf. focal loss https://arxiv.org/abs/1708.02002 and https://arxiv.org/abs/2002.09437. I'm curious to see if CORE and focal loss consistently outperform cross entropy. | val | [
"mS90fi_6jFt",
"xq81ANEPr9T",
"rNzz79DiULV",
"OjIjWsaw94r",
"uzLRpoqR3S9",
"7zKP1VJV55T",
"3iCdHPRiXMu",
"JjTez4JWt_3",
"Pp9r0hebdnY",
"lARav-0LUuS",
"gO7byv36VCk",
"Gr6hWo45DgH",
"03UPUBiC_t7",
"wCNeC_gIvJL",
"KwfWNKx9Slc",
"B2OA8iNZVnm",
"imm6Ud4bQNJ",
"VZx2WbuUO4n"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents a domain adaptation algorithm for the recently introduced test-time adaptation setting by adapting batch-norm parameters. The algorithm has two key components: first, it adapts batch-norm statistics using a linear combination of source and estimated target domain statistics; second, it uses a c... | [
5,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6
] | [
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
"iclr_2022_9gz8qakpyhG",
"gO7byv36VCk",
"iclr_2022_9gz8qakpyhG",
"KwfWNKx9Slc",
"B2OA8iNZVnm",
"3iCdHPRiXMu",
"mS90fi_6jFt",
"lARav-0LUuS",
"iclr_2022_9gz8qakpyhG",
"gO7byv36VCk",
"rNzz79DiULV",
"rNzz79DiULV",
"mS90fi_6jFt",
"03UPUBiC_t7",
"imm6Ud4bQNJ",
"VZx2WbuUO4n",
"iclr_2022_9gz... |
iclr_2022_JV4tkMi4xg | Constrained Discrete Black-Box Optimization using Mixed-Integer Programming | Discrete black-box optimization problems are challenging for model-based optimization (MBO) algorithms, such as Bayesian optimization, due to the size of the search space and the need to satisfy combinatorial constraints. In particular, these methods require repeatedly solving a complex discrete global optimization problem in the inner loop, where popular heuristic inner-loop solvers introduce approximations and are difficult to adapt to combinatorial constraints. In response, we propose NN+MILP, a general discrete MBO framework using piecewise-linear neural networks as surrogate models and mixed-integer linear programming (MILP) to optimize the acquisition function. MILP provides optimality guarantees and a versatile declarative language for domain-specific constraints. We test our approach on a range of unconstrained and constrained problems, including DNA binding and the NAS-Bench-101 neural architecture search benchmark. NN+MILP surpasses or matches the performance of algorithms tailored to the domain at hand, with global optimization of the acquisition problem running in a few minutes using only standard software packages and hardware. | Reject | The paper considers the problem of black-box optimization and proposes a discrete MBO framework using piecewise-linear neural networks as surrogate models and mixed-integer linear programming. The reviewers generally agree that the paper suggests an interesting approach but they also raised several concerns in their initial reviews. The response from the authors addressed a number of these concerns, for instance regarding scalability and expressivity of the model. However, some of these concerns remained after the discussion period, including doubts about the usefulness for typical applications in discrete black-box optimization and some concerns about the balance between exploration and exploitation.
Overall the paper falls below the acceptance bar for now but the direction taken by the authors has some potential. I encourage the authors to address the problems discussed in the reviews before resubmitting. | val | [
"Tng2d6P_20p",
"XMHR_HTKH_F",
"IxRPf1YeYi4",
"0oxGqruk4gv",
"e2YgquJ5UT",
"A29cLYwAWnM",
"eo7PPTuakiD",
"XQwnmT61OnY",
"_1U-aQ3Zdfd",
"myZ3fVwpF7C",
"BZxd5juaNMx",
"Dw1MAwPskH",
"sABVgM7WQY_",
"IrBi9vtEO0r",
"sxaumdvH1q2",
"ZY5vv0ohXTi",
"go2CFvHRCzC"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The authors present NN+MILP, a framework for the optimization of an expensive to evaluate blackbox fuction with a discrete combinatorially constrained domain. The acquisition problem of finding the surrogate minimum is solved to global optimality by solving a MILP formulation of the acquisition problem. The MILP f... | [
6,
-1,
-1,
-1,
3,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
4,
-1,
-1,
-1,
4,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2022_JV4tkMi4xg",
"IxRPf1YeYi4",
"0oxGqruk4gv",
"sABVgM7WQY_",
"iclr_2022_JV4tkMi4xg",
"eo7PPTuakiD",
"IrBi9vtEO0r",
"sxaumdvH1q2",
"iclr_2022_JV4tkMi4xg",
"BZxd5juaNMx",
"Dw1MAwPskH",
"Tng2d6P_20p",
"e2YgquJ5UT",
"go2CFvHRCzC",
"_1U-aQ3Zdfd",
"iclr_2022_JV4tkMi4xg",
"iclr_2022... |
iclr_2022_nT0GS37Clr | FSL: Federated Supermask Learning | Federated learning (FL) allows multiple clients with (private) data to collaboratively train a common machine learning model without sharing their private training data. In-the-wild deployment of FL faces two major hurdles: robustness to poisoning attacks and communication efficiency. To address these concurrently, we propose Federated Supermask Learning (FSL). FSL server trains a global subnetwork within a randomly initialized neural network by aggregating local subnetworks of all collaborating clients. FSL clients share local subnetworks in the form of rankings of network edges; more useful edges have higher ranks. By sharing integer rankings, instead of float weights, FSL restricts the space available to craft effective poisoning updates, and by sharing subnetworks, FSL reduces the communication cost of training. We show theoretically and empirically that FSL is robust by design and also significantly communication efficient; all this without compromising clients' privacy. Our experiments demonstrate the superiority of FSL in real-world FL settings; in particular, (1) FSL achieves similar performances as state-of-the-art FedAvg with significantly lower communication costs: for CIFAR10, FSL achieves same performance as Federated Averaging while reducing communication cost by $\sim35\%$. (2) FSL is substantially more robust to poisoning attacks than state-of-the-art robust aggregation algorithms. | Reject | The paper brings the "supermask" idea used in neural architecture search to the application of federated learning, here represented by a single mask of a larger network. The method can be seen as pruning-before-training, or more precisely pruning-instead-of-training. It is a simplified version of the related works LotteryFL, PruneFL or FedMask, with the difference that here no personalization and no training of the weights is performed, only learning of a global mask. 
Related work discussion should be improved. While the communication efficiency impact of the method seems minor but positive, the interesting point is that authors here argue that masking will improve robustness to adversarial participants during training.
Unfortunately no theoretical evidence is provided for the success of training, in the sense of Byzantine robustness. It is known that robust training can be attacked with small perturbations correlated over time (e.g. 'little is enough') and likewise across layers, two important aspects which are ignored here, as voting is only analysed statically at a single time point. As pointed out by reviewer JJjz, the considered attack (inverting the ranking) is far from being formally proven to be the strongest one, and we would have wished for a more precise discussion of these issues, as the target of the paper seems to be mainly robustness.
Concerns on the paper also remained on the level of novelty, as it only uses existing building blocks which are more or less directly applicable from the centralized setting, and on the limited contributions towards formal robustness, and on the limited discussion of related work mentioned by several reviewers, only some of which we were able to address in the discussion phase.
We hope the detailed feedback helps to strengthen the paper in the future. | test | [
"jUnwfyZT9df",
"ceiunmHcwl",
"GoVceP6q_o",
"YFhwK9VPLUE",
"_IpNf7_s5F",
"2iOirMvpKDZ",
"ZnbZTXsWVc",
"QhDzAGpBsJC",
"hwDiDwFB84F",
"Gr3wYQHrn0S",
"2x0RzKswA4U",
"776iejt7Nbt",
"cUSanxERx1Y",
"UHDdYOHwkwP",
"FEm6fpOJrWA",
"-ZYks1y-Rw",
"VX5Ww2YxpHS",
"C2V1_6LeoLM",
"P0vAk2LpmKl",
... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
... | [
"The paper proposes to collaboratively learn a supermask within a randomly initialized neural networks, instead of learning the model parameters.\nThis idea is interesting and the authors verify its effectiveness on MNIST, CIFAR and FEMNIST, with the benefits of reduced communication cost and robustness to maliciou... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2022_nT0GS37Clr",
"_IpNf7_s5F",
"YFhwK9VPLUE",
"ceiunmHcwl",
"QhDzAGpBsJC",
"2x0RzKswA4U",
"QhDzAGpBsJC",
"UHDdYOHwkwP",
"776iejt7Nbt",
"iclr_2022_nT0GS37Clr",
"-ZYks1y-Rw",
"P0vAk2LpmKl",
"iclr_2022_nT0GS37Clr",
"jUnwfyZT9df",
"OH6fXGwY43d",
"VX5Ww2YxpHS",
"C2V1_6LeoLM",
"Gr... |
iclr_2022_4Stc6i97dVN | Sharper Utility Bounds for Differentially Private Models | In this paper, by introducing Generalized Bernstein condition, we propose the first $\mathcal{O}\big(\frac{\sqrt{p}}{n\epsilon}\big)$ high probability excess population risk bound for differentially private algorithms under the assumptions $G$-Lipschitz, $L$-smooth, and Polyak-{\L}ojasiewicz condition, based on gradient perturbation method. If we replace the properties $G$-Lipschitz and $L$-smooth by $\alpha$-H{\"o}lder smoothness (which can be used in non-smooth setting), the high probability bound comes to $\mathcal{O}\big(n^{-\frac{2\alpha}{1+2\alpha}}\big)$ w.r.t $n$, which cannot achieve $\mathcal{O}\left(1/n\right)$ when $\alpha\in(0,1]$. To solve this problem, we propose a variant of gradient perturbation method, \textbf{max$\{1,g\}$-Normalized Gradient Perturbation} (m-NGP). We further show that by normalization, the high probability excess population risk bound under assumptions $\alpha$-H{\"o}lder smooth and Polyak-{\L}ojasiewicz condition can achieve $\mathcal{O}\big(\frac{\sqrt{p}}{n\epsilon}\big)$, which is the first $\mathcal{O}\left(1/n\right)$ high probability utility bound w.r.t $n$ for differentially private algorithms under non-smooth conditions. Moreover, we evaluate the performance of the new proposed algorithm m-NGP, the experimental results show that m-NGP improves the performance (measured by accuracy) of the DP model over real datasets. It demonstrates that m-NGP improves the excess population risk bound and the accuracy of the DP model on real datasets simultaneously. | Reject | The paper gives high probability bounds on excess risk for differentially private learning algorithms, in the setting where the loss is assumed to be Lipschitz, smooth, and assumed to satisfy the Polyak-Łojasiewicz (PL) condition. 
The key idea in the paper is to leverage the curvature in the loss (PL condition) and the generalized Bernstein condition.
Authors show that they get sharper bounds of the order \sqrt{p}/(n\epsilon) when the loss is assumed to satisfy the PL condition besides being convex Lipschitz/smooth. Without using some curvature information about the loss function, the best upper bounds we can get are in the order of \sqrt{p}/(n\epsilon) + 1/\sqrt{n} — and this is tight at least in terms of the dependence on n given the nearly matching lower bounds — in fact, the dependence on n is tight as it matches the non-private settings.
So, I find it a bit misleading when authors say that they improve over the existing results. That statement is not true in its generality — it is true that we can leverage the PL condition to give faster rates but that is not the setting of prior work. Again, the bounds that authors compare against are for smooth/Lipschitz convex loss functions and without any assumption on the curvature of the loss.
If we do look at the literature for when and/or how curvature can help, we can compare against the existing bounds for strongly convex losses. The best-known result in the setting that is most closely related is that of Feldman et al. (STOC 2020): https://dl.acm.org/doi/pdf/10.1145/3357713.3384335. As we can check from Theorem 4.9 in that paper, the bounds we get are in the order of 1/n + d/n^2 which is actually better — not surprising since the PL condition is a weaker condition. There is merit to the results in this paper but the current narrative is quite misleading and a more careful comparison with the existing literature is needed. The bounds are hard to parse — for example, what is the dependence on the strong convexity parameter (\mu)? It would also help to instantiate specific loss functions so that we can fix some of the parameters in the bound to have a clear comparison with the existing bounds. | train | [
"nFVCGaSVww5",
"3lAJoxdUXe7",
"AImnQSV_a7S",
"rYBIRcTDXnL",
"2ylrimSdOJ",
"MTOaICWZlvU",
"XvJhj2QM31h",
"gppn2mLHX4l",
"W4F3x-QUP2",
"TlkDys83f-U",
"O641WpLDvk",
"w4GAYkrvaWp",
"cLHF7Y-XigT",
"BxneQaQhqDr",
"CO2cW-S77TS",
"q2WfmiN1MfS"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" To Reviewer AguJ:\n\nThank you for the feedback and we will clarify your concerns in the following.\n\nFirstly, one of the key parts of our paper is to expend the smoothness assumption to H{\\\"o}lder smoothness.\nAnd based on theoretical analysis, we further design a new algorithm, and the new algorithm achieves... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4,
3
] | [
"3lAJoxdUXe7",
"W4F3x-QUP2",
"w4GAYkrvaWp",
"MTOaICWZlvU",
"XvJhj2QM31h",
"TlkDys83f-U",
"O641WpLDvk",
"q2WfmiN1MfS",
"CO2cW-S77TS",
"cLHF7Y-XigT",
"BxneQaQhqDr",
"iclr_2022_4Stc6i97dVN",
"iclr_2022_4Stc6i97dVN",
"iclr_2022_4Stc6i97dVN",
"iclr_2022_4Stc6i97dVN",
"iclr_2022_4Stc6i97dVN"... |
iclr_2022_n54Drs00M1 | Learning affective meanings that derives the social behavior using Bidirectional Encoder Representations from Transformers | Cultural sentiments of a society characterize social behaviors, but modeling sentiments to manifest every potential interaction remains an immense challenge. Affect Control Theory (ACT) offers a solution to this problem. ACT is a generative theory of culture and behavior based on a three-dimensional sentiment lexicon. Traditionally, the sentiments are quantified using survey data which is fed into a regression model to explain social behavior. The lexicons used in the survey are limited due to prohibitive cost. This paper uses a fine-tuned Bidirectional Encoder Representations from Transformers (BERT) model for developing a replacement for these surveys. This model achieves state-of-the-art accuracy in estimating affective meanings, expanding the affective lexicon, and allowing more behaviors to be explained. | Reject | This work presents a new sentiment representation method with the use of affect control theory and BERT. Reviewers pointed out several major concerns towards the insufficient experiments and results, as well as the lack of ablation studies and related work discussion. I would like to encourage the authors to take into account the comments from reviewers to further improve their work for a stronger version for future submissions. | train | [
"8ELADebcsTp",
"DELU9ZqAX3-",
"u_IJQYNDgpT",
"3kvmYcLqTY4",
"-iw3clrcSmW",
"UMN9deTxAHC",
"Fe7UlA3xpkU",
"__W_SOq06wV",
"OrJJaieyWE",
"HSSamErTJg",
"7gHggHJnV8k"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The latest draft does have some improvements in various aspects compared to the initial version, but I still think that the method and experiment parts of the paper need further revision. In addition, there are still some problems with the layout and structure of some figures and tables in this paper, such as Fig... | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
1,
3,
3
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
2
] | [
"__W_SOq06wV",
"iclr_2022_n54Drs00M1",
"OrJJaieyWE",
"OrJJaieyWE",
"DELU9ZqAX3-",
"DELU9ZqAX3-",
"7gHggHJnV8k",
"HSSamErTJg",
"iclr_2022_n54Drs00M1",
"iclr_2022_n54Drs00M1",
"iclr_2022_n54Drs00M1"
] |
iclr_2022_827jG3ahxL | REFACTOR: Learning to Extract Theorems from Proofs | Human mathematicians are often good at recognizing modular and reusable theorems that make complex mathematical results within reach. In this paper, we propose a novel method called theoREm-from-prooF extrACTOR (REFACTOR) for training neural networks to mimic this ability in formal mathematical theorem proving. We show on a set of unseen proofs, REFACTOR is able to extract $19.6\%$ of the theorems that humans would use to write the proofs. When applying the model to the existing Metamath library, REFACTOR extracted $16$ new theorems. With newly extracted theorems, we show that the existing proofs in the MetaMath database can be refactored. The new theorems are used very frequently after refactoring, with an average usage of $733.5$ times, and help to shorten the proof lengths. Lastly, we demonstrate that the prover trained on the new-theorem refactored dataset proves relatively $14$-$30\%$ more test theorems by frequently leveraging a diverse set of newly extracted theorems. | Reject | The paper proposes a deep learning pipeline to extract lemmas from proofs and how to (re)use them to compress a proof library. The topic is highly relevant and the motivation behind the work is clear, but the current state of the manuscript is not yet ready for publication.
In fact, while all reviewers somehow recognized the potential impact of this work, they also highlighted some critical aspects that should be addressed before full acceptance. Namely:
- the role of GNNs in achieving the reported performance should be discussed in more detail (what happens if they are replaced in the pipeline? an ablation study would help)
- the broader literature of automated theorem proving should be addressed more precisely in the related works
- comparisons against other ML methods for theorem proving should be extended (the comparison with MetaGen added in the discussion is a good starting point that can turn into a full experiment) | train | [
"U3HS0K0JFjT",
"ESdLH4P92RD",
"45u0bcjIJi",
"PZytVE0Big",
"J-5kPAIdmIZ",
"-yvZwfY3tys",
"a6nFFhlECqW",
"nFtZySqlBqQ",
"1eXQkRzxijT",
"yqt0QRAOq2k",
"lABSmGotucB",
"VD98sWMvIWr",
"RKekPCtwQpg",
"OUHY7LTmDpX",
"7TXo-7h3prF",
"wTexrsUAKTR"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a deep learning-based approach for extracting theorems from existing human proofs. The problem is formulated by classifying the nodes of the proof trees in/out of the extracted theorem. Graph neural networks are applied to this problem of node classification. This approach is applied to the Met... | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
3,
5
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
3,
4
] | [
"iclr_2022_827jG3ahxL",
"iclr_2022_827jG3ahxL",
"nFtZySqlBqQ",
"1eXQkRzxijT",
"yqt0QRAOq2k",
"lABSmGotucB",
"VD98sWMvIWr",
"wTexrsUAKTR",
"7TXo-7h3prF",
"U3HS0K0JFjT",
"OUHY7LTmDpX",
"RKekPCtwQpg",
"iclr_2022_827jG3ahxL",
"iclr_2022_827jG3ahxL",
"iclr_2022_827jG3ahxL",
"iclr_2022_827jG... |
iclr_2022_gi4956J8g5 | Second-Order Unsupervised Feature Selection via Knowledge Contrastive Distillation | Unsupervised feature selection aims to select a subset from the original features that are most useful for the downstream tasks without external guidance information. While most unsupervised feature selection methods focus on ranking features based on the intrinsic properties of data, they do not pay much attention to the relationships between features, which often leads to redundancy among the selected features. In this paper, we propose a two-stage Second-Order unsupervised Feature selection via knowledge contrastive disTillation (SOFT) model that incorporates the second-order covariance matrix with the first-order data matrix for unsupervised feature selection. In the first stage, we learn a sparse attention matrix that can represent second-order relations between features. In the second stage, we build a relational graph based on the learned attention matrix and perform graph segmentation for feature selection. Experimental results on 12 public datasets show that SOFT outperforms classical and recent state-of-the-art methods, which demonstrates the effectiveness of our proposed method. | Reject | This paper proposes a new two stage second-order unsupervised feature selection method via knowledge contrastive distillation. In the first stage, a sparse attention matrix that represents second order statistics is learned. In the second stage, a relational graph based on the learned attention matrix is constructed to perform graph segmentation for feature selection.
This proposed method contains some new and interesting ideas and is novel in the unsupervised feature selection setting, though some components such as the second order affinity matrix are not totally new. The proposed method is technically sound. The authors compared their method with 10 methods including several recent deep methods on 12 datasets and demonstrated consistent improvements.
However, there are some concerns from the reviewers, even after the discussion phase. 1) The computational efficiency of the proposed method seems to be low. Since one goal of feature selection is to speed up downstream tasks, the efficiency of feature selection itself should also be considered. I suggest the authors analyze the computational bottleneck of the proposed method and improve the efficiency. 2) More ablation studies can be added to illustrate how the proposed method removes the redundancy issues of the selected features. 3) Metrics such as supervised classification accuracy could potentially be used for evaluation. Though supervised classification is impossible in the unsupervised learning setting, running experiments on datasets that have labels while pretending to have no labels is one way to evaluate the method.
Overall, the paper provides some new and interesting ideas. However, given the above concerns, the novelty and significance of the paper are diminished. Although we think the paper is not ready for ICLR in this round, we believe that the paper would be a strong one if the concerns can be well addressed. | train | [
"oX3WzWUz-aE",
"Y8jctUOMXcA",
"g23WTmOUjyr",
"r-A78eJxH5n",
"5vIWlUuPj6M",
"zrF-RLYmFN",
"a6nEWIa81CI",
"lHUV8pU7ie1",
"N5YHlK9FsKi",
"5Qev-ev2YqA",
"oU_WRa-fOD",
"uqhRyh60wtD",
"Zu2GFtKTtWk",
"Ud1gtJgG7hi",
"GnQf3zLUGrS",
"PFzY-S0rvXw",
"PX_Tnf4FspD",
"PEQmjiB7FXc",
"6iAc39YEB2f... | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_r... | [
" Thanks for your comments, and we appreciate you recognizing the significance of our research problem and the soundness of the experiments. Here we provide more experimental results to address your comments in the Cons section.\n\n---\n\n**1. Ablation study without knowledge contrastive distillation**\n\nWe run ou... | [
-1,
8,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
3
] | [
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"Y8jctUOMXcA",
"iclr_2022_gi4956J8g5",
"iclr_2022_gi4956J8g5",
"zrF-RLYmFN",
"GnQf3zLUGrS",
"GnQf3zLUGrS",
"lHUV8pU7ie1",
"PFzY-S0rvXw",
"iclr_2022_gi4956J8g5",
"oU_WRa-fOD",
"g23WTmOUjyr",
"Zu2GFtKTtWk",
"6iAc39YEB2f",
"GnQf3zLUGrS",
"8JPCf2sYXv",
"uqhRyh60wtD",
"PEQmjiB7FXc",
"ic... |
iclr_2022_bglU8l_Pq8Q | In defense of dual-encoders for neural ranking | Transformer-based models such as BERT have proven successful in information retrieval problem, which seek to identify relevant documents for a given query. There are two broad flavours of such models: cross-attention (CA) models, which learn a joint embedding for the query and document, and dual-encoder (DE) models, which learn separate embeddings for the query and document. Empirically, CA models are often found to be more accurate, which has motivated a series of works seeking to bridge this gap. However, a more fundamental question remains less explored: does this performance gap reflect an inherent limitation in the capacity of DE models, or a limitation in the training of such models? And does such an understanding suggest a principled means of improving DE models? In this paper, we study these questions, with three contributions. First, we establish theoretically that with a sufficiently large embedding dimension, DE models have the capacity to model a broad class of score distributions. Second, we show empirically that on real-world problems, DE models may overfit to spurious correlations in the training set, and thus under-perform on test samples. To mitigate this behaviour, we propose a suitable distillation strategy, and confirm its practical efficacy on the MSMARCO-Passage and Natural Questions benchmarks. | Reject | Strengths:
* Well-written paper
* Theoretical analysis demonstrates that dual-encoder models have similar capacity to CA models
* New distillation algorithm for learning DE students from CA teachers
Weaknesses:
* No reviewer seems particularly excited about this work
* Theoretical analysis doesn’t provide actionable insight -- it does not directly motivate the suggested distillation methods
* Empirical results are lacking -- reviewers asked for qualitative examples of improvements from their distillation method | train | [
"8EcUGjDQsju",
"vb7MCnHOW0j",
"8UmZ5cJ6zmc",
"Ij-8PTJz3u9",
"CD0GxDM4yR0",
"zd-PySfUAw9",
"B-9JtLUxSqr",
"iQ9ZNQF0zGa",
"klPNCxlXBlo",
"7t2Hu5RM8Ep"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the reply. That answers my questions.",
" Thanks for your reply and efforts! Your reply clarified my confusions on the capacity of DE models. ",
" Thanks for the detailed comments.\n\n> From a practical standpoint, the novel contribution of this work is pretty weak.\n\nWe wish to emphasise a few po... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
3,
5
] | [
"Ij-8PTJz3u9",
"CD0GxDM4yR0",
"7t2Hu5RM8Ep",
"B-9JtLUxSqr",
"klPNCxlXBlo",
"iQ9ZNQF0zGa",
"iclr_2022_bglU8l_Pq8Q",
"iclr_2022_bglU8l_Pq8Q",
"iclr_2022_bglU8l_Pq8Q",
"iclr_2022_bglU8l_Pq8Q"
] |