paper_id stringlengths 19 21 | paper_title stringlengths 8 170 | paper_abstract stringlengths 8 5.01k | paper_acceptance stringclasses 18 values | meta_review stringlengths 29 10k | label stringclasses 3 values | review_ids list | review_writers list | review_contents list | review_ratings list | review_confidences list | review_reply_tos list |
|---|---|---|---|---|---|---|---|---|---|---|---|
iclr_2022_KL5jILuehZ | End-to-End Balancing for Causal Continuous Treatment-Effect Estimation | We study the problem of observational causal inference with continuous treatment. We focus on the challenge of estimating the causal response curve for infrequently-observed treatment values.
We design a new algorithm based on the framework of entropy balancing which learns weights that directly maximize causal inference accuracy using end-to-end optimization. Our weights can be customized for different datasets and causal inference algorithms. We propose a new theory for consistency of entropy balancing for continuous treatments. Using synthetic and real-world data, we show that our proposed algorithm outperforms the entropy balancing in terms of causal inference accuracy. | Reject | The paper provides a new way of weighting data to build weighted estimators of causal effects (which themselves can be used in other contexts, e.g. doubly-robust estimators). It's novel in the sense that it optimizes the choice of weighting based on information about the response function space. The approach is simple to implement, and opens up other possibilities for different classes of estimators.
I liked it. I think the paper is nearly there in terms of a well-rounded contribution. But I have to say that I did share the concern about the choice of random response functions. It's not only a matter of function space (everybody wants the most flexible one), but also of the random measure that goes in it - so the more flexible the random space, the least understood (to me at least) is the influence of the random measure. Surely that are choices of function space distributions that can do worse than uniform weights for some classes of problems? It's not that it's a implausible starting point (Bayesians do it all the time in terms of prior distribution, on top of a full-blown likelihood function that is more often than not just a big nuisance parameter), but I think the paper covers this aspect just too lightly. I think it's of benefit to the authors to release a published version of this paper once they have some more formal guidance or a more complex experimental setup providing a more thorough insight of it. I do think the contribution is really promising, but it feels unfinished, and I'd be curious to see where it could go following this direction. | train | [
"lywgPyVpb7X",
"oplvfD-Yhxb",
"SSbXAjtBL09",
"dOIFUuYWTSX"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose an alternative to balancing weights that creates a series of pseudo-outcomes via a procedure similar to the wild bootstrap, estimating balancing weights, and then updating the causal parameter. The authors provide some theoretical evidence and a set of synthetic experiments to assess their clai... | [
3,
8,
6,
8
] | [
4,
3,
4,
4
] | [
"iclr_2022_KL5jILuehZ",
"iclr_2022_KL5jILuehZ",
"iclr_2022_KL5jILuehZ",
"iclr_2022_KL5jILuehZ"
] |
iclr_2022_p7LSrQ3AADp | Beyond Faithfulness: A Framework to Characterize and Compare Saliency Methods | Saliency methods calculate how important each input feature is to a machine learning model’s prediction, and are commonly used to understand model reasoning. “Faithfulness,” or how fully and accurately the saliency output reflects the underlying model, is an oft-cited desideratum for these methods. However, explanation methods must necessarily sacrifice certain information in service of user-oriented goals such as simplicity. To that end, and akin to performance metrics, we frame saliency methods as abstractions: individual tools that provide insight into specific aspects of model behavior and entail tradeoffs. Using this framing, we describe a framework of nine dimensions to characterize and compare the properties of saliency methods. We group these dimensions into three categories that map to different phases of the interpretation process: methodology, or how the saliency is calculated; sensitivity, or relationships between the saliency result and the underlying model or input; and, perceptibility, or how a user interprets the result. As we show, these dimensions give us a granular vocabulary for describing and comparing saliency methods — for instance, allowing us to develop “saliency cards” as a form of documentation, or helping downstream users understand tradeoffs and choose a method for a particular use case. Moreover, by situating existing saliency methods within this framework, we identify opportunities for future work, including filling gaps in the landscape and developing new evaluation metrics.
| Reject | The reviews end up split. The most positive reviewer notes that the paper may be useful broadly to researchers and practitioners. That is the main promise of the work. However as another reviewer points out, the authors fall short of convincingly explaining how the said practitioners and researchers would benefit from the work. And, also as noted in a review, the paper does not quite go deep enough into the discussion of what saliency means for different methods analyzed.
Contrary to the positive reviewers comment, I do not think that a paper must provide "novel work to built upon", but for a paper that doesn't, there is a significant threshold on how convincing it must be regarding the potential impact of the work. I think the paper in its current state, even after the rebuttal/revision, does not meet this threshold. | val | [
"Rcxvxlbwpb",
"jGEAOWGPy6B",
"xZNmMr5zYsS",
"xcfiZxP-pFv",
"Am-VU0KZJAh",
"HbPrKmST1TT"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your thoughtful and detailed review; we appreciate you noting the usefulness of the framework for both researchers and end users. We have incorporated several changes to the paper based on your suggestions, and provide some additional clarifications here as well: \n\n>“The methodology is not entire... | [
-1,
-1,
-1,
8,
3,
5
] | [
-1,
-1,
-1,
5,
4,
5
] | [
"HbPrKmST1TT",
"Am-VU0KZJAh",
"xcfiZxP-pFv",
"iclr_2022_p7LSrQ3AADp",
"iclr_2022_p7LSrQ3AADp",
"iclr_2022_p7LSrQ3AADp"
] |
iclr_2022_T6lAFguUbw | Modeling Bounded Rationality in Multi-Agent Simulations Using Rationally Inattentive Reinforcement Learning | Multi-agent reinforcement learning (MARL) is a powerful framework for studying emergent behavior in complex agent-based simulations. However, RL agents are often assumed to be rational and behave optimally, which does not fully reflect human behavior. Here, we study more human-like RL agents which incorporate an established model of human-irrationality, the Rational Inattention (RI) model. RI models the cost of cognitive information processing using mutual information. Our RIRL framework generalizes and is more flexible than prior work by allowing for multi-timestep dynamics and information channels with heterogeneous processing costs. We evaluate RIRL in Principal-Agent (specifically manager-employee relations) problem settings of varying complexity where RI models information asymmetry (e.g. it may be costly for the manager to observe certain information about the employees). We show that using RIRL yields a rich spectrum of new equilibrium behaviors that differ from those found under rational assumptions. For instance, some forms of a Principal's inattention can increase Agent welfare due to increased compensation, while other forms of inattention can decrease Agent welfare by encouraging extra work effort. Additionally, new strategies emerge compared to those under rationality assumptions, e.g., Agents are incentivized to misrepresent their ability. These results suggest RIRL is a powerful tool towards building AI agents that can mimic real human behavior. | Reject | The paper proposes a multi-agent RL framework that make decisions in a more human-like manner by incorporating rational inattention. The approach is evaluated on two game theoretic problems. The reviewers agree that the topic of the paper is interesting. However, there are concerns about the significance of the proposed approach. As the method incorporates human-inspired limitations, it's aim is not to outperform SOTA RL methods on regularly considered domains; at the same time, as the approach is only evaluated on two simulation-based tasks, it is unclear how it would perform in more realistic scenarios that may benefit from human-like decision making. For these reasons, I recommend rejection. | train | [
"QhLtEHk180c",
"HWLNtN1M9AX",
"8Vg5fpu8zye",
"_XOwtdSlVGg",
"uZ8e0CXEWPp",
"9hdXL-aqkdC",
"i5zLX3-8j-O",
"bjB2ZBu8Hzj",
"mLF9uuX8mEM",
"OJHN6QwlwLj"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes Rational Inattention Reinforcement Learning (RIRL), a multi-agent reinforcement learning framework that incorporates the behavioral model of rational inattention. Building on the classical formalization of rational inattention, the objective penalizes for the mutual information between the actio... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
5
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
2
] | [
"iclr_2022_T6lAFguUbw",
"bjB2ZBu8Hzj",
"OJHN6QwlwLj",
"mLF9uuX8mEM",
"bjB2ZBu8Hzj",
"QhLtEHk180c",
"iclr_2022_T6lAFguUbw",
"iclr_2022_T6lAFguUbw",
"iclr_2022_T6lAFguUbw",
"iclr_2022_T6lAFguUbw"
] |
iclr_2022_iim-R8xu0TG | FitVid: High-Capacity Pixel-Level Video Prediction | An agent that is capable of predicting what happens next can perform a variety of tasks through planning with no additional training. Furthermore, such an agent can internally represent the complex dynamics of the real-world and therefore can acquire a representation useful for a variety of visual perception tasks. This makes predicting the future frames of a video, conditioned on the observed past and potentially future actions, an interesting task which remains exceptionally challenging despite many recent advances. Existing video prediction models have shown promising results on simple narrow benchmarks but they generate low quality predictions on real-life datasets with more complicated dynamics or broader domain. There is a growing body of evidence that underfitting on the training data is one of the primary causes for the low quality predictions. In this paper, we argue that the inefficient use of parameters in the current video models is the main reason for underfitting. Therefore, we introduce a new architecture, named FitVid, which is capable of fitting the common benchmarks so well that it begins to suffer from overfitting -- while having similar parameter count as the current state-of-the-art models. We analyze the consequences of overfitting, illustrating how it can produce unexpected outcomes such as generating high quality output by repeating the training data, and how it can be mitigated using existing image augmentation techniques. As a result, FitVid outperforms the current state-of-the-art models across four different video prediction benchmarks on four different metrics. | Reject | This paper proposes a variational video prediction model FitVid and attains a better fit to video prediction datasets. The draft was reviewed by four experts in the field and received mixed scores (1 borderline accept, 3 reject). The reviewers raised their concerns on lack of novelty, unconvincing experiment, and the presentation of this paper. For a video prediction model, fitting a dataset is quite important. But AC agrees with the reviewer jPAY. It will be more exciting to build a causal model of the world and enable it to perform future and counterfactual prediction (e.g, CLEVRER). The authors are encouraged to consider the reviewers' comments when revising the paper for submission elsewhere. | train | [
"Tft8NZs3vuw",
"hXU1ZJ8j9fO",
"DqWtPDoUzHY",
"l5HrVw1_jMU",
"u99jPNDoL0b",
"k4kgr7Lp4JA",
"hPTICbEUTCt",
"7cRfNcuvOp-",
"SiB_8fhySsn",
"WEQXp8Kw5Ta",
"TkNMIdG-_g"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper studies conditional video prediction. In particular, it focuses on the problem of current models underfitting and not scaling to datasets. The paper proposes an architecture that is claimed capable of using parameters more efficiently in order to overfit. Then, data augmentation is introduced to improve ... | [
5,
-1,
5,
6,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
4,
-1,
4,
2,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2022_iim-R8xu0TG",
"DqWtPDoUzHY",
"iclr_2022_iim-R8xu0TG",
"iclr_2022_iim-R8xu0TG",
"l5HrVw1_jMU",
"DqWtPDoUzHY",
"Tft8NZs3vuw",
"TkNMIdG-_g",
"WEQXp8Kw5Ta",
"iclr_2022_iim-R8xu0TG",
"iclr_2022_iim-R8xu0TG"
] |
iclr_2022_qSTEPv2uLR8 | Physics Informed Convex Artificial Neural Networks (PICANNs) for Optimal Transport based Density Estimation | Optimal Mass Transport (OMT) is a well studied problem with a variety of applications in a diverse set of fields ranging from Physics to Computer Vision and in particular Statistics and Data Science. Since the original formulation of Monge in 1781 significant theoretical progress been made on the existence, uniqueness and properties of the optimal transport maps. The actual numerical computation of the transport maps, particularly in high dimensions, remains a challenging problem. In the past decade several neural network based algorithms have been proposed to tackle this task. In this paper, building on recent developments of input convex neural networks and physics informed neural networks for solving PDE's, we propose a new Deep Learning approach to solve the continuous OMT problem. Our framework is based on Brenier's theorem, which reduces the continuous OMT problem to that of solving a non-linear PDE of Monge-Ampere type whose solution is a convex function. To demonstrate the accuracy of our framework we compare our method to several other deep learning based algorithms. We then focus on applications to the ubiquitous density estimation and generative modeling tasks in statistics and machine learning. Finally as an example we present how our framework can be incorporated with an autoencoder to estimate an effective probabilistic generative model. | Reject | This work proposes to define densities via the pushforward of a base density through the gradient field of a convex potential as studied in OT theory and, in particular, inspired by Brenier's theorem.
More concretely, it proposes to use ICNNs to parametrize the convex potentials and considers two mechanisms to match a target density: 1) with a known (normalized) target approximately solve the Monge-Ampere equation via optimization; 2) with only samples available, they propose to use the maximum-likelihood approach.
While the paper is overall well-written, the idea is very close to existing work that was not mentioned or discussed in the paper. The paper would benefit from a substantial revision to incorporate the missing references and emphasize the relative novelty. | train | [
"dnDnNin4ou",
"_Vv0bx4hiAC",
"6PCBxHdjiYu",
"EYdv8KHWFsL",
"R5UOzDIUBur",
"up0DOE3owiZ",
"qe_M9y2LoZ",
"kSfJDGYVt3M",
"hb5ZY3quS0K",
"Vx2Cg5RxERY",
"h8XvvuOXj8A",
"auFs4yB9Qq6",
"fvBfUC5EI9V",
"spqetNouau-",
"Eyi67KNqhv9"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the referee for his responsiveness and we are glad that our second answer was able to clarify the situation. It seems, however, that there is still some confusion regarding the invertibility of our estimated maps. Thus we want to emphasize that the invertibility of the transport maps is entirely unrelate... | [
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
5
] | [
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
4
] | [
"_Vv0bx4hiAC",
"R5UOzDIUBur",
"iclr_2022_qSTEPv2uLR8",
"hb5ZY3quS0K",
"up0DOE3owiZ",
"qe_M9y2LoZ",
"kSfJDGYVt3M",
"Eyi67KNqhv9",
"6PCBxHdjiYu",
"spqetNouau-",
"fvBfUC5EI9V",
"iclr_2022_qSTEPv2uLR8",
"iclr_2022_qSTEPv2uLR8",
"iclr_2022_qSTEPv2uLR8",
"iclr_2022_qSTEPv2uLR8"
] |
iclr_2022_yRYtnKAZqxU | Interrogating Paradigms in Self-supervised Graph Representation Learning | Graph contrastive learning (GCL) is a newly popular paradigm for self-supervised graph representation learning and offers an alternative to reconstruction-based methods.However, it is not well understood what conditions a task must satisfy such that a given paradigm is better suited. In this paper, we investigate the role of dataset properties and augmentation strategies on the success of GCL and reconstruction-based approaches. Using the recent population augmentation graph-based analysis of self-supervised learning, we show theoretically that the success of GCL with popular augmentations is bounded by the graph edit distance between different classes. Next, we introduce a synthetic data generation process that systematically controls the amount of style vs. content in each sample- i.e. information that is irrelevant vs. relevant to the downstream task- to elucidate how graph representation learning methods perform under different dataset conditions. We empirically show that reconstruction approaches perform better when the style vs. content ratio is low and GCL with popular augmentations benefits from moderate style. Our results provide a general, systematic framework for analyzing different graph representation learning methods and demonstrate when a given approach is expected to perform well. | Reject | In this paper, data augmentation for graph contrastive learning (GraphCL) is studied. Most reviewers agree that the problems addressed in this paper are interesting and important for unsupervised graph representation learning literature. However, many reviewers were not fully satisfied with the novelty and the claim of the main contribution of this paper, a theoretical analysis of the conditions under which data augmentation works in GraphCL, due to the lack of clear explanation and evidence. Unfortunately, no reviewer has suggested acceptance of this paper at this time. | train | [
"R9RUUGCYQp5",
"Fs7op8mx3R-",
"veh4FGIaSCK",
"BDsDs4RKEnM",
"TkmwKTEZQ7m",
"-D3d8EDsuXP",
"ODKhaVIrs7J",
"iOyq6ACExSU",
"GwtmtbEEZyL",
"I-z2t5V9OYv",
"qdva_6hVRy4",
"BKv0HsZ_vK3",
"CixNr5JKVl2",
"j73TxHqnBo",
"-ogxX2Yrti6",
"QmnQt_61F85"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" i appreciate it greatly that the authors give the detailed responses. the motivation and idea is in general interesting while i will still keep my score since it still needs a major revision before satisfactory.\n",
"This paper attempts to study the connection between graph edit distance and the reason behind t... | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"I-z2t5V9OYv",
"iclr_2022_yRYtnKAZqxU",
"iOyq6ACExSU",
"GwtmtbEEZyL",
"ODKhaVIrs7J",
"iclr_2022_yRYtnKAZqxU",
"QmnQt_61F85",
"Fs7op8mx3R-",
"-ogxX2Yrti6",
"qdva_6hVRy4",
"j73TxHqnBo",
"iclr_2022_yRYtnKAZqxU",
"iclr_2022_yRYtnKAZqxU",
"iclr_2022_yRYtnKAZqxU",
"iclr_2022_yRYtnKAZqxU",
"i... |
iclr_2022_AJO2mBSTOHl | Analytically Tractable Bayesian Deep Q-Learning | Reinforcement learning (RL) has gained increasing interest since the demonstration it was able to reach human performance on video game benchmarks using deep Q-learning (DQN). The current consensus of DQN for training neural networks (NNs) on such complex environments is to rely on gradient-descent optimization (GD). This consensus ignores the uncertainty of the NN's parameters which is a key aspect for the selection of an optimal action given a state. Although alternative Bayesian deep learning methods exist, most of them still rely on GD and numerical approximations, and they typically do not scale on complex benchmarks such as the Atari game environment. In this paper, we present how we can adapt the temporal difference Q-learning framework to make it compatible with the tractable approximate Gaussian inference (TAGI) which allows estimating the posterior distribution of NN's parameters using a closed-form analytical method. Throughout the experiments with on- and off-policy reinforcement learning approaches, we demonstrate that TAGI can reach a performance comparable to backpropagation-trained networks while using only half the number of hyperparameters, and without relying on GD or numerical approximations. | Reject | The authors combine TAGI with Q-learning to create an approximate Bayesian Q-learning algorithm. They evaluate their approach and show it has comparable performance to DQN.
All of the reviewers were positive about the potential of the paper. Unfortunately, the paper suffers from lack of clarity, of motivation, and comparison to relevant approaches. All of the reviewers brought up nearly the same concerns and I agree with these concerns. The authors have not address them and the reviewers do not think this paper is ready for publication at this time. I agree and recommend rejection.
Evaluating TAGI-DQN is a valuable contribution, but alone it is not sufficient. The reviewers have made many suggestions on how to improve the paper, and I hope the authors follow up on these suggestions. | train | [
"-KDFUFlkWYI",
"2kX2o_2km2f",
"wzirB36PyZE",
"ruxcNiYRYq",
"fE9_DOiYtZF",
"tHdn9UfuQtV",
"d9l519TzAo",
"Qgq50jxJ1n",
"uTJr0LWMSLq",
"z-dvrWLG-ou",
"fZosh-eEbPe",
"yUUa8IDdiIV"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the response, and for taking the time to answer these questions.\n\nI understand and appreciate the value of applying Bayesian techniques to reinforcement learning, and I think that is a worthwhile line of research to pursue. The broader point that I was trying to make with my review was that the pa... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
2,
3
] | [
"tHdn9UfuQtV",
"d9l519TzAo",
"yUUa8IDdiIV",
"uTJr0LWMSLq",
"wzirB36PyZE",
"fE9_DOiYtZF",
"z-dvrWLG-ou",
"fZosh-eEbPe",
"iclr_2022_AJO2mBSTOHl",
"iclr_2022_AJO2mBSTOHl",
"iclr_2022_AJO2mBSTOHl",
"iclr_2022_AJO2mBSTOHl"
] |
iclr_2022_mRF387I4Wl | FlowX: Towards Explainable Graph Neural Networks via Message Flows | We investigate the explainability of graph neural networks (GNNs) as a step towards elucidating their working mechanisms. While most current methods focus on explaining graph nodes, edges, or features, we argue that, as the inherent functional mechanism of GNNs, message flows are more natural for performing explainability. To this end, we propose a novel method here, known as FlowX, to explain GNNs by identifying important message flows. To quantify the importance of flows, we propose to employ the concept of Shapley values from cooperative game theory. To tackle the complexity of computing Shapley values, we propose an approximation scheme to compute Shapley values as initial assessments of flow importance. We then propose a learning algorithm to refine scores and improve explainability. Experimental studies on both synthetic and real-world datasets demonstrate that our proposed FlowX leads to improved explainability of GNNs. | Reject | The paper proposes an explanation method based on message flows, and shows better performance than the state-of-the-art methods.
The authors addressed most of the reviewer's comments but the reviewers are not enthusiastic. So I give my evaluation (some concerns are shared with reviewers and were not well addressed in the rebuttal)
Pros:
- State-of-the-art results on edge scoring.
Concerns:
- The main claim is not supported. The authors say "we argue that message flows are more natural for performing explainability. To this end, we propose..." But I see no such argument after the proposed method is introduced. Also, no advantage of the flow-based approach is shown. The experiments only show edge scoring, which ignores the layer-wise edge scoring. For this task, many existing methods are similar to the proposed approach in the sense that they measure how much information goes through subgraphs. Although the proposed method shows good performance in edge scoring, this is not necessarily because the proposed method is flow-based, and cannot be evidence of the superiority of flow-based methods. Fine details of the algorithm can contribute to the performance. In the rebuttal, the authors mentioned a virus infection dataset as a situation where the flow-based method can do beyond what existing methods can do. This kind of experiment should be shown in the paper to support the main claim.
- Difference between flows and walks is unclear. The authors imply that this paper is the first paper based on the flows, and reviewers understood so. The authors say a walk is "similar" to a flow but the difference is not explained. (the authors only talk about the difference in how to compute the score in Schnake et al.) Essential difference between walks and flows should be explained.
- The reason why the proposed method performs better than the existing methods is not analyzed. The authors say they "believe" that this is because the proposed method is based on flows, but what readers want to see is evidence.
- Presentation should be improved. Some formulations are unclear, e.g., I have no idea what F_?{t} means. If this would be the best notation the authors think of, it should be explained with a figure. Use another character if t is not the layer id. Notation is not consistent, e.g., edges are denoted by e in Section 2.1 while they are denoted by a later.
- Marginal technical contributions.
- High complexity. The proposed method seems not scalable even with the crude Monte Carlo approximation with a small number of samples.
With my concerns above and the reviewer's evaluations, I would not recommend acceptance. | train | [
"3_XuJiXC_4C",
"tL_4R6Wgsim",
"YgMKwPHRbx",
"cWvOZjWyQbB",
"n7SEULwhyD4",
"hbJ-kySJIxz",
"KlBWLrMxmS",
"7SuH_sFGM5i",
"aKXxib0wpK",
"BIx61Nnj0R",
"gLhkPNCAjB",
"EF_86HEfNvZ",
"pEztCmjGv-o"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the feedback! Considering my concerns are not well addressed, I do not think this paper is qualified enough to be published in ICLR. I will keep my original rating.",
" Thanks for the feedback! Considering the concerns raised by other reviewers, I will keep my original rating.",
"FlowX is an explan... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
5
] | [
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4
] | [
"iclr_2022_mRF387I4Wl",
"cWvOZjWyQbB",
"iclr_2022_mRF387I4Wl",
"EF_86HEfNvZ",
"BIx61Nnj0R",
"pEztCmjGv-o",
"7SuH_sFGM5i",
"YgMKwPHRbx",
"hbJ-kySJIxz",
"gLhkPNCAjB",
"iclr_2022_mRF387I4Wl",
"iclr_2022_mRF387I4Wl",
"iclr_2022_mRF387I4Wl"
] |
iclr_2022_O5Wr-xX0U2y | Deep Reinforcement Learning for Equal Risk Option Pricing and Hedging under Dynamic Expectile Risk Measures | Recently equal risk pricing, a framework for fair derivative pricing, was extended to consider coherent risk measures. However, all current implementations either employ a static risk measure or are based on traditional dynamic programming solution schemes that are impracticable in realistic settings: when the number of underlying assets is large or only historical trajectories are available. This paper extends for the first time the deep deterministic policy gradient algorithm to the problem of solving a risk averse Markov decision process that models risk using a time consistent dynamic expectile risk measure. Our numerical experiments, which involve both a simple vanilla option and a more exotic basket option, confirm that the new ACRL algorithm can produce high quality hedging strategies that produce accurate prices in simple settings, and outperform the strategies produced using static risk measures when the risk is evaluated at later points of time. | Reject | This paper proposes a risk-sensitive actor critic reinforcement learning (RL) method that optimizes the policy with respect to a dynamic (iterated) expectile risk measure. The expectile risk measure has the elicitability property and can be expressed as the minimizer of an expected scoring function, which is exploited in critic update. The proposed approach is applied to option pricing and hedging.
A main point of discussion was the applicability and effectiveness of the proposed method beyond particular financial problems. The original submission was indeed specialized to particular financial tasks. The authors have rewritten the paper in a way that it claims to propose a risk-sensitive RL method for the general MDP with finite horizon. This however leaves the question regarding the advantages of the proposed approach over existing methods of risk-sensitive RL, including those that work with non-coherent (dynamic) risk measures (since coherence is often not needed in domains outside finance). | train | [
"ZnJpDBRxghx",
"jSk9-YJoQlm",
"Mb-aB5mEpx",
"bB1mow8C6Y",
"cA0le0dYv7W",
"9uAiNbhYg84",
"aT4-5aCjssr",
"uwh9v_s2y_w",
"tQgW9obkm0n",
"AXOW7Oeyv3"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
" Thank you for your feedback, I think the changes you made improved the quality of your paper. I will keep my positive mark.",
"This paper targets a fundamentally important issue: risk-aversion in pricing and hedging. The authors claims the first algorithm to identify optimal risk averse for option hedging strat... | [
-1,
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
-1,
4,
2,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"tQgW9obkm0n",
"iclr_2022_O5Wr-xX0U2y",
"iclr_2022_O5Wr-xX0U2y",
"cA0le0dYv7W",
"aT4-5aCjssr",
"iclr_2022_O5Wr-xX0U2y",
"AXOW7Oeyv3",
"jSk9-YJoQlm",
"Mb-aB5mEpx",
"iclr_2022_O5Wr-xX0U2y"
] |
iclr_2022_scSheedMzl | Locally Invariant Explanations: Towards Causal Explanations through Local Invariant Learning | Locally interpretable model agnostic explanations (LIME) method is one of the most popular methods used to explain black-box models at a per example level. Although many variants have been proposed, few provide a simple way to produce high fidelity explanations that are also stable and intuitive. In this work, we provide a novel perspective by proposing a model agnostic local explanation method inspired by the invariant risk minimization (IRM) principle -- originally proposed for (global) out-of-distribution generalization -- to provide such high fidelity explanations that are also stable and unidirectional across nearby examples. Our method is based on a game theoretic formulation where we theoretically show that our approach has a strong tendency to eliminate features where the gradient of the black-box function abruptly changes sign in the locality of the example we want to explain, while in other cases it is more careful and will choose a more conservative (feature) attribution, a behavior which can be highly desirable for recourse. Empirically, we show on tabular, image and text data that the quality of our explanations with neighborhoods formed using random perturbations are much better than LIME and in some cases even comparable to other methods that use realistic neighbors sampled from the data manifold, where the latter is a popular strategy to obtain high quality explanations. This is a desirable property given that learning a manifold to either create realistic neighbors or to project explanations is typically expensive or may even be impossible. Moreover, our algorithm is simple and efficient to train, and can ascertain stable input features for local decisions of a black-box without access to side information such as a (partial) causal graph as has been seen in some recent works. | Reject | The reviewers are largely in agreement that this proposal would benefit from more clarity and comparison to key papers/findings in this space. While one reviewer is leaning towards acceptance, and their points were considered by the other reviewers, there wasn't a consensus towards aligning towardsa an acceptance. Thus, I recommend that the authors take advantage of the reviewers' comments to further improve their manuscript. | train | [
"jveSZUMJ20H",
"1OGzkbh85ZZ",
"YgN-6yONNUy",
"N3-mAmXxX16",
"-sRSzQxCoCQ",
"QbI04Ok38h2",
"WRZGe05hGBh",
"HinhttUYRMB",
"Cj0r7KtsJow",
"rSo39X9d1Tp",
"kVvtKWPm6Av",
"ZW9Y1HSGeC0",
"9RIkBXf9og"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks to all reviewers for their critical reviews. Additionally, thanks to reviewers tbk2 and 388N for the engaging discussions. We believe we have addressed most of the reviewers concerns. Please let us know if any more clarifications or explanations are required. Thank you.",
" We believe we have addressed m... | [
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
4
] | [
"iclr_2022_scSheedMzl",
"Cj0r7KtsJow",
"iclr_2022_scSheedMzl",
"ZW9Y1HSGeC0",
"QbI04Ok38h2",
"9RIkBXf9og",
"N3-mAmXxX16",
"YgN-6yONNUy",
"kVvtKWPm6Av",
"iclr_2022_scSheedMzl",
"iclr_2022_scSheedMzl",
"iclr_2022_scSheedMzl",
"iclr_2022_scSheedMzl"
] |
iclr_2022_b-ZaBVGx8Q | DP-REC: Private & Communication-Efficient Federated Learning | Privacy and communication efficiency are important challenges in federated training of neural networks, and combining them is still an open problem. In this work, we develop a method that unifies highly compressed communication and differential privacy (DP). We introduce a compression technique based on Relative Entropy Coding (REC) to the federated setting. With a minor modification to REC, we obtain a provably differentially private learning algorithm, DP-REC, and show how to compute its privacy guarantees. Our experiments demonstrate that DP-REC drastically reduces communication costs while providing privacy guarantees comparable to the state-of-the-art. | Reject | This submission describes an approach to compressing the communication in federated learning. The key idea is using a set of random samples from a prior distribution and then performing importance weighed sampling. The work performs an analysis of the privacy guarantees of this process and experimental evaluation.
The main issue with this work is the authors appear to be unaware that the basic problem they pose is solved in a more comprehensive and lossless way in a recent work https://arxiv.org/abs/2102.12099 (Feldman and Talwar, ICML 2021). That work shows that any differentially private randomizer can be compressed via a simpler algorithm that performs rejection sampling using a PRG. The algorithm does not loose privacy or utility (under standard cryptographic assumptions) while guaranteeing low communication. In contrast this work loses significantly in utility and provides opaque privacy guarantees.
This submission analyzes a randomized that adds Gaussian distribution and, in particular, the compression technique in (Feldman and Talwar) applies to it. The technique proposed in this work is very similar in spirit (with prior distribution corresponding to reference distribution in the earlier work.
In light of the earlier work I do not think the contributions in this submission are sufficient for publication. | train | [
"XCwix4ZY0-Y",
"Xy5Cp_I97L",
"2MkwjHDdKWw",
"IWgLHsF1rMb",
"zlHb9Gpf3A5",
"OaWNtEHnltf",
"U_E68GxTiYz",
"pwJFW3fjsg1",
"TkqW7ap0iSM",
"uwv-ZCvMMXG",
"FRps6lPsFok",
"64a9n8F4aOP",
"G0t5Znyyn_",
"kB290oSTOci",
"IBTCsljG3HC",
"0bIoyPAq-6q",
"jl1Hedbf4-h",
"widCFNdLjH5"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper studies differentially private algorithms in federated learning, and proposes to take advantage of the randomness in Relative Entropy Coding (REC) to achieve good privacy-utility trade offs while significantly saving the communication costs. On four benchmark datasets (MNIST, FEMNIST, Shakespeare, Stack... | [
6,
-1,
6,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
3,
-1,
4,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2022_b-ZaBVGx8Q",
"zlHb9Gpf3A5",
"iclr_2022_b-ZaBVGx8Q",
"kB290oSTOci",
"uwv-ZCvMMXG",
"2MkwjHDdKWw",
"TkqW7ap0iSM",
"iclr_2022_b-ZaBVGx8Q",
"IBTCsljG3HC",
"FRps6lPsFok",
"0bIoyPAq-6q",
"2MkwjHDdKWw",
"2MkwjHDdKWw",
"widCFNdLjH5",
"pwJFW3fjsg1",
"XCwix4ZY0-Y",
"iclr_2022_b-ZaBV... |
iclr_2022__LNdXw0BSx | Towards Coherent and Consistent Use of Entities in Narrative Generation | Large pre-trained language models (LMs) have demonstrated impressive capabilities in generating long, fluent text; however, there is little to no analysis on their ability to maintain entity coherence and consistency. In this work, we focus on the end task of narrative generation and systematically analyse the long-range entity coherence and consistency in generated stories. First, we propose a set of automatic metrics for measuring model performance in terms of entity usage. Given these metrics, we quantify the limitations of current LMs. Next, we propose augmenting a pre-trained LM with a dynamic entity memory in an end-to-end manner by using an auxiliary entity-related loss for guiding the reads and writes to the memory. We demonstrate that the dynamic entity memory increases entity coherence according to both automatic and human judgment and helps preserving entity-related information especially in settings with a limited context window. Finally, we also validate that our automatic metrics are correlated with human ratings and serve as a good indicator of the quality of generated stories. | Reject | This work analyzes the ability of pre-trained language models to maintain entity coherence and consistency in long narrative generation. Along with new automatic metrics for analyzing narrative generation, it proposes a memory-augmented model that allows tracking entities to improve narrative generation. Although all the reviewers appreciated the importance of the problem, the novelty of the proposed approach, as well as empirical improvements in a subset of experiments, they also acknowledge several major weaknesses including the lack of rigor in defining the method, the lack of clarity in writing (especially in the experiments section), insufficiently strong baselines, and an issue of reproducibility since the code cannot be released. These concerns were in part addressed during rebuttal, but not enough to accept the paper. | train | [
"mjYoS_jZeLl",
"db_gj2uG-nr",
"bJQkhjG0Vjy",
"AzaWQG0hvpw",
"LJDfD4Ra80",
"CJyebiFP2fQ",
"33zWLyLqptR",
"hWNLojU6vJ7",
"07xcir-UM_O",
"dw4tozbwgZZ",
"F0Ek0N0PRUi",
"qaA9gSQvpd6",
"l64HQEfAdu3",
"2gWdWN4Kcp9"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer, we hope you've had the chance to take a look at our response and paper revision. Please let us know if the response addressed your concerns or there is anything else that is still unclear. We are happy to provide further clarification.\n\nThank you for your time! ",
" Dear reviewer, we hope you'v... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
5,
4
] | [
"2gWdWN4Kcp9",
"l64HQEfAdu3",
"F0Ek0N0PRUi",
"iclr_2022__LNdXw0BSx",
"2gWdWN4Kcp9",
"l64HQEfAdu3",
"qaA9gSQvpd6",
"F0Ek0N0PRUi",
"dw4tozbwgZZ",
"iclr_2022__LNdXw0BSx",
"iclr_2022__LNdXw0BSx",
"iclr_2022__LNdXw0BSx",
"iclr_2022__LNdXw0BSx",
"iclr_2022__LNdXw0BSx"
] |
iclr_2022_iFf26yMjRdN | Federated Learning with Partial Model Personalization | We propose and analyze a general framework of federated learning with partial model personalization. Compared with full model personalization, partial model personalization relies on domain knowledge to select a small portion of the model to personalize, thus imposing a much smaller on-device memory footprint. We propose two federated optimization algorithms for training partially personalized models, where the shared and personal parameters are updated either simultaneously or alternately on each device, but only the shared parameters are communicated and aggregated at the server. We give convergence analyses of both algorithms for minimizing smooth nonconvex functions, providing theoretical support of them for training deep learning models. Our experiments on real-world image and text datasets demonstrate that (a) partial model personalization can obtain most of the benefit of full model personalization with a small fraction of personalized parameters, and, (b) the alternating update algorithm often outperforms the simultaneous update algorithm. | Reject | Dear authors,
I have read the reviewers and your careful rebuttals. I would have liked to see much more engagement from the reviewers. However, even after your rebuttal, no reviewer suggested acceptance, with two reviewers proposing reject (3) and two proposing weak reject (5).
The reviewers found the paper well written. I concur. The reviewers also notice that the contributions are very marginal compared to prior literature. Personalized FL formulation studied here was in a simpler form first proposed by Hanzely and Richtarik (Federated learning of a mixture of global and local models, 2020) and later generalized by Hanzely et al - a paper the authors cite. That work performed an in-depth analysis, also including the nonconvex case, which the authors (claim) did not notice. Compared to that work, the authors perform an analysis in the partial participation regime. However, partial participation is by now a standard technique which can usually be combined with other techniques without much difficulty. The authors tried to argue that their analysis approach is unique, but the reviewers remained unconvinced.
In summary, I think this is a solid piece of work which is perhaps judged, looking at the raw scores, a bit too harshly. However, most verbal comments are indeed fair. I am also of the opinion that the paper in its current form does not reach the necessary bar for acceptance. I would encourage the authors to carefully revise the manuscript, taking into account all feedback that they find useful. I think the paper can be improved, with not too much effort perhaps, to a state in which the bar could be reached.
Kind regards,
Area chair | train | [
"8DlX_NUYfoh",
"U4t5w8vTGUC",
"98CvjAMrGrt",
"-HjP5lpLJV",
"esX9DQSCkdA",
"FF6uouK_R70",
"tLdgLQWtc3H",
"94UFgpJ2oBk",
"NA2VchyA2c_",
"NDADjCqM2LJ",
"rg7mwFhTH3T"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I have to check the theory again as the authors clarified the main contribution is in theory. I want to start a discussion now so that the authors still have a chance to update the draft. \n\n(1) Shadow iterates is commonly used, and I think the previous mentioned proof for FedAvg for full client participation ca... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
5,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"FF6uouK_R70",
"iclr_2022_iFf26yMjRdN",
"rg7mwFhTH3T",
"NA2VchyA2c_",
"NDADjCqM2LJ",
"-HjP5lpLJV",
"94UFgpJ2oBk",
"iclr_2022_iFf26yMjRdN",
"iclr_2022_iFf26yMjRdN",
"iclr_2022_iFf26yMjRdN",
"iclr_2022_iFf26yMjRdN"
] |
iclr_2022_H3zl1mDHDTn | Lagrangian Method for Episodic Learning | This paper considers the problem of learning optimal value functions for finite-time decision tasks via saddle-point optimization of a nonlinear Lagrangian function that is derived from the $Q$-form Bellman optimality equation. Despite a long history of research on this topic in the literature, previous works on this general approach have been almost exclusively focusing on a linear special case known as the linear programming approach to RL/MDP. Our paper brings new perspectives to this general approach in the following aspects: 1) Inspired by the usually-used linear $V$-form Lagrangian, we proposed a nonlinear $Q$-form Lagrangian function and proved that it enjoys strong duality property in spite of its nonlinearity. The Lagrangian duality property immediately leads to a new imitation learning algorithm, which we applied to Machine Translation and obtained favorable performance on standard MT benchmark. 2) We pointed out a fundamental limit of existing works, which seeks to find minimax-type saddle points of the Lagrangian function. We proved that another class of saddle points, the maximin-type ones, turn out to have better optimality property. 3) In contrast to most previous works, our theory and algorithm are oriented to the undiscounted episode-wise reward, which is practically more relevant than the usually considered discounted-MDP setting, thus have filled a gap between theory and practice on the topic. | Reject | Although the submission studies an interesting question, many parts are not clear enough and the presentation needs to be improved. I encourage authors to revise the paper accordingly and resubmit in future venues. | train | [
"uLxOfNFc32k",
"CUS2BS-prHi",
"zTCXaawNJJw",
"AkwNnlSm4C5",
"5aI0sdAuMZ",
"vwME7Qa2XiC",
"k46vD0CRdf",
"EaG-E9EXTe",
"-cvZB06FcbH",
"u32j1TY_M6Hs",
"_53Uk7oErtp",
"ubJ6nV8UkR1",
"lLW14rPDyDC",
"Ua67dOLMbZr",
"JycHlrVtU2R"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" This response is dedicated to the reviewer's concern on the challenges of nonlinear optimization in our proposed algorithmic idea.\n\n(1)\"*The policy optimization procedure of taking gradients while replacing max_a with some heuristic (like using the behavior policy) is a bit unsatisfactory. Why is such a substi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"lLW14rPDyDC",
"iclr_2022_H3zl1mDHDTn",
"Ua67dOLMbZr",
"Ua67dOLMbZr",
"EaG-E9EXTe",
"lLW14rPDyDC",
"lLW14rPDyDC",
"lLW14rPDyDC",
"Ua67dOLMbZr",
"JycHlrVtU2R",
"JycHlrVtU2R",
"JycHlrVtU2R",
"iclr_2022_H3zl1mDHDTn",
"iclr_2022_H3zl1mDHDTn",
"iclr_2022_H3zl1mDHDTn"
] |
iclr_2022_pzgENfIRBil | Self-consistent Gradient-like Eigen Decomposition in Solving Schrödinger Equations | The Schrödinger equation is at the heart of modern quantum mechanics. Since exact solutions of the ground state are typically intractable, standard approaches approximate Schrödinger's equation as forms of nonlinear generalized eigenvalue problems $F(V)V = SV\Lambda$ in which $F(V)$, the matrix to be decomposed, is a function of its own top-$k$ smallest eigenvectors $V$, leading to a ``self-consistency problem''. Traditional iterative methods heavily rely on high-quality initial guesses of $V$ generated via domain-specific heuristics methods based on quantum mechanics. In this work, we eliminate such a need for domain-specific heuristics by presenting a novel framework, Self-consistent Gradient-like Eigen Decomposition (SCGLED) that regards $F(V)$ as a special ``online data generator'', thus allows gradient-like eigendecomposition methods in streaming $k$-PCA to approach the self-consistency of the equation from scratch in an iterative way similar to online learning. With several critical numerical improvements, SCGLED is robust to initial guesses, free of quantum-mechanism-based heuristics designs, and neat in implementation. Our experiments show that it not only can simply replace traditional heuristics-based initial guess methods with large performance advantage (achieved averagely 25x more precise than the best baseline in similar wall time), but also is capable of finding highly precise solutions independently without any traditional iterative methods. | Reject | This paper presents a numerical approach to solving the multi-body Schrodinger equation. Three reviews give low confidence scores and the one review with high confidence, and high score, is very brief and the reviewer appears to have a weak background in this area. My feeling is that the ICLR reviewer pool does not contain reviewers who are really competent to review this paper. There is a large literature in the physics community on this problem and the paper should be reviewed in an appropriate venue. This is especially true for evaluating the empirical results. If the mathematical techniques are relevant to general machine learning, and the authors want to have an impact on machine learning community, then it should be possible to give empirical results on a problem commonly used to evaluate machine learning methods at machine learning venues. Whether or not this is important for physics should be judged by physicists. In any case, the reviews are for the most part not enthusiastic. | val | [
"0AvVXKhdME",
"Wxsnm9ISU4i",
"OBjleY4EKBM",
"BunYp7PwY9O",
"6CXGqtn68GG",
"MlyfWk10er1",
"mGw9NLaq93U",
"RqhCXTXPOmb",
"pKqTXWaYIyE",
"LlEZhVpptvL",
"kg88-JGeNo9",
"RhkQcuoa9uf",
"jcop_jcwly4",
"IMHfTb2dVlA"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your attention! AI for Science started to emerge as one of the important application areas for machine learning, and as part of it, we would hope this work to appear in the machine learning community such as ICLR due to the following reasons:\n\nFirst, we intend to introduce and tackle the quantum p... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
2,
4
] | [
"OBjleY4EKBM",
"0AvVXKhdME",
"iclr_2022_pzgENfIRBil",
"iclr_2022_pzgENfIRBil",
"MlyfWk10er1",
"LlEZhVpptvL",
"kg88-JGeNo9",
"jcop_jcwly4",
"IMHfTb2dVlA",
"RhkQcuoa9uf",
"iclr_2022_pzgENfIRBil",
"iclr_2022_pzgENfIRBil",
"iclr_2022_pzgENfIRBil",
"iclr_2022_pzgENfIRBil"
] |
iclr_2022_FYUzzBPh_j | Communicating via Markov Decision Processes | We consider the problem of communicating exogenous information by means of Markov decision process trajectories. This setting, which we call a Markov coding game (MCG), generalizes both source coding and a large class of referential games. MCGs also isolate a problem that is important in decentralized control settings in which cheap-talk is not available---namely, they require balancing communication with the associated cost of communicating. We contribute a theoretically grounded approach to MCGs based on maximum entropy reinforcement learning and minimum entropy coupling that we call greedy minimum entropy coupling (GME). We show both that GME is able to outperform a relevant baseline on small MCGs and that GME is able to scale efficiently to extremely large MCGs. To the latter point, we demonstrate that GME is able to losslessly communicate binary images via trajectories of Cartpole and Pong, while simultaneously achieving the maximal or near maximal expected returns, and that it is even capable of performing well in the presence of actuator noise. | Reject | The paper proposes Markov coding game (MCG), which generalizes both source coding and a large class of referential games. All the reviews are negative. The reviewers think the work is not ready for publication in its current form. | train | [
"5KYYGtRI_WU",
"SuTtXvhVOGD",
"5WLPCQnlw-V",
"yTpONHPRTqy",
"er9o7CteU0i",
"a1Rt6wZDcEb",
"y5vY0XoetKU",
"qQ_e-DJe4E"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for their comments.\n\n1. We are having some trouble parsing this question, but are happy to discuss further if the reviewer wouldn’t mind clarifying.\n\n2. Indeed, if the MDP policy is deterministic, the sender cannot communicate any information. However, a central point of the submission i... | [
-1,
-1,
-1,
-1,
3,
3,
5,
3
] | [
-1,
-1,
-1,
-1,
4,
3,
3,
4
] | [
"a1Rt6wZDcEb",
"er9o7CteU0i",
"qQ_e-DJe4E",
"y5vY0XoetKU",
"iclr_2022_FYUzzBPh_j",
"iclr_2022_FYUzzBPh_j",
"iclr_2022_FYUzzBPh_j",
"iclr_2022_FYUzzBPh_j"
] |
iclr_2022_bUKyC0UiZcr | Temporal abstractions-augmented temporally contrastive learning: an alternative to the Laplacian in RL | In reinforcement learning (RL), the graph Laplacian has proved to be a valuable tool in the task-agnostic setting, with applications ranging from option discovery to dynamics-aware metric learning. Conveniently, learning the Laplacian representation has recently been framed as the optimization of a temporally-contrastive objective to overcome its computational limitations in large or even continuous state spaces (Wu et al., 2019). However, this approach relies on a uniform access to the state space S, and overlooks the exploration problem that emerges during the representation learning process. In this work, we reconcile such representation learning with exploration in a non-uniform prior setting, while recovering the expressive potential afforded by a uniform prior. Our approach leverages the learned representation to build a skill-based covering policy which in turn provides a better training distribution to extend and refine the representation. We also propose to integrate temporal abstractions captured by the learned skills into the representation, which encourages exploration and improves the representation’s dynamics-awareness. We find that our method scales better to challenging environments, and that the learned skills can solve difficult continuous navigation tasks with sparse rewards, where standard skill discovery methods are limited. | Reject | The topic of this paper is non-uniform priors and exploration in reinforcement learning with the graph Laplacian.
All reviewers appreciated several aspects of this work but they all also have several reservations.
Looking at the paper, reviews and discussions, I see the potential a very nice more general contribution. This potential is not fully realised as the paper stands now. Acceptance can therefore not be recommended. | train | [
"qM-azN8aTw-",
"_3qqiIK-7T",
"tTTEEZreBb3"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes an exploration strategy for adapting Lapliacan RL to settings without the possibility to uniformely sample from the state space. In particular, the work builds on the Wu et al. discussion of estimating a spectral decomposition of the state space. However, the estimation of the Laplacian's eigenv... | [
5,
6,
5
] | [
4,
4,
4
] | [
"iclr_2022_bUKyC0UiZcr",
"iclr_2022_bUKyC0UiZcr",
"iclr_2022_bUKyC0UiZcr"
] |
iclr_2022_6Jf6HX4MoLH | Motion Planning Transformers: One Model to Plan them All | Transformers have become the powerhouse of natural language processing and recently found use in computer vision tasks. Their effective use of attention can be used in other contexts as well, and in this paper, we propose a transformer-based approach for efficiently solving complex motion planning problems. Traditional neural network-based motion planning uses convolutional networks to encode the planning space, but these methods are limited to fixed map sizes, which is often not realistic in the real world. Our approach first identifies regions on the map using transformers to provide attention to map areas likely to include the best path and then applies traditional planners to generate the final collision-free path. We validate our method on a variety of randomly generated environments with different map sizes, demonstrating reduction in planning complexity and achieving comparable accuracy to traditional planners.
| Reject | The paper proposes a planning framework that uses a transformer-based architecture as an attention mechanism that guides the search of a traditional sample-based planner (e.g., RRT*). More specifically, features extracted from a sliding window over the 2D search space serve as input to a transformer that produces a mask indicating where to draw samples from. By constraining the search space for the sample-based planner, the method reduces the time required for planning. The method is compared to both traditional and learning-based planners on different 2D navigation tasks and found to improve sample complexity (and, in turn, computation time), while also being capable of generalizing to unseen and real-world maps.
The manner by which the method combines the advantages of sample-based planning with an attentional mechanism as a way to constrain the sampling process is interesting. As the reviewers emphasize, the experimental evaluation shows that this approach results in performance gains over both traditional (sample-based) and learning-based planners, while also being able to scale to larger maps as well as better generalize to out-of-distribution settings (compared to learning-based methods). These results support the value of both the overall approach as well as the architectural components (e.g., the transformer and the use of positional encoding). The reviewers initially raised a few concerns with the paper, the most notable of which are the need to include preprocessing in the overall computation time, the accuracy of some of the claims in the paper (e.g., with regards to generalizability), generalization to higher-dimensional domains, and the performance on the Dubins car domain. The authors responded to each of the reviews and updated the submission to address many of these concerns. However, questions still remain regarding whether or not the approach can be adapted to state/configuration spaces with more than two dimensions, something that traditional planners are readily capable of, and the unconvincing results on the Dubins car domain.
Overall, the paper proposes an interesting approach to an important problem that is relevant to the robotics and machine learning communities. The paper makes promising contributions to improve the efficiency of planning, however the significance of these contributions needs to be made clearer. | train | [
"qwrobVHTVZ",
"Pw4Ygh7s-XL",
"QPr5FY4pJUI",
"HgJxzPcIcJ9",
"LZhVVPbasmG",
"mhxH4LBRGwL",
"_fkdQWEC6K4",
"esYKlqoHYzj",
"NZPHZdZXaIA",
"VseLSFR_M-m",
"bpyG-ZTTIEg"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper proposes a transformer based trajectory estimator for 2D navigation. The proposed method is shown to not be limited by input size as other learning based approaches. In the experimental section, the authors show that their estimated trajectories result in speed up and performance boost for planning in c... | [
6,
-1,
-1,
6,
-1,
6,
-1,
-1,
-1,
-1,
3
] | [
4,
-1,
-1,
4,
-1,
3,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2022_6Jf6HX4MoLH",
"NZPHZdZXaIA",
"esYKlqoHYzj",
"iclr_2022_6Jf6HX4MoLH",
"_fkdQWEC6K4",
"iclr_2022_6Jf6HX4MoLH",
"mhxH4LBRGwL",
"HgJxzPcIcJ9",
"qwrobVHTVZ",
"bpyG-ZTTIEg",
"iclr_2022_6Jf6HX4MoLH"
] |
iclr_2022_qjN4h_wwUO | GradMax: Growing Neural Networks using Gradient Information | The architecture and the parameters of neural networks are often optimized independently, which requires costly retraining of the parameters whenever the architecture is modified. In this work we instead focus on growing the architecture without requiring costly retraining. We present a method that adds new neurons during training without impacting what is already learned, while improving the training dynamics. We do this by maximizing the gradients of the new neurons and find an approximation to the optimal initialization by means of the singular value decomposition (SVD). We call this technique Gradient Maximizing Growth (GradMax) and demonstrate its effectiveness in variety of vision tasks and architectures. | Accept (Poster) | This paper looks into growing neural networks, and finds an improved approach to the initialisations of new layers, viz by maximising the gradient norm. Simple, straightforward, neat, and no good reason to reject. It will benefit those who are using growing NNs. | train | [
"SCi-8_c8JJ4",
"e1BW57XrFNA",
"0ZNZD_IiQGV",
"1-dJRVN4Vis",
"GDjs00Thgg",
"Gfve0VknSAc",
"yG9cWnsqV2",
"UiU0ERShmIi",
"Dmx7CPdc0C_",
"BkHqus2vWsm",
"Wu-qa1A8TOx",
"tZqKHjjw_ne",
"Auq2T0VWBYW"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper presents a new method (as far as I know) of growing neural networks. This can be viewed as an approach to finding the meta-parameter corresponding to network size. The main idea is to add units such that they maximize the norm of the gradient. This is suggested to be superior to methods that grow by add... | [
6,
6,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
4,
3,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2022_qjN4h_wwUO",
"iclr_2022_qjN4h_wwUO",
"SCi-8_c8JJ4",
"Auq2T0VWBYW",
"Gfve0VknSAc",
"iclr_2022_qjN4h_wwUO",
"GDjs00Thgg",
"iclr_2022_qjN4h_wwUO",
"Auq2T0VWBYW",
"Wu-qa1A8TOx",
"SCi-8_c8JJ4",
"e1BW57XrFNA",
"iclr_2022_qjN4h_wwUO"
] |
iclr_2022_kEvhVb452CC | Transformed CNNs: recasting pre-trained convolutional layers with self-attention | Vision Transformers (ViT) have recently emerged as a powerful alternative to convolutional networks (CNNs). Although hybrid models attempt to bridge the gap between these two architectures, the self-attention layers they rely on induce a strong computational bottleneck, especially at large spatial resolutions. In this work, we explore the idea of reducing the time spent training these layers by initializing them from pre-trained convolutional layers. This enables us to transition smoothly from any pre-trained CNN to its functionally identical hybrid model, called Transformed CNN (T-CNN). With only 50 epochs of fine-tuning, the resulting T-CNNs demonstrate significant performance gains over the CNN as well as substantially improved robustness. We analyze the representations learnt by theT-CNN, providing deeper insights into the fruitful interplay between convolutions and self-attention. | Reject | The paper proposes a method to accelerate training of an architectural hybrid of Transformers and CNNs: first train a CNN and then use the learned parameters to initialize a more general Transformed CNN (T-CNN) model; subsequently continue training the T-CNN.
Reviewers ratings are marginal, with three "marginally above threshold" and one "marginally below threshold". However, no reviewer makes a compelling argument for acceptance, and all reviewers point to significant weaknesses in the work. Reviewer ojmG: "novelty of the proposed method is limited" and "do not always reach the performance of end-to-end Transformers". Reviewer Q4Pp: "experiments are very limited" and also (after rebuttal): "it would good to provide some experiments on a dataset different to ImageNet". Reviewer ZjBY: "proposed model is not compared with many of the existing model architectures" and (after rebuttal): "would benefit from additional experimental analysis". Reviewer zV42: "limited novelty prevents me from giving a higher rating".
In summary, while reviewer ratings span either side of above/below the acceptance threshold, the reviewer comments point to limited novelty and limited experimental impact. Results appear not particularly surprising or significant: while the method provide some savings in training time, it does not seem to ultimately improve top accuracy on tasks and still lags behind the latest vision transformer architectures. The author response did not substantially change reviewer opinion. The AC has also taken a detailed look at the paper and does not believe the contribution to be of sufficient significance to warrant acceptance. | train | [
"LUH2XIp9WTJ",
"Xy2w8VnhLAN",
"G4xnf9btMYv",
"vEb1zEKoHDE"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This work proposes an approach to bridge CNNs and vision transformers for image recognition. The idea is to replace the last convolutional stage of a ResNet by a self-attention layer which is initialized from the weights of the convolutional stage. The approach is shown to improve the performance of the CNN, espec... | [
6,
5,
6,
6
] | [
4,
2,
3,
4
] | [
"iclr_2022_kEvhVb452CC",
"iclr_2022_kEvhVb452CC",
"iclr_2022_kEvhVb452CC",
"iclr_2022_kEvhVb452CC"
] |
iclr_2022_vSix3HPYKSU | Message Passing Neural PDE Solvers | The numerical solution of partial differential equations (PDEs) is difficult, having led to a century of research so far. Recently, there have been pushes to build neural--numerical hybrid solvers, which piggy-backs the modern trend towards fully end-to-end learned systems. Most works so far can only generalize over a subset of properties to which a generic solver would be faced, including: resolution, topology, geometry, boundary conditions, domain discretization regularity, dimensionality, etc. In this work, we build a solver, satisfying these properties, where all the components are based on neural message passing, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators. We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes. In order to encourage stability in training autoregressive models, we put forward a method that is based on the principle of zero-stability, posing stability as a domain adaptation problem. We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, discretization, etc. in 1D and 2D. Our model outperforms state-of-the-art numerical solvers in the low resolution regime in terms of speed, and accuracy. | Accept (Spotlight) | This paper proposes a message passing neural network to solve PDEs. The paper has sound motivation, clear methodology, and extensive empirical study. However, on the other hand, some reviewers also raised their concerns, especially regarding the lack of clear notations and sufficient discussions on the difference between the proposed method and previous works. Furthermore, there is no ablation study and the generalization to multiple spatial resolution is not clearly explained. The authors did a very good job during the rebuttal period: many concerns/doubts/questions from the reviewers were successfully addressed and additional experiments have been performed to support the authors' answers. As a result, several reviewers decided to raise their scores, and the overall assessment on the paper turned to be quite positive. | train | [
"GJoCUx0EBKS",
"1cw32dovnf4",
"ue0kULtSVtO",
"4veCVCHLY3",
"vWPewHNQmtw",
"iRBtJ19juq0",
"-tf2fm62z-f",
"_7Habv975rW",
"ZElPulyKksg"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper utilizes neural message passing to propose another neural solver for PDEs. \n I think the paper has a clarity issues and still in its early stages from that perspective. \n\nIn particular, the METHOD section is very vague and not clearly explained. Notation are introduced without definition and no expla... | [
6,
-1,
-1,
-1,
-1,
-1,
8,
6,
8
] | [
3,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"iclr_2022_vSix3HPYKSU",
"ZElPulyKksg",
"_7Habv975rW",
"GJoCUx0EBKS",
"-tf2fm62z-f",
"iclr_2022_vSix3HPYKSU",
"iclr_2022_vSix3HPYKSU",
"iclr_2022_vSix3HPYKSU",
"iclr_2022_vSix3HPYKSU"
] |
iclr_2022_tQ2yZj4sCnk | Divergence-Regularized Multi-Agent Actor-Critic | Entropy regularization is a popular method in reinforcement learning (RL). Although it has many advantages, it alters the RL objective and makes the converged policy deviate from the optimal policy of the original Markov Decision Process. Though divergence regularization has been proposed to settle this problem, it cannot be trivially applied to cooperative multi-agent reinforcement learning (MARL). In this paper, we investigate divergence regularization in cooperative MARL and propose a novel off-policy cooperative MARL framework, divergence-regularized multi-agent actor-critic (DMAC). Mathematically, we derive the update rule of DMAC which is naturally off-policy, guarantees a monotonic policy improvement and is not biased by the regularization. DMAC is a flexible framework and can be combined with many existing MARL algorithms. We evaluate DMAC in a didactic stochastic game and StarCraft Multi-Agent Challenge and empirically show that DMAC substantially improves the performance of existing MARL algorithms. | Reject | The paper presents multi-agent RL framework that uses the divergence between the learned policies and a target policy as a penalty that pushes the agent to learn cooperative strategies. The proposed method is built on top of an existing one (DAPO, Wang et al., 2019). Empirical experiments clearly show the advantage of the proposed method.
The reviews for this paper are mixed and borderline. The reviewers appreciate the experiments reported in the paper and that indicate the advantage of the proposed method. But two reviewers do not think that the proposed analysis is sufficiently novel compared to an existing one (DAPO). The responses provided by the authors were appreciated, but did not dissipate these concerns. | train | [
"lMLSFqTbT_",
"UzT3bKhk3gi",
"ntf5uh_tybA",
"OPn7hf6Jz_",
"6vYTshGKYla",
"7UWttrLUWSB",
"DAsQjMrcgXc",
"GKDMy2avEz",
"kLckhvvFxyb",
"XxdP15fPHT",
"nAG4kztRro5",
"sIidNLsFur",
"y0TImMHjifM",
"H7fL6AVZZPU"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your positive feedback. We will certainly release the code if the paper is accepted. ",
" For your question 2.(2), our paper focuses on the cooperative MARL algorithms, so we think the single agent on-policy algorithms such as PPO and IMPALA are out of scope. Though COMA has less satisfying perform... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
3,
4
] | [
"ntf5uh_tybA",
"H7fL6AVZZPU",
"kLckhvvFxyb",
"y0TImMHjifM",
"H7fL6AVZZPU",
"iclr_2022_tQ2yZj4sCnk",
"H7fL6AVZZPU",
"nAG4kztRro5",
"sIidNLsFur",
"y0TImMHjifM",
"iclr_2022_tQ2yZj4sCnk",
"iclr_2022_tQ2yZj4sCnk",
"iclr_2022_tQ2yZj4sCnk",
"iclr_2022_tQ2yZj4sCnk"
] |
iclr_2022_eVzy-BWKY6Z | Edge Rewiring Goes Neural: Boosting Network Resilience via Policy Gradient | Improving the resilience of a network protects the system from natural disasters and malicious attacks.
This is typically achieved by introducing new edges, which however may reach beyond the maximum number of connections a node could sustain.
Many studies then resort to the degree-preserving operation of rewiring, which swaps existing edges $AC, BD$ to new edges $AB, CD$.
A significant line of studies focuses on this technique for theoretical and practical results while leaving three limitations: network utility loss, local optimality, and transductivity.
In this paper, we propose ResiNet, a reinforcement learning (RL)-based framework to discover Resilient Network topologies against various disasters and attacks.
ResiNet is objective agnostic which allows the utility to be balanced by incorporating it into the objective function.
The local optimality, typically seen in greedy algorithms, is addressed by casting the cumulative resilience gain into a sequential decision process of step-wise rewiring.
The transductivity, which refers to the necessity to run a computationally intensive optimization for each input graph, is lifted by our variant of RL with auto-regressive permutation-invariant variable action space.
ResiNet is armed by our technical innovation, Filtration enhanced GNN (FireGNN), which distinguishes graphs with minor differences.
It is thus possible for ResiNet to capture local structure changes and adapt its decision among consecutive graphs, which is known to be infeasible for GNN.
Extensive experiments demonstrate that with a small number of rewiring operations, ResiNet achieves a near-optimal resilience gain on multiple graphs while balancing the utility, with a large margin compared to existing approaches. | Reject | The paper proposes an RL technique for dealing with the problem of network (graph) rewiring for robustness against attacks. Graph rewiring has been studied in a variety of fields, including graph theory (graph abstraction), graph ML (adversarial robustness, performance of GNNs), and combinatorial optimization. Reviewers had concerns with novelty, the correctness of some of the statements, and empirical evalution (in particular, baselines and scalability). While the rebuttal addressed some of the concerns, the overall feel about the paper is lukewarm and the AC believes the paper is below the bar. | train | [
"WZ3seeZDmV",
"2txKasvugrr",
"00bcFOkU2Qr",
"L4KMBUwM6To",
"yNMSL_BAIY",
"h2aNR2wBbM",
"HVAeCg8wrHp",
"nV3dF3PcxDc",
"n8QVoUftc7J",
"GCOfgJv3UiU",
"bRi6P1r1UCi",
"_qTl705IHH",
"qHVFBN2FxF",
"w1n5wt2moWn",
"j_UoNs756a",
"q16ikwyuB_N"
] | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Based on the comments given by the first reviewer, which is entirely different from the feedbacks given by other reviewers, we realize that the reviewer may have looked into the problem from the perspective of practical power systems. The detail of the power system is essential, but it is not the focus of our pap... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5
] | [
"j_UoNs756a",
"00bcFOkU2Qr",
"L4KMBUwM6To",
"yNMSL_BAIY",
"bRi6P1r1UCi",
"nV3dF3PcxDc",
"n8QVoUftc7J",
"iclr_2022_eVzy-BWKY6Z",
"w1n5wt2moWn",
"iclr_2022_eVzy-BWKY6Z",
"_qTl705IHH",
"qHVFBN2FxF",
"q16ikwyuB_N",
"h2aNR2wBbM",
"iclr_2022_eVzy-BWKY6Z",
"iclr_2022_eVzy-BWKY6Z"
] |
iclr_2022_1O5UK-zoK8g | Adaptive Generalization for Semantic Segmentation | Out-of-distribution robustness remains a salient weakness of current state-of-the-art models for semantic segmentation. Until recently, research on generalization followed a restrictive assumption that the model parameters remain fixed after the training process. In this work, we empirically study an adaptive inference strategy for semantic segmentation that adjusts the model to the test sample before producing the final prediction. We achieve this with two complementary techniques. Using Instance-adaptive Batch Normalization (IaBN), we modify normalization layers by combining the feature statistics acquired at training time with those of the test sample. We next introduce a test-time training (TTT) approach for semantic segmentation, Seg-TTT, which adapts the model parameters to the test sample using a self-supervised loss. Relying on a more rigorous evaluation protocol compared to previous work on generalization in semantic segmentation, our study shows that these techniques consistently and significantly outperform the baseline and attain a new state of the art, substantially improving in accuracy over previous generalization methods. | Reject | This submission presents a technique to improve generalization of urban scenes segmentation. Based on a pre-trained deep net on synthetic data, the approach aims at adapting statistics on real target domain such as Cityscapes, BDD or IDD datasets using an Instance-adaptive Batch Normalization (IaBN) at test time. Results are reported on several synthetic to real scenarios.
Most of the reviewers were not convinced by the approach and have raised several issues. After rebuttal and discussion, no one really changed her/his mind. The novelty of the proposed method is limited to the use of the existing IaBN in this context except the one-sample adaptation. Although the proposed method is effective on some benchmarks, the extra processing time may be a significant limitation. Additional comparisons are necessary. We encourage the authors to consider the reviewers feedbacks for future publication. | val | [
"d6sgjwnrLe",
"NSAOQ6JmVN8",
"5FdxRmD-FO",
"icga9j7bsvsB",
"2ab-kPP2YOCc",
"lGItVX-1Ynv",
"bzHqoZrspLk",
"fKgU61vajKw",
"sy8Ry-pFUk",
"H07JreMKnr",
"F_AtFaHN-6L"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your feedback!\\\nWe closely inspected all the previous works referenced in the reviews, but could not find any meaningful overlap with the *empirical* novelty of our study. We believe that our analysis offers important insights and a fresh perspective on domain generalization for semantic segmentat... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
4
] | [
"NSAOQ6JmVN8",
"2ab-kPP2YOCc",
"H07JreMKnr",
"fKgU61vajKw",
"F_AtFaHN-6L",
"sy8Ry-pFUk",
"iclr_2022_1O5UK-zoK8g",
"iclr_2022_1O5UK-zoK8g",
"iclr_2022_1O5UK-zoK8g",
"iclr_2022_1O5UK-zoK8g",
"iclr_2022_1O5UK-zoK8g"
] |
iclr_2022_33nhOe3cTd | Spending Thinking Time Wisely: Accelerating MCTS with Virtual Expansions | One of the most important AI research questions is to trade off computation versus performance, since "perfect rational" exists in theory but it is impossible to achieve in practice. Recently, Monte-Carlo tree search (MCTS) has attracted considerable attention due to the significant improvement of performance in varieties of challenging domains. However, the expensive time cost during search severely restricts its scope for applications. This paper proposes the Virtual MCTS (V-MCTS), a variant of MCTS that mimics the human behavior that spends adequate amounts of time to think about different questions. Inspired by this, we propose a strategy that converges to the ground truth MCTS search results with much less computation. We give theoretical bounds of the V-MCTS and evaluate the performance in $9 \times 9$ Go board games and Atari games. Experiments show that our method can achieve similar performances as the original search algorithm while requiring less than $50\%$ number of search times on average.
We believe that this approach is a viable alternative for tasks with limited time and resources. | Reject | The paper proposes Virtual MCTS, an early-termination rule for MCTS to improve its efficiency.
The basic idea is to introduce a termination rule that prunes the search process when the final policy at the root node is unlikely to change from the current one. The proposed approach is empirically evaluated on 9x9 Go and Atari games.
After reading the authors' feedback, all reviewers participated in the discussion without reaching a consensus.
Although all reviewers appreciated the authors' answers to their concerns, only one reviewer voted for acceptance. The other two reviewers, while acknowledging some merits, still have concerns: the technical contribution is minor, the theoretical findings are quite trivial, it is unclear when the proposed termination strategy is could help.
In summary, this paper is borderline and I think it still needs some work to clearly break the bar of a top conference. | train | [
"xStWUlipWSE",
"BTlxHa6ux88",
"IFt01wF0SRK",
"AjocZgyr4RjJ",
"9QYIOADkfVx",
"yYpGS93-LxU",
"nBxJDsYNNx",
"XyqcGqpubSZ"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your updates and explanations!\n\nWe agree that it should be \"(Coulom, 2007)\" here and we will update this in the next revised version.\nThank you again for your detailed explanations!",
" Thank you for your responses. I have updated my score accordingly.\n\nJust to come back to this one (really... | [
-1,
-1,
8,
-1,
-1,
-1,
5,
3
] | [
-1,
-1,
4,
-1,
-1,
-1,
4,
5
] | [
"BTlxHa6ux88",
"AjocZgyr4RjJ",
"iclr_2022_33nhOe3cTd",
"IFt01wF0SRK",
"XyqcGqpubSZ",
"nBxJDsYNNx",
"iclr_2022_33nhOe3cTd",
"iclr_2022_33nhOe3cTd"
] |
iclr_2022_h4EOymDV3vV | Diffusion-Based Representation Learning | Score-based methods represented as stochastic differential equations on a continuous time domain have recently proven successful as a non-adversarial generative model. Training such models relies on denoising score matching, which can be seen as multi-scale denoising autoencoders. Here, we augment the denoising score-matching framework to enable representation learning without any supervised signal. GANs and VAEs learn representations by directly transforming latent codes to data samples. In contrast, the introduced diffusion based representation learning relies on a new formulation of the denoising score-matching objective and thus encodes information needed for denoising. We illustrate how this difference allows for manual control of the level of details encoded in the representation. Using the same approach, we propose to learn an infinite-dimensional latent code which achieves improvements of state-of-the-art models on semi-supervised image classification. As a side contribution, we show how adversarial training in score-based models can improve sample quality and improve sampling speed using a new approximation of the prior at smaller noise scales. | Reject | The paper proposes extracting multiple-scale features using denoising score matching. Reviewers pointed out the limited novelty in the work and that it does not cite various previous work and how it connects to them. The paper needs some further polishing on the writing, and in making the use of lambda divergences more rigorous and principled as explained in the comment of Reviewer VdM1 . | train | [
"4bTkrrWAE3",
"SEdtPhPdQW_",
"JeEKVkUWEN",
"7AbRFO9qPx6",
"YGOoCTWcw1g",
"LJy1XLINBU",
"RRXMZqBoPW",
"3UXMrei3ezN",
"4AGz3RF5m23",
"EeXZ9pM1ZkN",
"LJ2kG75J0u8",
"DoT4pWMjLhcP",
"ibcPF75Z_fv",
"j7yRJNJIdxF",
"jPkwJFKA7RV",
"1UW5AIZp0zF"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" I would like to thank the authors for their careful response and revision. The rebuttal has addressed some of my initial concerns and I would like to acknowledge this by increasing my review score. However, I still think the novelty of this work is marginal, and the optimization based on lambda-divergence is less... | [
-1,
5,
5,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
-1,
4,
5,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
"EeXZ9pM1ZkN",
"iclr_2022_h4EOymDV3vV",
"iclr_2022_h4EOymDV3vV",
"jPkwJFKA7RV",
"DoT4pWMjLhcP",
"iclr_2022_h4EOymDV3vV",
"ibcPF75Z_fv",
"LJy1XLINBU",
"JeEKVkUWEN",
"SEdtPhPdQW_",
"1UW5AIZp0zF",
"LJy1XLINBU",
"JeEKVkUWEN",
"SEdtPhPdQW_",
"1UW5AIZp0zF",
"iclr_2022_h4EOymDV3vV"
] |
iclr_2022_OCgCYv7KGZe | Auto-Encoding Inverse Reinforcement Learning | Reinforcement learning (RL) provides a powerful framework for decision-making, but its application in practice often requires a carefully designed reward function. Inverse Reinforcement Learning (IRL) has shed light on automatic reward acquisition, but it is still difficult to apply IRL to solve real-world tasks. In this work, we propose Auto-Encoding Inverse Reinforcement Learning (AEIRL), a robust and scalable IRL framework, which belongs to the adversarial imitation learning class. To recover reward functions from expert demonstrations, AEIRL utilizes the reconstruction error of an auto-encoder as the learning signal, which provides more information for optimizing policies, compared to the binary logistic loss. Subsequently, we use the derived objective functions to train the reward function and the RL agent. Experiments show that AEIRL performs superior in comparison with state-of-the-art methods in the MuJoCo environments. More importantly, in more realistic settings, AEIRL shows much better robustness when the expert demonstrations are noisy. Specifically, our method achieves $16\%$ relative improvement compared to the best baseline FAIRL on clean expert data and $38\%$ relative improvement compared to the best baseline PWIL on noisy expert data both with the metric overall averaged scaled rewards. | Reject | This work addresses the problem of learning representations from noisy expert demonstrations in in adversarial imitation learning. The authors build on top of GAIL, which utilizes a discriminator to model a "pseudo"-reward from demonstrations. In this work, the discriminator is replaced with an auto-encoder. The authors hypothesis is that using an auto-encoder helps in 2 ways: 1) denoising expert trajectories for more "robust" learning; 2) using the reconstruction error (instead of binary classification loss) to distinguis experts from samples provides more informative signal for reward learning.
**Strengths**
on a global perspective this work is well motivated
a novel algorithmic variant of GAIL is proposed
thorough experimental evaluation
**weaknesses**
The manuscript doesn't clearly distinguish between adversarial imitation learning algorithms (like GAIL) and "true" inverse reinforcement learning algorithms. This makes it unclear what the real goal of the proposed method is. The ultimate goal of adversarial IL is to learn a policy (by inferring a pseudo-reward at "train" time which is then never used again), while the primary goal of IRL is to learn a reward function at train time, which can then be used at test time. The manuscript motivates the algorithm by saying it will have a more informative signal for learning reward functions, but the algorithm itself is an adversarial IL algorithm which primary goal is to learn a policy from demonstrations. Overall, makes the evaluation and analysis confusing. Ideally, the authors would have focussed on the question "Does the reconstruction error lead to better policies?" (through better pseudo-reward modeling) - or would have extended an IRL method.
Second, the motivation is that the autoencoder helps with more "robust" learning, but it's unclear to me that the evaluation really shows that learning is more robust (also because "robustness" is not clearly defined)
The experimental evaluation is a bit of a mixed bag, and it's a unclear why the new algorithm performs better on non-noisy data (when compared to baselines), but not less so on the noisy data.
**Summary**
Overall, this work provides a promising direction, however in it's current form the manuscript is not yet ready for publication. | test | [
"goeghmoygVE",
"XPF5T7bnN1Y",
"bMxpPvA-uZ",
"eRY3O5jamoTT",
"lnHN_dREco",
"LFfGoUYP0p0",
"tIIj_anQ9-j",
"reC5K81W55-",
"eXDTQ0S98GA",
"huCyqHJfFL2"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors present an architecture that utilizes autoencoding as an approach to inverse reinforcement learning. They present a mathematical formalism, pseudocode, and some empirical analyses on MuJoCo tasks. The paper is an interesting approach to IRL, using minimax games and autoencoding to enhance training. ... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
3
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"iclr_2022_OCgCYv7KGZe",
"bMxpPvA-uZ",
"LFfGoUYP0p0",
"goeghmoygVE",
"huCyqHJfFL2",
"eXDTQ0S98GA",
"reC5K81W55-",
"iclr_2022_OCgCYv7KGZe",
"iclr_2022_OCgCYv7KGZe",
"iclr_2022_OCgCYv7KGZe"
] |
iclr_2022_wYqLTy4wkor | Grounding Aleatoric Uncertainty in Unsupervised Environment Design | In reinforcement learning (RL), adaptive curricula have proven highly effective for learning policies that generalize well under a wide variety of changes to the environment. Recently, the framework of Unsupervised Environment Design (UED) generalized notions of curricula for RL in terms of generating entire environments, leading to the development of new methods with robust minimax-regret properties. However, in partially-observable or stochastic settings (those featuring aleatoric uncertainty), optimal policies may depend on the ground-truth distribution over the aleatoric features of the environment. Such settings are potentially problematic for curriculum learning, which necessarily shifts the environment distribution used during training with respect to the fixed ground-truth distribution in the intended deployment environment. We formalize this phenomenon as curriculum-induced covariate shift, and describe how, when the distribution shift occurs over such aleatoric environment parameters, it can lead to learning suboptimal policies. We then propose a method which, given black box access to a simulator, corrects this resultant bias by aligning the advantage estimates to the ground-truth distribution over aleatoric parameters. This approach leads to a minimax-regret UED method, SAMPLR, with Bayes-optimal guarantees. | Reject | The paper tackles the problem of covariate shift in adaptive curriculum learning. Unfortunately, the paper lacks clarity and the experiments are insufficient. The author response clarified the notation and corrected many typos, however, the paper remains conceptually unclear as pointed out by the reviewers. Hence this work is not ready for publication. | train | [
"nCb0-mlKTaq",
"OMWYXbVd-p_",
"U4UEtHTZ1rF",
"8WpQbwHbmvz",
"MgwWx8w8_uT",
"C4D0mGTmptlr",
"suLRxHrQfcJ",
"1tVThwJCDY8",
"RacGuMgfKOf",
"5wXcDbGo3WH",
"3nxSe7OiZe0"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their detailed responses, and the updates to the paper notation. The idea behind the paper seems interesting, although I think that only using experiments where estimating $p(\\theta|\\tau)$ is trivial really takes away from the empirical evaluation. However, the other reviewers also appea... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
3,
4
] | [
"suLRxHrQfcJ",
"iclr_2022_wYqLTy4wkor",
"3nxSe7OiZe0",
"3nxSe7OiZe0",
"RacGuMgfKOf",
"5wXcDbGo3WH",
"1tVThwJCDY8",
"iclr_2022_wYqLTy4wkor",
"iclr_2022_wYqLTy4wkor",
"iclr_2022_wYqLTy4wkor",
"iclr_2022_wYqLTy4wkor"
] |
iclr_2022_6Dz7RiRiMFd | 3D-Transformer: Molecular Representation with Transformer in 3D Space | Spatial structures in the 3D space are important to determine molecular properties. Recent papers use geometric deep learning to represent molecules and predict properties. These papers, however, are computationally expensive in capturing long-range dependencies of input atoms; and more importantly, they have not considered the non-uniformity of interatomic distances, thus failing to learn context-dependent representations at different scales. To deal with such issues, we introduce 3D-Transformer, a variant of the Transformer for molecular representations that incorporates 3D spatial information. 3D-Transformer operates on a fully-connected graph with direct connections between atoms. To cope with the non-uniformity of interatomic distances, we develop a multi-scale self-attention module that exploits local fine-grained patterns with increasing contextual scales. As molecules of different sizes rely on different kinds of spatial features, we design an adaptive position encoding module that adopts different position encoding methods for small and large molecules. Finally, to attain the molecular representation from atom embeddings, we propose an attentive farthest point sampling algorithm that selects a portion of atoms with the assistance of attention scores, overcoming handicaps of the virtual node and previous distance-dominant downsampling methods. We validate 3D-Transformer across three important scientific domains: quantum chemistry, material science, and proteomics. Our experiments show significant improvements over state-of-the-art models on the crystal property prediction task and the protein-ligand binding affinity prediction task, and show better or competitive performance in quantum chemistry molecular datasets. This work provides clear evidence that biochemical tasks can gain consistent benefits from 3D molecular representations and different tasks require different position encoding methods. | Reject | The authors present a method to learn representations of 3D atomic structure. They consider two cases: "small" and "large" molecules based on a metric that takes the spatial extension and number of atoms in the molecule into account. Small molecules are represented by an interatomic distance map. Large molecules are represented by a "sinusoidal function-based absolute position encoding method". Both settings make use of a transformer architecture on top of the initial representation. The authors also introduce a subsampling step to select a subset of points/atoms and aggregate information from these. Experimental results are shown for datasets relating to small molecule property prediction, protein-ligand binding and a dataset from material science on metalorganic compounds.
Strengths:
- interesting modification of transformer architecture dedicated to the chemical compound.
Weaknesses:
- Poor presentation of methods with respect to prior work
- Limited technical novelty. The distance map representation for small molecules and the sinusoidal function-based absolute position encoding method for larger molecules have previously been proposed. Many components are built upon the design of PointNet++(Qi et al., 2017b) without significant modifications. The proposed "3D-Transformer" is very similar to an attention-based PointNet++ that is specially designed for molecular data.
- Experiments are applied only on classification tasks.
All reviewers voted for rejection. I recommend the authors to address the limitations listed above by improving the presentation with respect to prior work, clarifying more the novelty of the methods and including a more diverse range of experiments. | train | [
"0bCO3B0M2-h",
"VDvHw1lOJnxR",
"FFGQitBnMH",
"RtseAGnvGMd",
"6peg8ihyIpv",
"4eGvve7AhdO",
"MZT5thjpk2s",
"7oZEyk_DnjG"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We appreciate your valuable comments and polish our work as follows:\n\n\n1. Hard to read.\n\nWe apologize for our poor and inconsistent arrangement of the paper. Due to the limited paper length, we put aside many parts and important information in the appendix. However, as you suggested, we should not concentrat... | [
-1,
-1,
-1,
-1,
5,
3,
3,
5
] | [
-1,
-1,
-1,
-1,
4,
3,
3,
2
] | [
"7oZEyk_DnjG",
"MZT5thjpk2s",
"4eGvve7AhdO",
"6peg8ihyIpv",
"iclr_2022_6Dz7RiRiMFd",
"iclr_2022_6Dz7RiRiMFd",
"iclr_2022_6Dz7RiRiMFd",
"iclr_2022_6Dz7RiRiMFd"
] |
iclr_2022_mk8AzPcd3x | BCDR: Betweenness Centrality-based Distance Resampling for Graph Shortest Distance Embedding | Along with unprecedented development in network analysis such as biomedical structure prediction and social relationship analysis, Shortest Distance Queries (SDQs) in graphs receive an increasing attention. Approximate algorithms of SDQs with reduced complexity are of vital importance to complex graph applications. Among different approaches, embedding-based distance prediction has made a breakthrough in both efficiency and accuracy, ascribing to the significant performance of Graph Representation Learning (GRL). Embedding-based distance prediction usually leverages truncated random walk followed by Pointwise Mutual Information (PMI)-based optimization to embed local structural features into a dense vector on each node and integrates with a subsequent predictor for global extraction of nodes' mutual shortest distance. It has several shortcomings. Random walk as an unstrained node sequence possesses a limited distance exploration, failing to take into account remote nodes under graph's shortest distance metric, while the PMI-based maximum likelihood optimization of node embeddings reflects excessively versatile local similarity, which incurs an adverse impact on the preservation of the exact shortest distance relation during the mapping from the original graph space to the embedded vector space.
To address these shortcomings, we propose in this paper a novel graph shortest distance embedding method called Betweenness Centrality-based Distance Resampling (BCDR). First, we prove in a statistical perspective that Betweenness Centrality(BC)-based random walk can occupy a wider distance range measured by the intrinsic metric in the graph domain due to its awareness of the path structure. Second, we perform Distance Resampling (DR) from original walk paths before maximum likelihood optimization instead of the PMI-based optimization and prove that this strategy preserves distance relation with respect to any calibrated node via steering optimization objective to reconstruct a global distance matrix. Our proposed method possesses a strong theoretical background and shows much better performance than existing methods when evaluated on a broad class of real-world graph datasets with large diameters in SDQ problems. It should also outperform existing methods in other graph structure-related applications. | Reject | The authors propose a new methods for graph shortest distance embedding method called BCDR based on betweenness centrality. Then they show that the method is competitive both theoretically than experimentally with existing work.
After a discussion with the reviewers and after considering the nice changes in the paper and explanation in the rebuttal we agree that the paper contains some very interesting ideas but it is not probably ready for publication. The comparison with previous works is, in fact, still a bit limited and it should be extended. In addition the algorithm should also be tested on larger datasets. | train | [
"WkttkCXHOP7",
"kciLyALuTi8",
"GZSyTWecKrR",
"cDc7OeuVLRB",
"IJ956PpB8om",
"FLx9rY0JKTm",
"0l6shLI_7jw",
"MGuvSBZVzb4K",
"7W63i-aCe0",
"N7TlK9t9-e",
"NcqNAY1AKBh",
"DJC93xgEv3r",
"7o3Ofw6Z9q",
"pnfh-jXYyR",
"SlJfkxzgEB2",
"P4nE_fJJXyY"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your valuable suggestions. We are willing to provide some quantitative evaluations regarding cycles for both density and resident node types. Further discussion will be added to the final paper soon.",
" Thank you very much for your reply. It could be great if you could add the information you men... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
6
] | [
-1,
-1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"kciLyALuTi8",
"0l6shLI_7jw",
"iclr_2022_mk8AzPcd3x",
"iclr_2022_mk8AzPcd3x",
"FLx9rY0JKTm",
"7W63i-aCe0",
"P4nE_fJJXyY",
"GZSyTWecKrR",
"N7TlK9t9-e",
"NcqNAY1AKBh",
"SlJfkxzgEB2",
"7o3Ofw6Z9q",
"pnfh-jXYyR",
"iclr_2022_mk8AzPcd3x",
"iclr_2022_mk8AzPcd3x",
"iclr_2022_mk8AzPcd3x"
] |
iclr_2022__gZf4NEuf0H | Towards Understanding the Condensation of Neural Networks at Initial Training | Implicit regularization is important for understanding the learning of neural networks (NNs). Empirical works show that input weights of hidden neurons (the input weight of a hidden neuron consists of the weight from its input layer to the hidden neuron and its bias term) condense on isolated orientations with a small initialization. The condensation dynamics implies that the training implicitly regularizes a NN towards one with much smaller effective size. In this work, we utilize multilayer networks to show that the maximal number of condensed orientations in the initial training stage is twice the multiplicity of the activation function, where ``multiplicity'' is multiple roots of activation function at origin. Our theoretical analysis confirms experiments for two cases, one is for the activation function of multiplicity one, which contains many common activation functions, and the other is for the layer with one-dimensional input. This work makes a step towards understanding how small initialization implicitly leads NNs to condensation at initial training stage, which lays a foundation for the future study of the nonlinear dynamics of NNs and its implicit regularization effect at a later stage of training. | Reject | *Summary:* Study isolated orientations of weights for networks with small initialization depending on multiplicity of activation functions.
*Strengths:*
- Interesting analysis of properties in early stages depending on activations.
*Weaknesses:*
- Reviewers found the settings limited.
- Reviewers found experiments limited.
*Discussion:*
In response to ejGJ authors reiterate scope of covered cases and submit to consideration that their experiments should be adequate for basic research. Reviewer acknowledges the response, but maintains their assessment (limited scope of theory, limited experiments). KucV found the experimental part limited in scope, the settings unclear (notion of early stage, compatibility with theory), and review of previous works lacking. KucV’s sincerely acknowledged authors for their efforts to address their comments and improving the manuscript, and raised their score, but maintained the experimental analysis is not fully convincing and unclear, and the comparison with prior work insufficient. zuZq also expressed concerns with the experiments and the notions and settings under consideration. They also raised questions about the comparison with standard initialization. Authors made efforts to address zuZq concerns. zuZq acknowledged this but maintained initial position that the article is just marginally above threshold. jDJ5 found the paper well written and the conclusion insightful. However, also raised concerns about the experiments the settings under consideration. Authors made efforts to address jDJ5’s concerns, who appreciated this but was not convinced to raise their score.
*Conclusion:*
Two reviewers consider this article marginally above and two more marginally below the acceptance threshold. I find the article draws an interesting connection pertaining an interesting topic. However, the reviews and discussion conclude that the article lacks in several regards that in my view still could and should be improved. Therefore I am recommending reject at this time. I encourage the authors to revise and resubmit. | train | [
"as5XaNl0Kim",
"MkVOU8SY-LY",
"gPW6X9aKC5I",
"7VAOeJoOJUl",
"dLyqDNeCLi4",
"I5-VdnNl6L9",
"TiK8cJW8FE9N",
"MXVnfZv2x2N",
"mI_SeQIEPY7",
"G_H4QlnXdrL8",
"IXHMcdLpt-",
"33pr8XlRQ0",
"g1B3eE5CnXWY",
"wl7BW1QbPT",
"-_dDfTOvrfQ",
"aq3znCLu-Oo"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper studies the relation between condensation of neurons activations and the multiplicity of the used non-linearity. Essentially, it is found an empirical link between the multiplicity of the activation function and the number of condensation directions. The paper aims at giving an intriguing analysis of the... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
3
] | [
"iclr_2022__gZf4NEuf0H",
"gPW6X9aKC5I",
"33pr8XlRQ0",
"TiK8cJW8FE9N",
"MXVnfZv2x2N",
"iclr_2022__gZf4NEuf0H",
"aq3znCLu-Oo",
"-_dDfTOvrfQ",
"-_dDfTOvrfQ",
"as5XaNl0Kim",
"as5XaNl0Kim",
"wl7BW1QbPT",
"wl7BW1QbPT",
"iclr_2022__gZf4NEuf0H",
"iclr_2022__gZf4NEuf0H",
"iclr_2022__gZf4NEuf0H"... |
iclr_2022_gdWQMQVJST | Neural Tangent Kernel Empowered Federated Learning | Federated learning (FL) is a privacy-preserving paradigm where multiple participants jointly solve a machine learning problem without sharing raw data. Unlike traditional distributed learning, a unique characteristic of FL is statistical heterogeneity, namely, data distributions across participants are different from each other. Meanwhile, recent advances in the interpretation of neural networks have seen a wide use of neural tangent kernel (NTK) for convergence and generalization analyses. In this paper, we propose a novel FL paradigm empowered by the NTK framework. The proposed paradigm addresses the challenge of statistical heterogeneity by transmitting update data that are more expressive than those of the traditional FL paradigms. Specifically, sample-wise Jacobian matrices, rather than model weights/gradients, are uploaded by participants. The server then constructs an empirical kernel matrix to update a global model without explicitly performing gradient descent. We further develop a variant with improved communication efficiency and enhanced privacy. Numerical results show that the proposed paradigm can achieve the same accuracy while reducing the number of communication rounds by an order of magnitude compared to federated averaging. | Reject | This paper proposes a novel Federated Learning (FL) framework that leverages the Neural Tangent Kernel (NTK), to replace the gradient-descent algorithm for optimization. Specifically, the workers upload the labels and the Jacobian matrices to the server, and the server uses the tools from the NTK to obtain a trained neural network. However since this could lead to increased communication cost and compromise of data privacy, the authors propose data sampling and random projection techniques to alleviate the problem. The authors provide a theoretical analysis that the proposed scheme has a faster convergence than FedAvg under specific assumptions, and experimentally validate that it significantly outperforms previous FL algorithms, achieving similar test accuracy to ideal centralized cases.
Pros
- The idea of using NTK for model optimization without gradient descent and use of it in the FL setting is both interesting and novel.
- The paper properly discusses and tackles the new challenges posed by the introduction of the new method.
- The paper is well-organized and clearly written, with sufficient discussion of related works and backgrounds.
Cons
- The proposed method puts heavy computational burdens on the server-side.
- The method violates the privacy preserving feature of FL by its nature, and while the proposed compression shuffling alleviates the concern, more discussion is necessary.
- Missing comparison against popular baselines such as FedProx and SCAFFOLD.
- The faster convergence of the proposed method in comparison to FedAvg depends on the learning rate and is not always true.
- There is a gap between the theory and practice, which makes the practicality of the algorithm still questionable.
Although the reviewers found the idea as novel, the proposed techniques for alleviating communication cost and privacy concerns convincing, and considered both the theoretical analysis and experimental validation thorough, all reviewers leaned toward rejection due to critical concerns unanswered. During the discussion period, the authors alleviate many of the minor concerns from the reviewers, but there were still remaining concerns on the gap between the theory and practice on its convergence behavior, and insufficient discussion of the privacy-preserving feature of the proposed method, as well as shifting of computation burdens to the server. Thus, the reviewers reached a consensus that the paper is not yet ready for publication.
Despite the low average score, the novelty of the idea and the quality of the paper is much higher than those of the accepted papers in my batch, and I strongly believe that this will become a high impact paper, if remaining concerns from the reviewers are properly resolved. | test | [
"15Q9QqWI9jK",
"NMubKHT1-sw",
"xjsvEdY1wzE",
"CBBZfUjaFjl",
"5ZYMAUioapF",
"YDQ0YqR86FS2",
"uF8HYKulTX",
"9-GVdqbswm",
"FzcYdyq-WVY4",
"MTwZkx16DHUD",
"Wba3eLq9r-W",
"e9WekRkqAzF",
"fQgIhSJaei",
"-T2HEGVHkny"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer KsCY,\n\nThe discussion deadline is only two days away, and we need to reach a consensus soon to recommend a decision. Please go over the responses from the authors on your comments, and revise your rating or review when necessary. \n\nThanks,\nArea Chair",
" Thank you to the authors for taking th... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
3
] | [
"e9WekRkqAzF",
"9-GVdqbswm",
"uF8HYKulTX",
"MTwZkx16DHUD",
"iclr_2022_gdWQMQVJST",
"e9WekRkqAzF",
"-T2HEGVHkny",
"FzcYdyq-WVY4",
"fQgIhSJaei",
"Wba3eLq9r-W",
"iclr_2022_gdWQMQVJST",
"iclr_2022_gdWQMQVJST",
"iclr_2022_gdWQMQVJST",
"iclr_2022_gdWQMQVJST"
] |
iclr_2022_MPoQtFC588n | RMNet: Equivalently Removing Residual Connection from Networks | Although residual connection enables training very deep neural networks, it is not friendly for online inference due to its multi-branch topology. This encourages many researchers to work on designing DNNs without residual connections at inference. For example, RepVGG re-parameterizes multi-branch topology to a VGG-like (single-branch) model when deploying, showing great performance when the network is relatively shallow. However, RepVGG can not transform ResNet to VGG equivalently because re-parameterizing methods can only be applied to linear Blocks and the non-linear layers (ReLU) have to be put outside of the residual connection which results in limited representation ability, especially for deeper networks. In this paper, we aim to remedy this problem and propose to remove the residual connection in a vanilla ResNet equivalently by a reserving and merging (RM) operation on ResBlock. Specifically, RM operation allows input feature maps to pass through the block while reserving their information and merges all the information at the end of each block, which can remove residual connection without changing original output. RMNet basically has two advantages: 1) it achieves a better accuracy-speed trade-off compared with ResNet and RepVGG; 2) its implementation makes it naturally friendly for high ratio network pruning. Extensive experiments are performed to verify the effectiveness of RMNet. We believe the ideology of RMNet can inspire many insights on model design for the community in the future. | Reject | This paper propose a reparametrization approach for pruning residual networks. The proposed approach replace the skip layer connections with feedforward layers, and show the equivalence to the original network. However, the current presentation is not very clear on the advantage of the proposed approach for pruning. As two networks are equivalent, pruning the reparameterized network can be transferred to pruning the residual network. The authors need to clarify how their reparametrized network is different from the residual network when being pruned. More ablation studies are also need to better justify their claim. | train | [
"XXTssJMxnBW",
"KlPvJ8BD5v",
"fNIt0loPgwn",
"6IEiyPS97ZF",
"nSgnHK2jdRW",
"6P_marhQKpv",
"0QwgsSmqq-A",
"qxRHdj4NCj",
"ogvfYJO61bG",
"QGs1Ryhm9Yb",
"OD_-Q66Avd",
"G50AvHWs23H",
"E9p2kqCrYcB",
"bl100tl2XT4"
] | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" \n- Summary of the paper\n\nIn this paper, we put forward an RM Operation for equivalent convert ResNet to VGG, MobileNetV2 to MobileNetV1. \n\nAlthough bringing additional channels, the equivalent transformation does not lose the representation capability of ResNet and MobileNetV2. And We propose the following a... | [
-1,
-1,
-1,
-1,
-1,
6,
-1,
6,
-1,
-1,
-1,
-1,
3,
8
] | [
-1,
-1,
-1,
-1,
-1,
5,
-1,
2,
-1,
-1,
-1,
-1,
5,
5
] | [
"E9p2kqCrYcB",
"E9p2kqCrYcB",
"OD_-Q66Avd",
"nSgnHK2jdRW",
"ogvfYJO61bG",
"iclr_2022_MPoQtFC588n",
"E9p2kqCrYcB",
"iclr_2022_MPoQtFC588n",
"E9p2kqCrYcB",
"6P_marhQKpv",
"bl100tl2XT4",
"qxRHdj4NCj",
"iclr_2022_MPoQtFC588n",
"iclr_2022_MPoQtFC588n"
] |
iclr_2022_i7O3VGpb7qZ | Code Editing from Few Exemplars by Adaptive Multi-Extent Composition | This paper considers the computer source code editing with few exemplars. The editing exemplar, containing the original and modified support code snippets, showcases a certain editorial style and implies the edit intention for a query code snippet. To achieve this, we propose a machine learning approach to adapt the editorial style derived from few exemplars to a query code snippet. Our learning approach combines edit representations extracted from editing exemplars and compositionally generalizes them to the query code snippet editing via multi-extent similarities ensemble. Specifically, we parse the code snippets using language-specific grammar into abstract syntax trees. We apply the similarities measurement in multiple extents from individual nodes to collective tree representations, and ensemble them through a similarity-ranking error estimator. We evaluate the proposed method on two datasets in C\# and Python languages and respectively show 8.0\% and 10.9\% absolute accuracy improvements compared to baselines. | Reject | This paper proposes learning to make stylistic code edits (semantics remains similar) based on information from a few exemplars instead of one. The proposed method first parses the code into abstract syntax trees and then use the multi-extent similarity ensemble. This was compared to a Graph2Edit baseline on C# fixer and pyfixer, which are datasets generated by rule-based transcompilers. The proposed method got around 10% accuracy improvement due to a combination of the method and using more than one examplar.
The reviewers find that any improvement due to more examplars to be expected and suggested that 1) one carefully chosen examplar is enough, and 2) that the need for multiple examplars means more practical difficulties in providing them in an application 3) the targets are all generated by rule-based methods and the benefits may not extend to a realistic case where the edits are not so clear and the reviewers wondered about the application value and the potential need for human evaluations. The authors argued that it is unexpectedly difficult to expand the base method to multiple examplars and users should be able to provide examplars in an application. The authors further provided additional results that addressed some of the reviewer's concerns but the reviewers did not change their evaluation.
Rejection is recommended based on reviewer consensus. | train | [
"14oJpbBtwzA",
"nDxms3lBy6Y",
"BSEbvIimnpx",
"trdb6L4Cd6",
"cYNTl54ZHIZ",
"Q4_Q9tjM-Gl",
"XNbTYpC2xiBD",
"-ai4jZIdObY",
"lmMaGq8Ocb-",
"H8YbOHbp6tt",
"F9xGjncSQEn",
"pSufGY_jgY5",
"fsIBhuRdZYkE",
"XvAeKWYB6X",
"7bM8CF_Cp12",
"abWP0DgSdhy",
"AE4dt_bOjP",
"quq8sE8nDPk"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewers,\n\nThanks again for all the insightful feedback from four reviewers! We take every comment seriously and have posted our point-to-point responses as well as the updated manuscript. We are eager to address any concerns regarding our paper, and are able to reply promptly till the end of the discussi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
2
] | [
"iclr_2022_i7O3VGpb7qZ",
"BSEbvIimnpx",
"pSufGY_jgY5",
"iclr_2022_i7O3VGpb7qZ",
"H8YbOHbp6tt",
"AE4dt_bOjP",
"cYNTl54ZHIZ",
"lmMaGq8Ocb-",
"abWP0DgSdhy",
"quq8sE8nDPk",
"Q4_Q9tjM-Gl",
"-ai4jZIdObY",
"XvAeKWYB6X",
"7bM8CF_Cp12",
"iclr_2022_i7O3VGpb7qZ",
"iclr_2022_i7O3VGpb7qZ",
"iclr_... |
iclr_2022_Z7VhFVRVqeU | Neural Bootstrapping Attention for Neural Processes | Neural Processes (NP) learn to fit a broad class of stochastic processes with neural networks. Modeling functional uncertainty is an important aspect of learning stochastic processes. Recently, Bootstrapping (Attentive) Neural Processes (B(A)NP) propose a bootstrap method to capture the functional uncertainty which can replace the latent variable in (Attentive) Neural Processes ((A)NP), thus overcoming the limitations of Gaussian assumption on the latent variable. However, B(A)NP conduct bootstrapping in a non-parallelizable and memory-inefficient way and fail to capture diverse patterns in the stochastic processes. Furthermore, we found that ANP and BANP both tend to overfit in some cases. To resolve these problems, we propose an efficient and easy-to-implement approach, Neural Bootstrapping Attentive Neural Processes (NeuBANP). NeuBANP learns to generate the bootstrap distribution of random functions by injecting multiple random weights into the encoder and the loss function. We evaluate our models in benchmark experiments including Bayesian optimization and contextual multi-armed bandit. NeuBANP achieves state-of-the-art performance in both of the sequential decision-making tasks, and this empirically shows that our method greatly improves the quality of functional uncertainty modeling. | Reject | Overall, the work is borderline with no reviewer feeling strongly for or against the paper.
The paper is well-written and proposes a simple approach, along with code for reproducibility. Criticism stems primarily in the work's technical novelty, being an incremental improvement of ideas from ANP and BANP, and related work like Neural Bootstrapper. In addition, the experimental validation involves regression on 1-to-2D functions, Bayesopt on synthetic functions, and contextual bandits on the synthetic wheel bandit problem. This is fairly toy, and multiple reviewers raise unaddressed concerns on the regression experiments. Ignoring orginality in and of itself (which is overvalued in conferences), the work does not yet provide a sufficiently convincing demonstration of its practical importance.
I recommend the authors use the reviewers' feedback to enhance their preprint should they aim to submit to a later venue. | train | [
"EqkX4uZXKJ4",
"RZdj87i8kuU",
"YBrljifivTW",
"5OxhLHgui6vv",
"ROiZpTwn4d",
"-cATRt-EA9o",
"-FGwozODq5V",
"4_TIuEfvor",
"YqPiQ6AOu0x",
"bCFeQifUzWr"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposed a new class of neural process algorithm called neural bootstrapping attention for neural processes (NeuBANP). This method utilizes efficient Neural Bootstrapping (NeuBoots) to improve Bootstrapped Attentive Neural Processes (BANP) in capturing functional uncertainty. Authors show that NeuBANP ac... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"iclr_2022_Z7VhFVRVqeU",
"4_TIuEfvor",
"EqkX4uZXKJ4",
"4_TIuEfvor",
"bCFeQifUzWr",
"EqkX4uZXKJ4",
"YqPiQ6AOu0x",
"iclr_2022_Z7VhFVRVqeU",
"iclr_2022_Z7VhFVRVqeU",
"iclr_2022_Z7VhFVRVqeU"
] |
iclr_2022_MTsBazXmX00 | Target Propagation via Regularized Inversion | Target Propagation (TP) algorithms compute targets instead of gradients along neural networks and propagate them backward in a way that is similar yet different than gradient back-propagation (BP). The idea was first presented as a perturbative alternative to back-propagation that may achieve greater accuracy in gradient evaluation when training multi-layer neural networks (LeCun et al., 1989). However, TP has remained more of a template algorithm with many variations than a well-identified algorithm. Revisiting insights of LeCun et al., (1989) and more recently of Lee et al. (2015), we present a simple version of target propagation based on regularized inversion of network layers, easily implementable in a differentiable programming framework. We compare its computational complexity to the one of BP and delineate the regimes in which TP can be attractive compared to BP. We show how our TP can be used to train recurrent neural networks with long sequences on various sequence modeling problems. The experimental results underscore the importance of regularization in TP in practice. | Reject | The paper proposes a new approach to target propagation that performs well when used in RNNs on sequence modelling. The paper falls into something of an uncanny valley, where it is different enough from the original TP to lose some of its motivation ("biological plausibility"), and is now directly competing with backprop. Claims about outperforming backprop require EXTREMELY thorough and rigorous experimental evidence. Without meaning to cast any doubt on the authors work, there have simply been a lot of papers over the years that saw some improvements over backprop in some setting, that have not generalised or even been reproducible. | train | [
"YHy6Rls_bf",
"48Kkr44yY1O",
"bEvaAbboOG2",
"QLb9l6RPlsW"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors study a variant of target propagation in which targets are computed by solving a sequence of minimization problems. Instead of resorting to iterative methods the authors propose to use an analytical solution. The algorithm is investigated as a recurrent neural network learning algorithm in a number of ... | [
3,
6,
5,
5
] | [
3,
3,
3,
4
] | [
"iclr_2022_MTsBazXmX00",
"iclr_2022_MTsBazXmX00",
"iclr_2022_MTsBazXmX00",
"iclr_2022_MTsBazXmX00"
] |
iclr_2022_V09OhBn8iR | Mitigating Dataset Bias Using Per-Sample Gradients From A Biased Classifier | The performance of deep neural networks (DNNs) primarily depends on the configuration of the training set. Specifically, biased training sets can make the trained model have unintended prejudice, which causes severe errors in the inference. Although several studies have addressed biased training using human supervision, few studies have been conducted without human knowledge because biased information cannot be easily extracted without human involvement. This study proposes a simple method to remove prejudice from a biased model without additional information and reconstruct a balanced training set based on the biased training set. The novel training method consists of three steps: (1) training biased DNNs, (2) measuring the contribution to the prejudicial training and generating balanced data batches to prevent the prejudice, (3) training de-biased DNNs with the balanced data. We test the training method based on various synthetic and real-world biased sets and discuss how gradients can efficiently detect minority samples. The experiment demonstrates that the detection method based on the gradients helps erase prejudice, resulting in improved inference accuracy by up to 19.58\% compared to the other state-of-the-art algorithm. | Reject | The manuscript proposes a method to adjust a biased model without requiring explicit annotations of biases. The main hypothesis of the manuscript is that there are differences in the direction and magnitude of the loss gradients for underrepresented samples compared to majority patterns in the training data. Based on this hypothesis, the manuscript proposes a rejection sampling method that tries to balance samples in a minibatch. However, a sample with a noisy label can appear to be an underrepresented sample with a correct label which can affect the proposed method. To tackle this, the manuscript also proposes a denoising module that successfully eliminates the effects of noisy labels on the debiasing algorithm proposed. Experiments are performed on various synthetic and real-world biased sets.
Positive aspects of the manuscript includes:
1. The results for varying levels of "bias" as well as the success of the proposed "denoising" setup is remarkable for the datasets tested;
2. An interesting hypothesis about the differences between gradient magnitude and direction (as measured by its proximity to an "average" gradient direction for all samples) look different for underrepresented samples as compared to "regular" data sample.
There are also several major concerns, including:
1. Lack of motivation and analysis on the connections between per-sample gradients and the majority/minority splits in more complex datasets;
2. The key assumption which motivates the proposed method, namely that minority samples have different gradient distributions than majority ones, deserves a more rigorous validation;
3. The "scalability" of the proposed method. One common theme across these datasets is that they can be "learned" (at least the biased version) with a much smaller amount of data than is present in the training set. Hence, a rejection sampling-based method can work even when the minority set diminishes;
4. Assumption about known bias. The proposed method assumes knowledge about which of the factors were biased so that a suitable "bias-only" model can be trained by leveraging only the "bias".
Post-rebuttal, reviewers stayed with borderline ratings, and they have suggested further improvements: more details about Biased MNIST numbers (to address concerns about known bias), and ablation studies on real datasets (e.g. compare to results without denoising, or denoised using FINE) to fully justify the practical importance of proposed denoising module. | train | [
"tP9BoMfbIOh",
"OhLdEYnwskS",
"Cbf8H91Bs8w",
"TO4CIePM9vR",
"yIzuM9WVmT",
"5eQ7jy8a1ld",
"ruYcMNzRqE",
"kAGBz2nV1TN",
"2eRNxAca7oo",
"jHA1P19jxeJV",
"rhPnyfszJ4B",
"37PSjJpvrnW",
"zD15UTP2vz9",
"5kImqqMSRj0",
"VKc9sUU3UY",
"NhC3vJTHoaM",
"90cCHIguL4u",
"YNPmB5BCui3",
"6jKtRI7sqYp... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
... | [
"The paper propose to use the per-sample gradients from the biased classifier to tackle the dataset bias. Following prior work, this work adopts a three-step approach: 1) train a classifier on biased dataset 2) identify which samples are from the minority and which are not 3) train a classifier with re-sampling. Th... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"iclr_2022_V09OhBn8iR",
"AUWrVG0em5Y",
"5kImqqMSRj0",
"kAGBz2nV1TN",
"AUWrVG0em5Y",
"UWfJPtWYDPW",
"tP9BoMfbIOh",
"jHA1P19jxeJV",
"iclr_2022_V09OhBn8iR",
"rhPnyfszJ4B",
"ruYcMNzRqE",
"AUWrVG0em5Y",
"AUWrVG0em5Y",
"6jKtRI7sqYp",
"UWfJPtWYDPW",
"UWfJPtWYDPW",
"6jKtRI7sqYp",
"tP9BoMfb... |
iclr_2022_GrvigKxc13E | Gradient play in stochastic games: stationary points, convergence, and sample complexity | We study the performance of the gradient play algorithm for stochastic games (SGs), where each agent tries to maximize its own total discounted reward by making decisions independently based on current state information which is shared between agents. Policies are directly parameterized by the probability of choosing a certain action at a given state. We show that Nash equilibria (NEs) and first-order stationary policies are equivalent in this setting, and give a local convergence rate around strict NEs. Further, for a subclass of SGs called Markov potential games (which includes the cooperative setting with identical rewards among agents as an important special case), we design a sample-based reinforcement learning algorithm and give a non-asymptotic global convergence rate analysis for both exact gradient play and our sample-based learning algorithm. Our result shows that the number of iterations to reach an $\epsilon$-NE scales linearly, instead of exponentially, with the number of agents. Local geometry and local stability are also considered, where we prove that strict NEs are local maxima of the total potential function and fully-mixed NEs are saddle points. | Reject | This paper looks at stochastic and Markov potential games. Its different results, including the sample complexity ones, are overall interesting and relevant for the community.
This said, we had an intense discussion as several of the aforementioned results - actually, closely related results, not the exact same one - already appeared elsewhere (in a ArXiv preprint, that has been publicly submitted at a previous conference). We do believe that there is no ethical/plagiarism issue here, however, it remained the question of "paternity" of these results.
We have decided to give the paternity of the sample complexity to the first paper (the ArXiv preprint) that proved it. We can therefore only credit to this paper the improvements in the sample complexity results (as they are not exactly similar).
However, this had an impact on the reviewers (and also mine and the SAC one) evaluation of the paper, when some substantial parts of the paper are "discarded".
Nonetheless, we think that this paper has its merits, although it does not reach the ICLR bar in its current form. We strongly encourage the authors to work on a revised version - incorporating the different comments of the reviewers - and to resubmit it at a further venue. | train | [
"F_grIsEAtxX",
"5EToRm3GRfw",
"kLPf7n-_bCm",
"urzyPUmTc6h",
"_EaS6QEJVyj",
"WZruV0iiYVM",
"273SGuS0ZH6",
"AV_BqOkqB4Z",
"pIT4-HdrIMe",
"1bF0UL_1ppU",
"fy57UohPDwE",
"RGaF-7vZooe",
"RLga-vhcVyy",
"12SSSm5-xR_",
"63HicXFpheS",
"XDfzjFtJMnz",
"F1cEQ-0M9KJ",
"uGMEkY-Xtz34",
"e-_RBPj3... | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
... | [
"The paper studies the global convergence of policy gradient for multiagent Markov potential games. The authors define a notion of potential markov games that seems natural and show convergence to Nash policies when the agents use gradient ascent independently on their policies (using direct parametrization). Obser... | [
8,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2022_GrvigKxc13E",
"urzyPUmTc6h",
"_EaS6QEJVyj",
"273SGuS0ZH6",
"63HicXFpheS",
"273SGuS0ZH6",
"AV_BqOkqB4Z",
"iclr_2022_GrvigKxc13E",
"1bF0UL_1ppU",
"RLga-vhcVyy",
"12SSSm5-xR_",
"F1cEQ-0M9KJ",
"63HicXFpheS",
"AV_BqOkqB4Z",
"qmN1Fu13p4Y",
"iclr_2022_GrvigKxc13E",
"g1QfEnycMRo-"... |
iclr_2022_Z0XiFAb_WDr | Communicating Natural Programs to Humans and Machines | The Abstraction and Reasoning Corpus (ARC) is a set of procedural tasks that tests an agent's ability to flexibly solve novel problems. While most ARC tasks are easy for humans, they are challenging for state-of-the-art AI. What makes building intelligent systems that can generalize to novel situations such as ARC difficult?
We posit that the answer might be found by studying the difference of \emph{language}: While humans readily generate and interpret instructions in a general language, computer systems are shackled to a narrow domain-specific language that they can precisely execute.
We present LARC, the \textit{Language-complete ARC}: a collection of natural language descriptions by a group of human participants who instruct each other on how to solve ARC tasks using language alone, which contains successful instructions for 88\% of the ARC tasks.
We analyze the collected instructions as `natural programs', finding that while they resemble computer programs, they are distinct in two ways: First, they contain a wide range of primitives; Second, they frequently leverage communicative strategies beyond directly executable codes. We demonstrate that these two distinctions prevent current program synthesis techniques from leveraging LARC to its full potential, and give concrete suggestions on how to build the next-generation program synthesizers. | Reject | The paper presents the Language-complete Abstraction and Reasoning Corpus (LARC): a collection of natural language descriptions by a group of human participants who instruct each other on how to solve tasks in the Abstraction and Reasoning Corpus (ARC).
Overall, the reviewers found the LARC benchmarks to be well-motivated. However, there were concerns about whether the value of the dataset to downstream tasks. Results from additional program synthesis systems (like Codex and GPT-Neo) would also make the paper stronger. I agree with these objections and am recommending rejection this time around. However, I encourage the authors to continue pursuing this line of work and resubmit after incorporating the feedback from this round. | train | [
"fJvn5wzg-SO",
"YRk5ZGGnGG3",
"LMUZnSQ0RsX",
"Bu6-02rosvc",
"HYBGWh_s4eu",
"OZWmc8IS9Io",
"MKLOeV2K_jW"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" \n“The technical contribution of this paper is weak since it mainly uses previous approaches to analyze the program synthesis results. I would like to see a new learning method to tackle this challenge.”\n\n- As stated in the overall response, such a method will require significant work beyond the current synthes... | [
-1,
-1,
-1,
-1,
5,
5,
5
] | [
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"HYBGWh_s4eu",
"OZWmc8IS9Io",
"MKLOeV2K_jW",
"iclr_2022_Z0XiFAb_WDr",
"iclr_2022_Z0XiFAb_WDr",
"iclr_2022_Z0XiFAb_WDr",
"iclr_2022_Z0XiFAb_WDr"
] |
iclr_2022_ba81PoR_k1p | One for Many: an Instagram inspired black-box adversarial attack | It is well known that deep learning models are susceptible to adversarial attacks. To produce more robust and effective attacks, we propose a nested evolutionary algorithm able to produce multi-network (decision-based) black-box adversarial attacks based on Instagram inspired image filters. Due to the multi-network training, the system reaches a high transferability rate of attacks and, due to the composition of image filters, it is able to bypass standard detection mechanisms. Moreover, this kind of attack is semantically robust: our filter composition cannot be distinguished from any other filter composition used extensively every day to enhance images; this raises new security issues and challenges for real-world systems. Experimental results demonstrate that the method is also effective against
ensemble-adversarially trained models and it has a low cost in terms of queries to the victim model. | Reject | This work proposed a nested evolutionary algorithm to choose image filters and filter parameters for back-box attacks, with the emphasize of high transferability.
After reading the manuscript, the comments of reviewers and the authors' responses, I think the main issues of this work include:
1. The limited novelty of the main idea, since there have been many filter-based attacks, and this work is very close to an existing work;
2. The solution is not new, since the evolutionary method is also well adopted in adversarial attacks;
3. Many many black-box attack methods are not cited and compared, though the authors argued that their perturbation upper bound are different such that they cannot be compared, which is not convincing;
4. The claimed high transferability is not well explained, maybe due to the model ensemble (as indicated by reviewer eN8o). Besides, many existing works that studied transferability are not cited and compared.
5. Experiments are inadequate. The authors added some results in the revised version, but the current shape is still not ready for publication.
Thus, my recommendation is reject. Hope the reviews can help to improve this work in future. | train | [
"2ThK61iJFLE",
"GJ5jsfw2vzG",
"dQ25OeqPTjj",
"6KFlvGw6C2",
"3eMQoI8wh2-",
"5C1-3BloYyn1",
"PgLp7kmKF_c",
"An90J6oYWwwg",
"qDYdQgSrleH",
"-QQbP1ZMBek",
"6NDjQ6MyTLg",
"pll20rdi2Y5",
"235e9z4SZ8t",
"JDc8UcmH7C",
"_cEXxrjuiL",
"6Sq9-V61LBx"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a new way to craft adversarial examples using Instagram inspired filters by means of evolutionary algorithms. The resulting adversarial attacks do not have recognizable artificial patterns as in other methods and show competitive performance including a difficult black-box scenario. \n Strengths... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
3,
3,
5
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"iclr_2022_ba81PoR_k1p",
"-QQbP1ZMBek",
"iclr_2022_ba81PoR_k1p",
"JDc8UcmH7C",
"An90J6oYWwwg",
"6Sq9-V61LBx",
"6Sq9-V61LBx",
"pll20rdi2Y5",
"iclr_2022_ba81PoR_k1p",
"2ThK61iJFLE",
"_cEXxrjuiL",
"qDYdQgSrleH",
"JDc8UcmH7C",
"iclr_2022_ba81PoR_k1p",
"iclr_2022_ba81PoR_k1p",
"iclr_2022_ba... |
iclr_2022_WXwg_9eRQ0T | MergeBERT: Program Merge Conflict Resolution via Neural Transformers | Collaborative software development is an integral part of the modern software development life cycle, essential to the success of large-scale software projects. When multiple developers make concurrent changes around the same lines of code, a merge conflict may occur.
Such conflicts stall pull requests and continuous integration pipelines for hours to several days, seriously hurting developer productivity.
In this paper, we introduce MergeBERT, a novel neural program merge framework based on the token-level three-way differencing and a transformer encoder model. Exploiting restricted nature of merge conflict resolutions, we reformulate the task of generating the resolution sequence as a classification task over a set of primitive merge patterns extracted from real-world merge commit data.
Our model achieves 63--68\% accuracy of merge resolution synthesis, yielding nearly a 3$\times$ performance improvement over existing structured, and 2$\times$ improvement over neural program merge tools. Finally, we demonstrate that MergeBERT is sufficiently flexible to work with source code files in Java, JavaScript, TypeScript, and C\# programming languages, and can generalize zero-shot to unseen languages. | Reject | This paper is a fair effort, making some headway on a problem of practical importance.
There was some discussion of scoping and whether the contribution was Machine-Learning-y enough.
I'm kind of ambivalent on that particular question: I think the general rule is that the further out-of-scope the paper seems, the better the results need to be for people to overlook it.
I think in this case, unfortunately, even the two most positive reviewers did not evince enough excitement about this paper for it to get accepted in light of the scoping concerns.
Given the various constraints involved, I don't think I can recommend acceptance.
In order to get it accepted into a future conference I would recommend either:
a) Submit to a more Software-Engineering focused venue
b) Really shore up the evaluation such that the reviewers sympathetic to this kind of paper will find it unimpeachable and score it more generously. | train | [
"oQOkOfcsh-H",
"AHus-1usngI",
"nKkkYP4w6B",
"W9p_1doojS2",
"-8IpHVzxK74",
"ZXBDk9akAo",
"ze8GNIr1jFR",
"x9pVWP9QzdH",
"HAKP6cbvvk",
"BP_cbM4Fzt6"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" > Where do the `1` and `1000` numbers in your statement come from? I don't know whether you're implying recall or precision by saying that. Of course, the recall is important but the level of importance depends on applications. For conflict resolution, it would easier for a user to resolve the conflict themselves... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"AHus-1usngI",
"ZXBDk9akAo",
"iclr_2022_WXwg_9eRQ0T",
"-8IpHVzxK74",
"BP_cbM4Fzt6",
"x9pVWP9QzdH",
"HAKP6cbvvk",
"iclr_2022_WXwg_9eRQ0T",
"iclr_2022_WXwg_9eRQ0T",
"iclr_2022_WXwg_9eRQ0T"
] |
iclr_2022_VKtGrkUvCR | Only tails matter: Average-Case Universality and Robustness in the Convex Regime | Recent works have studied the average convergence properties of first-order optimization methods on distributions of quadratic problems. The average-case framework allows a more fine-grained and representative analysis of convergence than usual worst-case results, in exchange for a more precise hypothesis over the data generating process, namely assuming knowledge of the expected spectral distribution (e.s.d) of the random matrix associated with the problem. In this work, we show that a problem's asymptotic average complexity is determined by the concentration of eigenvalues near the edges of the e.s.d. We argue that having à priori information on this concentration is a more grounded assumption than complete knowledge of the e.s.d., and that basing our analysis on the approximate concentration is effectively a middle ground between the coarseness of the worst-case convergence and this more unrealistic hypothesis. We introduce the Generalized Chebyshev method, asymptotically optimal under a hypothesis on this concentration, and globally optimal when the e.s.d. follows a Beta distribution. We compare its performance to classical optimization algorithms, such as Gradient Descent or Nesterov's scheme, and we show that, asymptotically, Nesterov's method is universally nearly-optimal in the average-case. | Reject | This paper studies the average convergence rate for first order methods on random quadratic optimization problems. Specifically it is a follow-up to work of Pedregosa and Scieur. They study the expected spectral distribution (e.s.d.) of the objective's Hessian and show asymptotic guarantees that work under some assumptions. In comparison to Pedregosa and Scieur, the main takeaway is that you only need to know the distribution at the edges as opposed to the entire spectrum in order to get the same improved convergence. However some reviewers felt that the contributions were oversold, and for example that Assumption 1 is quite restrictive. | train | [
"wKeyYNCT4UL",
"2QQlP77SR8r",
"P9IMwZkNfkb",
"KzCn6g_ibI",
"yCYCmbmp8TN",
"zv1mvArJXwh",
"2FhRaHSf18i",
"jqSGMFx8RWW",
"T4CLTnVive",
"htkatiMAnfM",
"rDWi_rWWwUa",
"A8UXp9_vW3c"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer,\nWe thank you for the time and effort spent reviewing our paper, it has allowed us to significatively improve our manuscript. We wish to inform you that we have updated our manuscript as well as responded to your main concerns in our rebuttal below. We have also written a general response to all re... | [
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8,
5
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"htkatiMAnfM",
"A8UXp9_vW3c",
"iclr_2022_VKtGrkUvCR",
"2FhRaHSf18i",
"iclr_2022_VKtGrkUvCR",
"A8UXp9_vW3c",
"P9IMwZkNfkb",
"htkatiMAnfM",
"rDWi_rWWwUa",
"iclr_2022_VKtGrkUvCR",
"iclr_2022_VKtGrkUvCR",
"iclr_2022_VKtGrkUvCR"
] |
iclr_2022_gccdzDu5Ur | Combining Diverse Feature Priors | To improve model generalization, model designers often restrict the features that their models use, either implicitly or explicitly. In this work, we explore the design space of leveraging such feature priors by viewing them as distinct perspectives on the data. Specifically, we find that models trained with diverse sets of explicit feature priors have less overlapping failure modes, and can thus be combined more effectively. Moreover, we demonstrate that jointly training such models on additional (unlabeled) data allows them to correct each other's mistakes, which, in turn, leads to better generalization and resilience to spurious correlations. | Reject | The manuscript proposes a framework for imposing priors on the feature extraction in deep visual processing models. The core contribution of this manuscript is the systematic formulation and investigation of how different, distinct feature priors leads to complementary feature representations that can be combined to provide more robust data representations. The manuscript uses early work on co-training and also the more recent work on self-supervision and self-training. Experiments are performed with classical shape- and texture-biased models, and show that diverse feature priors are able to robustly create a set of complementary data views.
Positive aspects of the manuscript includes:
1. The topic of this paper, creating and combining robust, generalizable and diverse feature representations, is of high relevance;
2. Positive results from co-training of groups image classification models designed to focus on shape but not texture or vice versa.
There are also several major concerns, including:
1. The ensemble results presented in section 3.2 are generated using very primitive ensembling techniques;
2. The absence-of-spurious-correlation-in-unlabelled data assumption be presented more cautiously;
3. Definition of feature prior;
4. Analysis on another domain aside from image classification.
During the rebuttal period, the Authors provided additional experiments using a more sophisticated method (“stacking”), and additional discussion of where spurious correlations are likely to occur. The manuscript has high rating variance. Some reviewers think that the manuscript lacks the technical novelty, and the results presented are the results of an empirical study. The focus of this manuscript is on two natural feature priors (i.e., shape and texture). It would strengthen the manuscript if the Authors can provide further analysis to emphasise the generality of the proposed framework that it could accommodate any two feature priors as long as they are sufficiently diverse. | train | [
"Wk7VWusNiR",
"43Usdck9Ifh",
"m7MjLVvGt7R",
"1dO9Lch9ix",
"3-dY5liTdy5",
"Zs0j0E54nGX",
"Mos2KnHLaSk",
"0ge7KTEiwx",
"I-lkEJIi9-C",
"o6BLRaz_PYN",
"7bC9SxbWvZY",
"aWEDeWw1PnM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your thorough rebuttal! I am satisfied with your replies to the other reviewers and would like to maintain my rating.",
" In light of the addition of stacking experiments and also the additional discussion of where spurious correlations are likely to occur, I have raised my score to a 6.",
"The ... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
3,
8,
3
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
2
] | [
"Zs0j0E54nGX",
"0ge7KTEiwx",
"iclr_2022_gccdzDu5Ur",
"3-dY5liTdy5",
"aWEDeWw1PnM",
"7bC9SxbWvZY",
"o6BLRaz_PYN",
"m7MjLVvGt7R",
"iclr_2022_gccdzDu5Ur",
"iclr_2022_gccdzDu5Ur",
"iclr_2022_gccdzDu5Ur",
"iclr_2022_gccdzDu5Ur"
] |
iclr_2022_EGtUVDm991w | Token Pooling in Vision Transformers | Despite the recent success in many applications, the high computational requirements of vision transformers limit their use in resource-constrained settings. While many existing methods improve the quadratic complexity of attention, in most vision transformers, self-attention is not the major computation bottleneck, e.g., more than 80% of the computation is spent on fully-connected layers. To improve the computational complexity of all layers, we propose a novel token downsampling method, called Token Pooling, efficiently exploiting redundancies in the images and intermediate token representations. We show that, under mild assumptions, softmax-attention acts as a high-dimensional low-pass (smoothing) filter. Thus, its output contains redundancy that can be pruned to achieve a better trade-off between the computational cost and accuracy. Our new technique accurately approximates a set of tokens by minimizing the reconstruction error caused by downsampling. We solve this optimization problem via cost-efficient clustering. We rigorously analyze and compare to prior downsampling methods. Our experiments show that Token Pooling significantly improves the cost-accuracy trade-off over the state-of-the-art downsampling. Token Pooling is a simple and effective operator that can benefit many architectures. Applied to DeiT, it achieves the same ImageNet top-1 accuracy using 42% fewer computations. | Reject | This submission receives mixed ratings initially. Two reviewers lean negatively while one reviewer is positive. The raised issues include
whether the proposed method can be adapted to other vision transformers, the design choice of pooling strategy, the computational time cost, the similarity to an existing work, and the influence of the proposed method on downstream tasks. In the rebuttal, the authors have addressed several issues such as pooling strategy analysis and time consumption.
There are still some issues not completely solved. The proposed method introduces K-mean clustering on tokens between different layers. The K-mean clustering is prevalent and the weighted clustering does not make the technical contribution sufficient. Also as a general token pooling operation, the proposed method shall be integrated into various types of vision transformers (e.g., vanilla ViT [a], ConViT [b]), rather than one single DeiT. Besides, the downstream tasks in DeiT are not conducted in the proposed method.
Overall, the AC feels the proposed method, although interesting, requires a major revision that addresses existing issues. The authors are suggested to further improve the current submission and welcome to submit for the next venue.
[a]. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. Dosovitskiy et al. ICLR 2021.
[b]. ConViT: Improving Vision Transformers with Soft Convolutional Inductive Biases. Ascoli et al. ICML 2021. | train | [
"8hoyBPSXXI1",
"UdAenHKcWc2",
"nrX897ro8E8",
"6IuMOR93jA",
"ngPJW2HVx-5",
"gOq4Fs8O3Or",
"gTyI53ucBvZ",
"rP3XMx3HImX",
"QSn90hr_KGZ",
"JLNxkaNljN4"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your comment.\n \nWhile it is not feasible to conduct and report additional experiments at this point, we would like to point out that even our DeiT-based implementation outperforms (or is on par with) several very recent and contemporeneus networks, such as pyramid architectures PVT, HVT, ConViT, S... | [
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"UdAenHKcWc2",
"gOq4Fs8O3Or",
"iclr_2022_EGtUVDm991w",
"rP3XMx3HImX",
"JLNxkaNljN4",
"QSn90hr_KGZ",
"iclr_2022_EGtUVDm991w",
"nrX897ro8E8",
"iclr_2022_EGtUVDm991w",
"iclr_2022_EGtUVDm991w"
] |
iclr_2022_iaqgio-pOv | Analogies and Feature Attributions for Model Agnostic Explanation of Similarity Learners | Post-hoc explanations for black box models have been studied extensively in classification and regression settings. However, explanations for models that output similarity between two inputs have received comparatively lesser attention. In this paper, we provide model agnostic local explanations for similarity learners applicable to tabular and text data. We first propose a method that provides feature attributions to explain the similarity between a pair of inputs as determined by a black box similarity learner. We then propose analogies as a new form of explanation in machine learning. Here the goal is to identify diverse analogous pairs of examples that share the same level of similarity as the input pair and provide insight into (latent) factors underlying the model's prediction. The selection of analogies can optionally leverage feature attributions, thus connecting the two forms of explanation while still maintaining complementarity. We prove that our analogy objective function is submodular, making the search for good-quality analogies efficient. We apply the proposed approaches to explain similarities between sentences as predicted by a state-of-the-art sentence encoder, and between patients in a healthcare utilization application. Efficacy is measured through quantitative evaluations, a careful user study, and examples of explanations. | Reject | This paper presents two novel approaches to provide explanations for the similarity between two samples based on 1) the importance measure of individual features and 2) some of the other pairs of examples used as analogies. The proposed approach to explain similarity prediction is a relatively less explored area, which makes the problem addressed and the proposed method unique. However, reviewers expressed concerns about evaluation methods and there were some concerns about the design choices that were not well motivated. The major issue is, as pointed out by the majority of the reviewers, the evaluation methods. Given the paper, reviews, and responses of the authors and the reviewers, it appears that there is certainly room for improvement for more convincing evaluation methodologies to convince a cross-section of machine learning researchers that the proposed approach advances the field. Overall, this is a good paper, but appears to be borderline to marginally below the threshold for the acceptance. | val | [
"5lgTObKOx-k",
"DsgzDmXfRPV",
"Vzs9a1uxaxy",
"Nqv1cfm4OMs",
"ZCDYNZBlUZ3",
"BOg0Uyf9rLc",
"tujvNHHBhcZ",
"tIEMF_46IMY",
"twQlSowjLDC",
"sT3nZIKjPzr",
"i0x1QMIkYSI",
"M5HpmdhBbfo",
"17vX_KiBbK",
"xpdig0VlWH0",
"ezA4pc9VIL",
"3eo6Dd8fA9",
"gba_cupxoum",
"m6eZBZpzI5e",
"48uwGO3g0k",... | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",... | [
" Thanks to all reviewers for their critical reviews and also for engaging with us. We believe we have addressed most of your concerns. Please let us know if any more clarifications or explanations are required. Thank you.",
" 1) **Evaluating different explanation techniques:** We would like to stress that the me... | [
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6,
1
] | [
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
5
] | [
"iclr_2022_iaqgio-pOv",
"Vzs9a1uxaxy",
"BOg0Uyf9rLc",
"i0x1QMIkYSI",
"iclr_2022_iaqgio-pOv",
"tujvNHHBhcZ",
"tIEMF_46IMY",
"BNWxWhH5SxE",
"sT3nZIKjPzr",
"Fk_pJTB7CLm",
"M5HpmdhBbfo",
"17vX_KiBbK",
"gba_cupxoum",
"iclr_2022_iaqgio-pOv",
"-w2o4Ox4mXw",
"rS2uA3ypdgi",
"3eo6Dd8fA9",
"i... |
iclr_2022_K3bGe_-aMV | Semantically Controllable Generation of Physical Scenes with Explicit Knowledge | Deep Generative Models (DGMs) are known for their superior capability in generating realistic data. Extending purely data-driven approaches, recent specialized DGMs satisfy additional controllable requirements such as embedding a traffic sign in a driving scene by manipulating patterns implicitly in the neuron or feature level. In this paper, we introduce a novel method to incorporate domain knowledge explicitly in the generation process to achieve the semantically controllable generation of physical scenes. We first categorize our knowledge into two types, the property of objects and the relationship among objects, to be consistent with the composition of natural scenes. We then propose a tree-structured generative model to learn hierarchical scene representation, whose nodes and edges naturally corresponded to the two types of knowledge, respectively. Consequently, explicit knowledge integration enables semantically controllable generation by imposing semantic rules on the properties of nodes and edges in the tree structure. We construct a synthetic example to illustrate the controllability and explainability of our method in a succinct setting. We further extend the synthetic example to realistic environments for autonomous vehicles and conduct extensive experiments: our method efficiently identifies adversarial physical scenes against different state-of-the-art 3D point cloud segmentation models and satisfies the traffic rules specified as the explicit knowledge.
| Reject | This paper presents a semantically controllable generative framework by integrating explicit knowledge. In particular, a tree-structured generative model is proposed based on knowledge categorization. Reviewers raised concerns about technical details, experiments, and missing references. In the revised paper, the authors provided more justifications and clarifications, such as the definition of adversarial attack. During the discussion, reviewers agreed that the previous concerns have been partially addressed, but there are still concerns on experiments, e.g., more recent work should be considered as baselines.
Overall, I recommend to reject this paper. I encourage the authors to take the review feedback into account and submit a future version to another venue. | train | [
"dmQhB0NLeJL",
"5vvRx33PSRu",
"X01oh_lA94v",
"WunikJcZeRm",
"Sl0GfmIrAr2",
"TcAaEdV_LVT",
"bC20eeddys4",
"G7Y4VV7N-bi",
"eiO4v6rezq5",
"qVG_Slo0W3n",
"WBj831izqj",
"_c4sa-pW0O0",
"Qcpwph6Ucpm",
"PNPMaier24p"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for carefully answering my concerns. However, the target application (adversarial attack) is not convincing in this scenario and hence I cannot assess the results. The paper is about \"scene generation\" and I was expeting to see results on the qualitty of the scene generation. The method is e... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6
] | [
-1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
2
] | [
"WBj831izqj",
"iclr_2022_K3bGe_-aMV",
"G7Y4VV7N-bi",
"bC20eeddys4",
"iclr_2022_K3bGe_-aMV",
"PNPMaier24p",
"PNPMaier24p",
"5vvRx33PSRu",
"Qcpwph6Ucpm",
"_c4sa-pW0O0",
"_c4sa-pW0O0",
"iclr_2022_K3bGe_-aMV",
"iclr_2022_K3bGe_-aMV",
"iclr_2022_K3bGe_-aMV"
] |
iclr_2022_tlkMbWBEAFb | Fully Steerable 3D Spherical Neurons | Emerging from low-level vision theory, steerable filters found their counterpart in prior work on steerable convolutional neural networks equivariant to rigid transformations. In our work, we propose a steerable feed-forward learning-based approach that consists of spherical decision surfaces and operates on point clouds. Focusing on 3D geometry, we derive a 3D steerability constraint for hypersphere neurons, which are obtained by conformal embedding of Euclidean space and have recently been revisited in the context of learning representations of point sets. Exploiting the rotational equivariance, we show how our model parameters are fully steerable at inference time. We use a synthetic point set and real-world 3D skeleton data to show how the proposed spherical filter banks enable making equivariant and, after online optimization, invariant class predictions for known point sets in unknown orientations. | Reject | The paper proposed a new kind of neurons for 3D spherical data classification. All the reviewers agreed that the new kind of neurons makes a good contribution. However, all the reviewers also agreed that the experiments are too weak: only at the proof-of-concept level and no comparison with the state of the arts. Only reviewer FkmY advocated accepting the paper because we need new ideas, and all other reviewers leaned towards rejecting the paper. The AC had exactly the same feeling as the reviewers. Particularly, the AC also agreed with reviewer FkmY that we should not look at experimental results only. However, the AC would like to point out that this by no means means that the experiments can be too simple. Note that this paper is to propose a new tool to improve classification performance, rather than a new theory to explain or predict something. So some basic requirements on the experiments are necessary. If the authors could provide comparison with the state of the arts and with reasonably good performance, not necessarily exceeding or even on par with the state of the arts (namely can be inferior but not too inferior so that others can believe adding engineering tricks could fill in the gap), the AC would consider accepting the paper. | val | [
"zHzRiiWZqde",
"t7I0dPvUM9o",
"V4O5kgyem0P",
"FegLRQsTZ4f",
"oSdnIA6rYnL",
"VSiWhL5gpQ",
"F5pXKmqdosG",
"uDo-nwMN6Uo",
"cJig2UjKNV",
"LT0UcC09r8R",
"RAeP7nOFg-Z",
"GrQRGj-HtEw",
"TSlGn_Z0OA",
"vCSOE8KT85u",
"ECH32zMH3Jo",
"XivOiAv9-u",
"snRTmUNXGmn",
"DorI9EGG0-o",
"bbakuwvdq9A",... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"au... | [
" Thank you for your additional feedback.\n\nWe agree that requiring ground truth knowledge on the rotation limits the practical use of the method. \n\nHowever, the main point of the experiment in Section 5.3 was to show that the ancestor model with rectified input and the steered model produce the same output. ",
... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
5,
8,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
-1,
-1,
-1,
2,
4,
3,
2
] | [
"FegLRQsTZ4f",
"oSdnIA6rYnL",
"VftTBZEr6Wz",
"snRTmUNXGmn",
"ECH32zMH3Jo",
"iclr_2022_tlkMbWBEAFb",
"TSlGn_Z0OA",
"7C_abdSeQX",
"cKKTeKKhJu",
"VftTBZEr6Wz",
"XivOiAv9-u",
"gHqluWgtmJR",
"XivOiAv9-u",
"gHqluWgtmJR",
"cKKTeKKhJu",
"iclr_2022_tlkMbWBEAFb",
"7C_abdSeQX",
"VftTBZEr6Wz",... |
iclr_2022_jWaLuyg6OEw | First-Order Optimization Inspired from Finite-Time Convergent Flows | In this paper, we investigate the performance of two first-order optimization algorithms, obtained from forward Euler discretization of finite-time optimization flows. These flows are the rescaled-gradient flow (RGF) and the signed-gradient flow (SGF), and consist of non-Lipscthiz or discontinuous dynamical systems that converge locally in finite time to the minima of gradient-dominated functions. We propose an Euler discretization for these first-order finite-time flows, and provide convergence guarantees, in the deterministic and the stochastic setting. We then apply the proposed algorithms to academic examples, as well as deep neural networks training, where we empirically test their performances on the SVHN dataset. Our results show that our schemes demonstrate faster convergences against standard optimization alternatives. | Reject | This paper starts from the observation that a certain class of rescaled gradient flows - referred to in the paper as RGF and SGF - converge to a solution in finite time (Wibisono et al., 2016; Romero and Benosman, 2020). As a result, it is plausible to ask whether the Euler discretizations of these flows - viewed now as optimization algorithms - enjoy superior convergence properties or not. The authors' main results establish a linear convergence rate under a certain gradient dominance condition, as well as linear convergence to an $\epsilon$-neighborhood of a solution if the algorithms are run with minibatch gradients of size $O(1/\epsilon^\rho)$ for some positive exponent $\rho>0$.
The reviewers raised several concerns regarding the motivation of the authors' work and the comparison of the rates they obtain to other related papers in the literature. The reviewers that raised these concerns were not convinced by the authors' rebuttal and maintained their original assessment during the discussion phase.
From my own reading of the paper, I was perplexed by the fact that the authors did not compare the rates they obtained to existing results in the context of KL optimization, such as the cited paper by Attouch and Bolte and many follow-up works in the area. Also, in the stochastic part, while the authors argue that "utilizing batches with size dependent on $1/\epsilon$ is absolutely reasonable and usual, in both theory and practice", it should be noted that a high accuracy requirement (small $\epsilon$) could lead to completely unreasonable batch sizes (effectively exceeding the size of the dataset, especially when $\psi$ is small). Thus, while it is possible to achieve convergence to arbitrarily high accuracy with a sufficiently small step-size for a _fixed_ batch size, the rate of this convergence cannot be linear overall - in contrast to the way that the authors frame their result.
In view of the above, I concur that the paper does not clear the bar for ICLR, so I am recommending rejection at this stage (but I would encourage the authors to resubmit a suitably revised version of their paper at the next opportunity). | train | [
"Ch56mzqP36K",
"fzu8tPG-Ji_",
"fLMIS5SmXUT",
"3VhA4WKTrgF",
"qAAKU9eU2f",
"MjT0lAZi8v7",
"vcKkBVjb7cR",
"Mo38iI6kAVQ",
"Jw7K33LnpRh",
"pqUmq2P0yPm",
"Nnwg4Qa0I7e",
"mFxXij-S4kD",
"nAjqWZbwvl",
"_kD-PId402s",
"rmnrfD0vUpm",
"yMolMB8EHtm",
"LSRyn_xH0ky",
"61zZ_ukuWKw",
"vbcFWhZ_u_D... | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We really appreciate your opinions in our work, even though there is still some disagreement between us on the contributions in our work, which may be hard to solve at this time. We are very glad that our message seems to be delivered clearly and that we engaged in this discussion. We will make further revisions ... | [
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"fzu8tPG-Ji_",
"fLMIS5SmXUT",
"vcKkBVjb7cR",
"OX6qtKPRG07",
"U7PkT9f-XVD",
"iclr_2022_jWaLuyg6OEw",
"vbcFWhZ_u_D",
"Nnwg4Qa0I7e",
"U7PkT9f-XVD",
"LSRyn_xH0ky",
"iclr_2022_jWaLuyg6OEw",
"OX6qtKPRG07",
"Jw7K33LnpRh",
"rmnrfD0vUpm",
"MjT0lAZi8v7",
"pqUmq2P0yPm",
"61zZ_ukuWKw",
"vbcFWh... |
iclr_2022_ZFIT_sGjPJ | Data-Dependent Randomized Smoothing | Randomized smoothing is a recent technique that achieves state-of-art performance in training certifiably robust deep neural networks. While the smoothing family of distributions is often connected to the choice of the norm used for certification, the parameters of these distributions are always set as global hyper parameters independent from the input data on which a network is certified. In this work, we revisit Gaussian randomized smoothing and show that the variance of the Gaussian distribution can be optimized at each input so as to maximize the certification radius for the construction of the smooth classifier. We also propose a simple memory-based approach to certifying the resultant smooth classifier. This new approach is generic, parameter-free, and easy to implement. In fact, we show that our data dependent framework can be seamlessly incorporated into 3 randomized smoothing approaches, leading to consistent improved certified accuracy. When this framework is used in the training routine of these approaches followed by a data dependent certification, we achieve 9% and 6% improvement over the certified accuracy of the strongest baseline for a radius of 0.5 on CIFAR10 and ImageNet. | Reject | The idea to adapt the noise variance in the certification of a base classifier sounds natural and interesting, but unfortunately fundamentally flawed, as correctly pointed out by Reviewer viFi (also acknowledged in the authors' response): the author's main algorithm does not lead to any theoretical certification while the empirical fix (based on memory), however successful in one's experiment, does not rule out the possibility of failure when future test samples flood in. Incidentally, I believe this fallacy may have also answered Reviewer Xsdx's question (why this has not been done before). I agree with Reviewer viFi that the writing of this work is a bit deceptive and will require significant change. In particular, one cannot wave hands at claims on certification: you need to formally prove the memory-based empirical fix will provably certify a region for what classifier and under what assumption. Therefore, the current draft cannot be accepted. Please consider rethinking about the idea and rewriting the paper according to the reviewers' comments. | train | [
"s49Xvxx4Rt",
"rAxjyUNYl7-",
"Ca1kbWgmPE5",
"CJ-WPh-pY7",
"QdUFGlwAdJN",
"cM21Om56NGd",
"Ksy_2GuI0tA",
"oqJDabIii6g",
"lGl8dbPYADd",
"TBgT6AvruWx",
"EDvH0HYnrnn",
"z6lvOAvzwU1",
"lb0KdA0v01X"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your reply.\n\nLet me elaborate why Eq. (2) still bothers me: As you correctly state in your reply, it \"holds\" when using the same sigma globally. However, this is exactly what you are NOT doing. Thus, for me, Eq. (2) is to some degree arbitrary and one could also use some other function to optimize ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
1,
3
] | [
"lGl8dbPYADd",
"Ca1kbWgmPE5",
"oqJDabIii6g",
"iclr_2022_ZFIT_sGjPJ",
"iclr_2022_ZFIT_sGjPJ",
"lb0KdA0v01X",
"z6lvOAvzwU1",
"EDvH0HYnrnn",
"TBgT6AvruWx",
"iclr_2022_ZFIT_sGjPJ",
"iclr_2022_ZFIT_sGjPJ",
"iclr_2022_ZFIT_sGjPJ",
"iclr_2022_ZFIT_sGjPJ"
] |
iclr_2022_ZAA0Ol4z2i4 | Explaining Off-Policy Actor-Critic From A Bias-Variance Perspective | Off-policy Actor-Critic algorithms have demonstrated phenomenal experimental performance but still require better explanations. To this end, we show its policy evaluation error on the distribution of transitions decomposes into: a Bellman error, a bias from policy mismatch, and a variance term from sampling. By comparing the magnitude of bias and variance, we explain the success of the Emphasizing Recent Experience sampling and 1/age weighted sampling. Both sampling strategies yield smaller bias and variance and are hence preferable to uniform sampling. | Reject | There was some disagreement between reviewers regarding the quality of the paper. Reading the paper, I had difficulty understanding what you were trying to achieve and, similarly to reviewer VgPP, felt the experimental section to be weak. While I can appreciate that compute is expensive, it would have been relevant to design more controllable continuous environments to get cleaner results in addition to those on MuJoCo. As it is, there is a lot of noise (and Table 1 does not contain confidence intervals) which, added to the general brittleness of RL algorithms, makes the experiments lack convincing power.
I encourage the authors to take all the feedback from the authors into account and resubmit an improved version of their work to another conference. | train | [
"2LHFZ9lGawv",
"x73yu2Ds_2h",
"wku011rCtG2",
"tWyoc3l2aPG",
"toxywqaNlG",
"b2qizkRSJ5s",
"Z4kA7bbpRL",
"Lw02O7gbY7y",
"q_LzsG_hJJt"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank Reviewer b574 for the valuable comments. We’ve revised the paper to strengthen the following important points. We hope this helps resolve the general idea and motivation of this work.\n\n(a) Connection from theory to practice\n\nWe re-write section 5.2 as “policy evaluation error under non-uniform weight... | [
-1,
-1,
-1,
-1,
-1,
8,
3,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
4,
5,
3,
3
] | [
"q_LzsG_hJJt",
"wku011rCtG2",
"Lw02O7gbY7y",
"Z4kA7bbpRL",
"b2qizkRSJ5s",
"iclr_2022_ZAA0Ol4z2i4",
"iclr_2022_ZAA0Ol4z2i4",
"iclr_2022_ZAA0Ol4z2i4",
"iclr_2022_ZAA0Ol4z2i4"
] |
iclr_2022_TD-5kgf13mH | Sparse MoEs meet Efficient Ensembles | Machine learning models based on the aggregated outputs of submodels, either at the activation or prediction levels, lead to strong performance. We study the interplay of two popular classes of such models: ensembles of neural networks and sparse mixture of experts (sparse MoEs). First, we show that these two approaches have complementary features whose combination is beneficial. Then, we present partitioned batch ensembles, an efficient ensemble of sparse MoEs that takes the best of both classes of models. Extensive experiments on fine-tuned vision transformers demonstrate the accuracy, log-likelihood, few-shot learning, robustness, and uncertainty calibration improvements of our approach over several challenging baselines. Partitioned batch ensembles not only scale to models with up to 2.7B parameters, but also provide larger performance gains for larger models. | Reject | This paper experiments with a combination of Sparse MoEs and Ensembles on the Vision Transformer (ViT), showing improved performance. To efficiently combine Sparse MoEs and Ensembles, the paper presents Partitioned Batch Ensembles (PBE), where the parameters of the self-attention layers are shared, and an ensemble of Sparse MOEs are used for the MLP layers of the Transformer blocks.
While reviewers agree that the proposed approach is interesting, they also point out several weaknesses, such as the limited novelty of the proposed method (a simple combination of existing techniques) and small experimental gains. They also pointed out several weakness related to the experimental part. While the authors responded in a very detailed manner to several of these points and presented several additional experiments, I feel this paper will benefit from consolidating all these new results and going through another round of reviews. | train | [
"K1iOLaot5Ye",
"WGI5jOl1CQV",
"mSNZ9iUP0qc",
"_n3iIujQHJ7",
"3vkObnVDGC",
"NIxgttXGRF5",
"XBnudmxJiYY",
"pAQtWH54MvN",
"qTv3-ZiL_kG",
"7hmtHM0veOg",
"CWnReY64NCE",
"EDPAyp_25in",
"UEFfjC2Cr-V",
"hrJK9TWnTP5",
"fvfKZ1xhNUU",
"bfuB_Cnnwm6",
"yLMXq6Qoeyz",
"ENNBBFEvMoC"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Regarding FLOPs values. pBE *does* require more FLOPs than V-MoE. And indeed, this is due to the tiling of the features. However, please note that the tiling only happens towards the end of the network because of the \"Last-$n$\" structure of the underlying V-MoE. Thus, the increase in FLOPs compared to V-MoE is ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
5
] | [
"WGI5jOl1CQV",
"qTv3-ZiL_kG",
"pAQtWH54MvN",
"pAQtWH54MvN",
"pAQtWH54MvN",
"XBnudmxJiYY",
"pAQtWH54MvN",
"fvfKZ1xhNUU",
"ENNBBFEvMoC",
"CWnReY64NCE",
"EDPAyp_25in",
"UEFfjC2Cr-V",
"yLMXq6Qoeyz",
"bfuB_Cnnwm6",
"iclr_2022_TD-5kgf13mH",
"iclr_2022_TD-5kgf13mH",
"iclr_2022_TD-5kgf13mH",... |
iclr_2022__2CLeIIYMPd | Discovering Latent Network Topology in Contextualized Representations with Randomized Dynamic Programming | The discovery of large-scale discrete latent structures is crucial for understanding the fundamental generative processes of language. In this work, we use structured latent variables to study the representation space of contextualized embeddings and gain insight into the hidden topology of pretrained language models. However, existing methods are severely limited by issues of scalability and efficiency as working with large combinatorial spaces requires expensive memory consumption. We address this challenge by proposing a Randomized Dynamic Programming (RDP) algorithm for the approximate inference of structured models with DP-style exact computation (e.g., Forward-Backward). Our technique samples a subset of DP paths reducing memory complexity to as small as one percent. We use RDP to analyze the representation space of pretrained language models, discovering a large-scale latent network in a fully unsupervised way. The induced latent states not only serve as anchors marking the topology of the space (neighbors and connectivity), but also reveal linguistic properties related to syntax, morphology, and semantics. We also show that traversing this latent network yields unsupervised paraphrase generation. | Reject | The paper introduces a technique for randomised dynamic programming and uses it to scale a latent variable model that enables interpreting the hidden states of large pre-trained models for text representation and generation.
The current version needs to be improved with regards to scope, which can be seen by the various confusions that it triggered, and which the authors tried to address in the rebuttal phase. It is somewhat unclear to all of us (myself included) whether the paper is about i) randomised dynamic programming (RDP), or ii) RDP's role in a particular LVM (with a CRF posterior approximation), or iii) RDP+LVM's ability to interpret deep Transformer models? Empirically, the paper is much more about (iii), somewhat about (ii, e.g. Table 1), very little about (i, e.g. Figure 2).
*Because the scope is now confusing*, the current version sometimes comes across as relatively incremental or even incomplete:
* Should the authors embrace interpretation. The overall strategy is *very interesting*, and it scales a neat model precisely in the way it needs to be scaled to do what it's meant to do, but this would change the focus of the paper, RDP would be all but a means to an end, and perhaps other techniques for interpretation would be needed.
* Should the authors embrace RDP itself (disentangled from its application to model interpretation). Some of us felt like the randomisation technique on its own is not too surprising (given the work of [Liu et al](http://proceedings.mlr.press/v97/liu19c/liu19c.pdf), for example), and, regardless of that, to push for RDP's significance, the paper would need more comparisons. The only alternative to RDP investigated in the paper is a heuristic top-K gradient. There are deterministic gradients that are less heuristic, and which may become unbiased eventually as training progresses, see for example [[1]](https://aclanthology.org/D18-1108/) and [[2]](https://papers.nips.cc/paper/2020/hash/887caadc3642e304ede659b734f79b00-Abstract.html).
In the first round of reviews there were some comments that questioned the paper's fitness to ICLR, I would like to remark that this has been clarified, and the paper targets a problem of clear relevance to the conference.
I would personally like to add a minor comment: it would be nice to acknowledge some older literature on randomised DPs (see for example [[3]](https://papers.nips.cc/paper/2009/hash/e515df0d202ae52fcebb14295743063b-Abstract.html) and [[4]](https://aclanthology.org/N10-1028/)). | train | [
"4KMXckJD_yB",
"LuTWi5MdiPc",
"JCaAYJ6Ft9G",
"cDxuqZG4Kfu",
"Vhje29N3CHJ",
"EW4BO2n8oll",
"yMXBqibXXk7",
"bIPqOIQipHb",
"Lu86vLnVmu",
"uP-4UDS8yI8",
"nuD2stAbIa2",
"mHZM-2V1oOP",
"Cut1Tv54zVi",
"fY9JMXSf7c0",
"1L9jR3k06tD",
"NIzGF6no2Iu",
"R-D6wQ787Ow",
"EhxzSZe81jV",
"fgLDnX2Ys8... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I appreciate the response from the the authors, however my score recommendation remains the same. As other reviewers have mentioned, the contributions while interesting are not presented clearly to show novelty and significance. I would recommend that the authors make structural changes to the paper to emphasize ... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
5
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"LufvAwlmTNI",
"iclr_2022__2CLeIIYMPd",
"EhxzSZe81jV",
"fgLDnX2Ys8r",
"LuTWi5MdiPc",
"LufvAwlmTNI",
"LuTWi5MdiPc",
"EhxzSZe81jV",
"EhxzSZe81jV",
"iclr_2022__2CLeIIYMPd",
"EhxzSZe81jV",
"EhxzSZe81jV",
"fgLDnX2Ys8r",
"fgLDnX2Ys8r",
"fgLDnX2Ys8r",
"fgLDnX2Ys8r",
"LufvAwlmTNI",
"iclr_2... |
iclr_2022_FqRHeQTDU5N | Learning to Give Checkable Answers with Prover-Verifier Games | Our ability to know when to trust the decisions made by machine learning systems has not kept up with the staggering improvements in their performance, limiting their applicability in high-stakes applications. We propose Prover-Verifier Games (PVGs), a game-theoretic framework to encourage neural networks to solve decision problems in a verifiable manner. The PVG consists of two learners with competing objectives: a trusted verifier network tries to choose the correct answer, and a more powerful but untrusted prover network attempts to persuade the verifier of a particular answer, regardless of its correctness. The goal is for a reliable justification protocol to emerge from this game. We analyze several variants of the basic framework, including both simultaneous and sequential games, and narrow the space down to a subset of games which provably have the desired equilibria. We then develop practical instantiations of the PVG for several algorithmic tasks, and show that in practice, the verifier is able to receive useful and reliable information from an untrusted prover. Importantly, the protocol still works even when the verifier is frozen and the prover's message is directly optimized to convince the verifier. | Reject | The reviewers are split about this paper and did not come to a consensus: on one hand they agreed that the paper has valuable theoretical contributions and addresses an important problem in current ML literature, on the other hand they would have liked to see empirical results on a real-world problem setting. After going through the paper and the discussion I have decided to vote to reject for the following reason: I believe the reviewers' concerns about empirical results is not just a request for applying this to more datasets (which is easy to satisfy and I don't think is grounds for rejection), but is actually for a clearer connection for how this work would be used in the machine learning problems described in the introduction and related work sections. What would really help this paper is a real-world running example, in place of the blue plus example, in Figure 1 (I think the blue plus problem is still a useful experimental tool and should be evaluated, but it doesn't clarify the real-world use-cases of this work. This led the reviewers to look to the experimental section for clarification on this, but this wasn't clarified there either. The authors' response to these concerns was an out-of-scope argument: the goal of this paper is to derive/test theoretical results, and there are a number of possible use cases we could apply this to. The authors argue that the current work sends 'a strong signal to the ICLR community that the Prover-Verifier Game is interesting and promising'. I'm sorry but I disagree here: the authors need to do more to convince the ICLR community that this is a framework that will solve outstanding problems in ML. This is solved if the authors (a) run their approach on a real-world dataset in a paper they cite in the related work, (b) they include baselines in this experiment, and (c) if they add this as a running example throughout with a figure that explains this real-world example. With these additions the paper will be a much stronger submission. | train | [
"Er8jcogfgS6",
"C16zDgCfNAC",
"RzWdwWPu2p0",
"ajO1Z1x2vg2",
"3PwSTOPJi7O",
"igwuwBf3I6A",
"rgA4jI_fLUR",
"zmQuhVOmQ0x",
"orBXBh9MgW7",
"McS-yFhBRnZ",
"zPXH7G0Tub2",
"4NpV5C7w6y",
"A0Sv_zyVLCO",
"T1_DwxrvMMq",
"unwDEW0K7hg"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose a framework for training networks, such that justification of the answers can automatically emerge from it. Specifically, they propose a training framework where a prover's objective is to persuade the verifier network while the verifier network's objective is answering correctly based on both ... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
1
] | [
"iclr_2022_FqRHeQTDU5N",
"RzWdwWPu2p0",
"zmQuhVOmQ0x",
"zmQuhVOmQ0x",
"orBXBh9MgW7",
"McS-yFhBRnZ",
"zPXH7G0Tub2",
"Er8jcogfgS6",
"unwDEW0K7hg",
"T1_DwxrvMMq",
"A0Sv_zyVLCO",
"iclr_2022_FqRHeQTDU5N",
"iclr_2022_FqRHeQTDU5N",
"iclr_2022_FqRHeQTDU5N",
"iclr_2022_FqRHeQTDU5N"
] |
iclr_2022_wv6g8fWLX2q | TAMP-S2GCNets: Coupling Time-Aware Multipersistence Knowledge Representation with Spatio-Supra Graph Convolutional Networks for Time-Series Forecasting | Graph Neural Networks (GNNs) are proven to be a powerful machinery for learning complex dependencies in multivariate spatio-temporal processes. However, most existing GNNs have inherently static architectures, and as a result, do not explicitly account for time dependencies of the encoded knowledge and are limited in their ability to simultaneously infer latent time-conditioned relations among entities. We postulate that such hidden time-conditioned properties may be captured by the tools of multipersistence, i.e, a emerging machinery in topological data analysis which allows us to quantify dynamics of the data shape along multiple geometric dimensions.
We make the first step toward integrating the two rising research directions, that is, time-aware deep learning and multipersistence, and propose a new model, Time-Aware Multipersistence Spatio-Supra Graph Convolutional Network (TAMP-S2GCNets). We summarize inherent time-conditioned topological properties of the data as time-aware multipersistence Euler-Poincar\'e surface and prove its stability. We then construct a supragraph convolution module which simultaneously accounts for the extracted intra- and inter- spatio-temporal dependencies in the data. Our extensive experiments on highway traffic flow, Ethereum token prices, and COVID-19 hospitalizations demonstrate that TAMP-S2GCNets outperforms the state-of-the-art tools in multivariate time series forecasting tasks. | Accept (Spotlight) | The authors introduce the Time-Aware Multiperistence Spatio-Supra Graph CN that uses
multiparameter persistence to capture the latent time dependencies in spatio-temporal data.
This is a novel and experimentally well-supported work. The novelty is achieved by combining research in topological analysis (multipersistence) and neural networks. Technically sound. Clear presentation and extensive experimental section.
Reviewers were uniformly positive, agreeing that the approach was interesting and well-motivated, and the experiments convincing. Some concerns that were raised were successfully addressed by the authors and revised in the manuscript.
Happy to recommend acceptance. A veru nice paper! | train | [
"BN0Wf2DM7K_",
"9h8ZxntsO4L",
"X-BVcg3sIt2"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors suggest the use of multiparameter persistence for \nexplicitly capturing he latent time dependencies in spatio-temporal data. \nThey propose the Time-Aware Multipersistence Spatio-Supra Graph \nConvolutional Network that uses the mutlipersistence as extracted by the Euler-Poincare \nsurface and allows ... | [
8,
8,
8
] | [
4,
4,
3
] | [
"iclr_2022_wv6g8fWLX2q",
"iclr_2022_wv6g8fWLX2q",
"iclr_2022_wv6g8fWLX2q"
] |
iclr_2022_N3KYKkSvciP | Understanding Square Loss in Training Overparametrized Neural Network Classifiers | Deep learning has achieved many breakthroughs in modern classification tasks. Numerous architectures have been proposed for different data structures but when it comes to the loss function, the cross-entropy loss is the predominant choice. Recently, several alternative losses have seen revived interests for deep classifiers. In particular, empirical evidence seems to promote square loss but a theoretical justification is still lacking. In this work, we contribute to the theoretical understanding of square loss in classification by systematically investigating how it performs for overparametrized neural networks in the neural tangent kernel (NTK) regime. Interesting properties regarding the generalization error, robustness, and calibration error are revealed. We consider two cases, according to whether classes are separable or not. In the general non-separable case, fast convergence rate is established for both misclassification rate and calibration error. When classes are separable, the misclassification rate improves to be exponentially fast. Further, the resulting margin is proven to be lower bounded away from zero, providing theoretical guarantees for robustness. We expect our findings to hold beyond the NTK regime and translate to practical settings. To this end, we conduct extensive empirical studies on practical neural networks, demonstrating the effectiveness of square loss in both synthetic low-dimensional data and real image data. Comparing to cross-entropy, square loss has comparable generalization error but noticeable advantages in robustness and model calibration. | Reject | The paper contributes a theoretical understanding of training over-parametrized deep neural networks using gradient descent with respect to square loss in the NTK regime. Besides giving guarantees on the classification accuracy using square loss, authors reveal several interesting properties in this regime including robustness and calibration.
The problem studied here is exciting and very relevant. The current version, unfortunately, has some shortcomings. For example, under a margin assumption, the authors show that the least-squares solution finds something with the margin and, therefore, it yields “robustness.” There is no quantification of how “robust” is the trained model, what is the threat model, what if the noise budget is larger than the attained margin. In general, the analysis lacks any careful finer characterization or quantification of the claimed properties. Besides, as was pointed out, the setting of the neural tangent kernel regime is somewhat limited and to some extent impractical. The assumptions under which the results hold further make the setting of the paper significantly restrictive.
The writing can be improved with more emphasis on the novelty and significance of the contributions. Currently, all of the assumptions are buried in the appendix and the main paper is not even self-contained. I believe the comments from the reviewers have already helped improve the quality of the paper. I encourage the authors to further incorporate the feedback and work towards a stronger submission. | train | [
"8mTW4TAWXxa",
"_Z6gwXDTE7R",
"MV-U-E8--jV",
"zEkNtOfeRYH",
"RpE2MEKesC",
"SiAVRYBvhqp",
"2PZTDPGraph",
"HFYHhRWK7qh",
"DH4PQhiAdda",
"oEHVgky-09D",
"Sh-hdZfjaeb",
"DTFvlYlEIf",
"36PDenRWnTW",
"SlEN_-9Gtk",
"DJouix8wry",
"_Ush70O7c8t",
"o-SJwGo7tEr"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper considers the non-parametric convergence rate of neural network (NTK regime) under tsybakov's noise condition (low noise with large margin) and discussed the potential application of the theoretical result on the robustness of neural network and calibration results. Pros:\n- The non-parametric fast rat... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
6,
5
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
2
] | [
"iclr_2022_N3KYKkSvciP",
"SiAVRYBvhqp",
"2PZTDPGraph",
"RpE2MEKesC",
"o-SJwGo7tEr",
"8mTW4TAWXxa",
"_Ush70O7c8t",
"DJouix8wry",
"DJouix8wry",
"SlEN_-9Gtk",
"iclr_2022_N3KYKkSvciP",
"iclr_2022_N3KYKkSvciP",
"iclr_2022_N3KYKkSvciP",
"iclr_2022_N3KYKkSvciP",
"iclr_2022_N3KYKkSvciP",
"iclr... |
iclr_2022_Mh40mAxxAUz | Bounding Membership Inference | Differential Privacy (DP) is the de facto standard for reasoning about the privacy guarantees of a training algorithm. Despite the empirical observation that DP reduces the vulnerability of models to existing membership inference (MI) attacks, a theoretical underpinning as to why this is the case is largely missing in the literature. In practice, this means that models need to be trained with differential privacy guarantees that greatly decrease their accuracy. In this paper, we provide a tighter bound on the accuracy of any membership inference adversary when a training algorithm provides $\epsilon$-DP. Our bound informs the design of a novel privacy amplification scheme, where an effective training set is sub-sampled from a larger set prior to the beginning of training, to greatly reduce the bound on MI accuracy. As a result, our scheme enables $\epsilon$-DP users to employ looser differential privacy guarantees when training their model to limit the success of any MI adversary; this, in turn, ensures that the model's accuracy is less impacted by the privacy guarantee. Finally, we discuss the implications of our MI bound on machine unlearning. | Reject | As the public post indicates, significant deliberation went into this decision. However, the core criticism remains: the primary contribution of this paper, Theorem 1, is somewhat incremental. It is acknowledged that MI is an important problem and understanding its intricacies is worthwhile, but the present paper's contributions in this space remains narrow. A more thorough exploration of the points brought up in the latest discussion and author response might help strengthen the paper. In particular, more careful discussion and systematic discussion and exploration of relationships between various MI attack efficacy measures (accuracy vs positive accuracy) and privacy notions (pure vs approx DP -- it wasn't totally clear how broad the result in Section 5.2.1 is without a precise theorem statement) would strengthen the paper. Additionally, while it indeed seems that the positive accuracy bound given also suffices to protect against the type of attack mentioned (where 1% of the datapoints are highly vulnerable), it is unclear if this is necessary. This feeds into the previous point: it would be valuable to get a more systematic understanding of the various MI efficacy measures and how they interact with DP. Finally, it is now appreciated that the Sablayrolles et al (2019) result worked under an unusual model restriction, though deficiencies of their result does not necessarily make this result stronger (as an aside, I believe their restriction is so that they can get a tight understanding of behavior in other settings, and DP protections were somewhat of an afterthought). The authors are encouraged to further build on this work, potentially in the directions suggested, to get a more thorough understanding of the relationships between DP and MI attacks | train | [
"kD0nVZBvPUM",
"G81iYQQpzZ",
"0NzHHXF0esV",
"-VqQNRYty15",
"TXxu_1Fe4q",
"Bpmi_KvxXr",
"HRscxTeOQYi",
"S4zoawaA1W",
"iC-TgDgP1MX",
"7fNMuEw5Rr",
"fv5XyMBijnl",
"p7ZNxkZr75F",
"vzmFWTsTZnA",
"aaolMbsZJwN",
"Ntm87HQFRCO",
"maABcXBZw-C",
"BGEJRO6nX78",
"VSX4BMjRceI",
"SdeN61XhNzS"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Thank you for the thorough response. It will be taken into account by the reviewers and myself in the final decision for your paper. Again, apologies for the last-minute nature of all of this (and on a weekend at that): I made the decision that it would be more fair to the authors to give them a chance to underst... | [
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
6,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
2,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"G81iYQQpzZ",
"0NzHHXF0esV",
"-VqQNRYty15",
"TXxu_1Fe4q",
"S4zoawaA1W",
"iclr_2022_Mh40mAxxAUz",
"S4zoawaA1W",
"iclr_2022_Mh40mAxxAUz",
"iclr_2022_Mh40mAxxAUz",
"fv5XyMBijnl",
"aaolMbsZJwN",
"iclr_2022_Mh40mAxxAUz",
"iclr_2022_Mh40mAxxAUz",
"Bpmi_KvxXr",
"SdeN61XhNzS",
"p7ZNxkZr75F",
... |
iclr_2022_NCwIM2Q8ah6 | MDFL: A UNIFIED FRAMEWORK WITH META-DROPOUT FOR FEW-SHOT LEARNING | Conventional training of deep neural networks usually requires a substantial amount of data with expensive human annotations. In this paper, we utilize the idea of meta-learning to integrate two very different streams of few-shot learning, i.e., the episodic meta-learning-based and pre-train finetune-based few-shot learning, and form a unified meta-learning framework. In order to improve the generalization power of our framework, we propose a simple yet effective strategy named meta-dropout, which is applied to the transferable knowledge generalized from base categories to novel categories. The proposed strategy can effectively prevent neural units from co-adapting excessively in the meta-training stage. Extensive experiments on the few-shot object detection and few-shot image classification datasets, i.e., Pascal VOC, MS COCO, CUB, and mini-ImageNet, validate the effectiveness of our method.
| Reject | This work proposes an approach to unify pre-training-based and meta-learning-based few-shot learning, inspired by dropout.
None of the reviewers support the acceptance of this work, despite the authors' detailed rebuttals, with the majority of reviewers confirming their preference for rejection following the author response.
I unfortunately could not find a good reason to dissent from the reviewers majority opinion, and therefore also recommend rejection at this time. | train | [
"Ub1AFbaQH2v",
"jRHtMefYJkb",
"A5E86kyfctw",
"SQgIVueF8kR",
"nyf6ISisG_h",
"5jFN9alYSZA",
"R4JW5VHwKqc",
"5mF8AVzZxwq",
"VUACwYHcFWg",
"HMMTmfUPvQ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you authors for the feedback. I don't see significant contributions in this current version. It is actually a good direction to explore the better generalization of meta-models using regularization methods. Hope to see an improved version in the future.",
" Thanks the authors for providing the response. U... | [
-1,
-1,
5,
-1,
-1,
-1,
-1,
5,
3,
3
] | [
-1,
-1,
3,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"nyf6ISisG_h",
"SQgIVueF8kR",
"iclr_2022_NCwIM2Q8ah6",
"5mF8AVzZxwq",
"HMMTmfUPvQ",
"VUACwYHcFWg",
"A5E86kyfctw",
"iclr_2022_NCwIM2Q8ah6",
"iclr_2022_NCwIM2Q8ah6",
"iclr_2022_NCwIM2Q8ah6"
] |
iclr_2022_2DJn3E7lXu | What to expect of hardware metric predictors in NAS | Modern Neural Architecture Search (NAS) focuses on finding the best performing architectures in hardware-aware settings; e.g., those with an optimal tradeoff of accuracy and latency. Due to many advantages of prediction models over live measurements, the search process is often guided by estimates of how well each considered network architecture performs on the desired metrics. Typical predic-
tion models range from operation-wise lookup tables over gradient-boosted trees and neural networks, with little known information on how they compare. We evaluate 18 different performance predictors on ten combinations of metrics, devices, network types, and training tasks, and find that MLP models are the most promising. We then simulate and evaluate how the guidance of such prediction models affects the subsequent architecture selection. Due to inaccurate predictions, the selected architectures are generally suboptimal, which we quantify as
an expected reduction in accuracy and hypervolume. We show that simply verifying the predictions of just the selected architectures can lead to substantially improved results. Under a time budget, we find it preferable to use a fast and inaccurate prediction model over accurate but slow live measurements. | Reject | This paper comprehensively evaluated 18 different performance predictors on ten combinations of metrics, devices, network types, and training tasks for NAS. While evaluating and comparing different prediction models is not itself novel, the authors provided many insights that are potentially interesting to future NAS developments.
Reviewer reactions to this paper are rather mediocre and lukewarm. It is in general consensus that this work gives a good empirical analysis on hardware metric predictors for NAS, but the novelty is low and it is perhaps a bit incremental (e.g., nothing "shockingly new" was revealed, and observations are mostly "as expected"). Despite the authors improving the paper during rebuttal with new plots/tables, there remain to be unaddressed comments, e.g., adding experiments that run BO / evolution / etc with different hardware predictors and comparing the quality of the Pareto front. Those missed points were also raised in the private discussion.
After personally reading this paper, AC sides with most reviewers that this paper lacks true novelty nor technical excitement. While the empirical study is valuable, it perhaps suits venues other than ICLR, e.g., the NeurIPS benchmark track. | train | [
"K6j2Uangp39",
"Butg48DVIti",
"v7CfgNnCPlE",
"fbbGi7jpeXW",
"8a9_RtzPAow",
"2jEN9OSreOP",
"9VyYo1EB28I",
"Fz0VhxYi-Xe",
"c-JQhoFjMPe",
"Y_LIScPrE8v",
"D7FoAzFvYZY",
"BYYiEw9b4aF",
"0z9qg5c7z-0",
"6kriizjcuSw",
"FvwvxtYgcC"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper systematically evaluates a wide range of hardware performance predictors across different networks/devices/tasks and analyzes the influence of such predictors to the architecture selection process. It provides insights for the community about how to select a proper hardware performance predictor in diff... | [
6,
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
5,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2
] | [
"iclr_2022_2DJn3E7lXu",
"iclr_2022_2DJn3E7lXu",
"BYYiEw9b4aF",
"iclr_2022_2DJn3E7lXu",
"2jEN9OSreOP",
"9VyYo1EB28I",
"Fz0VhxYi-Xe",
"fbbGi7jpeXW",
"K6j2Uangp39",
"D7FoAzFvYZY",
"FvwvxtYgcC",
"0z9qg5c7z-0",
"Butg48DVIti",
"iclr_2022_2DJn3E7lXu",
"iclr_2022_2DJn3E7lXu"
] |
iclr_2022_K3uRhaKJuZg | Video Forgery Detection Using Multiple Cues on Fusion of EfficientNet and Swin Transformer | The rapid development of video processing technology makes it easy for people to forge videos without leaving visual artifacts. The spread of forged videos may lead to moral and legal consequences and pose a potential threat to people's lives and social stability. So it is significant to identify deepfake video information. Although the previous detection methods have achieved high accuracy, the generalization is poor when facing unprecedented data in the real scene. There are three fundamental reasons. The first is that capturing the general clue of artifacts is difficult. The second is that selecting the appropriate model is challenging in specific feature extraction. The third is that exploiting fully and effectively the extracted features is hard. We find that the high-frequency information in the image and the texture in the shallow layer of the model expose the subtle artifacts. The optical flow of the real video has variations while the optical flow of the deepfake video has rarely variations. Furthermore, consecutive frames in the real video have temporal consistency. In this paper, we propose a dual-branch video forgery detection model named ENST, which integrates parallelly and interactively EfficientNet-B5 and Swin Transformer. Specifically, EfficientNet-B5 extracts the artifacts information of high frequency and texture in the shallow layer of the model. Swin Transformer captures the subtle discrepancies between optical flows. To extract more robust face features, we design a new loss function for EfficientNet-B5. In addition, we also introduce the attention mechanism into EfficientNet-B5 to enhance the extracted features. We conduct test experiments on FaceForensics++ and Celeb-DF (v2) datasets, and comprehensive results show that ENST has higher accuracy and generalization, which is superior to the most advanced methods. | Reject | The authors propose a new method for deepfake detection (ENST) which relies on high-frequency information, low-level/shallow features, and optical flow. In particular, EfficientNet-B5 is used to extract the high frequency info and shallow features, and a Swin Transformer to capture discrepancies between optical flows. Empirical validation on FaceForensics++ and Celeb-DF shows some improvements over the baselines.
The reviewers found this to be a relevant and timely topic. The reviewers also found that integrating information from the frequency domain, the spatial domain, and optical flow is a promising approach. There were three reviewers suggesting rejection, and one suggesting acceptance. After the rebuttal and discussion phase, the following remaining issues were highlighted:
- **Limited technical novelty** (nearly all components used in this work were already expired in other work).
- Underwhelming empirical improvements given the fact that the model uses EfficientNet-B5 and the SwinTransformer.
- Many claims are still not supported by empirical evidence. For instance, to claim generalisation, an extensive analysis, including more datasets as well as competing methods should be carried out. | train | [
"YXYUcxs4aOu",
"PyYxHbYhYzG",
"iaEnEDv5OjZ",
"bX-jn2BLBhd",
"EZXGTRGTiBq",
"sV-SGx3vcdd",
"kLAJA3egg1z",
"6eJli_bTgZf",
"IZFHw-DpzX_",
"0GTZaBttbL",
"LZe2IkKWTL"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I’ve carefully read the authors’ responses and would appreciate their effort.\nWhile there are several interesting points in this paper, I think the main contribution that the authors claimed in the rebuttal, i.e., combining multiple clues, has been explored in previous works, e.g., [A].\nMoreover, the authors cl... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
5,
4
] | [
"kLAJA3egg1z",
"EZXGTRGTiBq",
"sV-SGx3vcdd",
"LZe2IkKWTL",
"0GTZaBttbL",
"IZFHw-DpzX_",
"6eJli_bTgZf",
"iclr_2022_K3uRhaKJuZg",
"iclr_2022_K3uRhaKJuZg",
"iclr_2022_K3uRhaKJuZg",
"iclr_2022_K3uRhaKJuZg"
] |
iclr_2022_6uu1t8jQ-M | Generating Novel Scene Compositions from Single Images and Videos | Given a large dataset for training, GANs can achieve remarkable performance for the image synthesis task. However, training GANs in extremely low data regimes remains a challenge, as overfitting often occurs, leading to memorization or training divergence. In this work, we introduce SIV-GAN, an unconditional generative model that can generate new scene compositions from a single training image or a single video clip. We propose a two-branch discriminator architecture, with content and layout branches designed to judge internal content and scene layout realism separately from each other. This discriminator design enables synthesis of visually plausible, novel compositions of a scene, with varying content and layout, while preserving the context of the original sample. Compared to previous single image GANs, our model generates more diverse, higher quality images, while not being restricted to a single image setting. We show that SIV-GAN successfully deals with a new challenging task of learning from a single video, for which prior GAN models fail to achieve synthesis of both high quality and diversity. | Reject | The authors consider the problem of unconditional image generation in the low-data regime, such as learning from frames of a single video or even from a single image. The main idea is to apply GANs with a specific two-branch discriminator architecture such that the content features and layout features are handled independently. Secondly, to improve the variability of generated images the authors apply diversity regularization. The authors show that the proposed model is able to, to a certain extent, generate diverse high quality samples.
The paper is well-written, the authors described their method and the evaluation protocol thoroughly and clearly. The reviewers felt that this submission was borderline, with questionable novelty and significance. In an extensive rebuttal and discussion phase the authors addressed several raised challenges and improved their paper. However, two points remain:
- **Technical novelty**: content and layout separation as well as diversity regularisation previously appeared in many contexts and papers.
- **Motivation and practicality**: One of the main arguments for the utility of the proposed method is to use it for data augmentation. While it may indeed result in content-based augmentations, it nevertheless necessitates training of a GAN for every single image, which is severely limiting in practice.
After reading the manuscript, reviews, and the rebuttals, my view is that the paper is below the acceptance bar and I agree with the points on novelty and significance. In particular, the main application to data augmentation seems to be "unexciting" and the proposed method impractical. At the same time the proposed method is a combination of already known techniques, albeit in a different setting. I suggest the authors condense the arguments in the extensive rebuttal to improve the points raised above and resubmit. | val | [
"vwi2qgfNi9G",
"mALfiM57IAE",
"PH3-jvPDhZ3",
"t3bJlCYWm_x",
"T0tA8eNTIcR",
"0OQvtPs3yoZ",
"tQpEpXLWy7F",
"_4-tDJAcpw6",
"DAcHOtppBr",
"L_c8clSafiP",
"W7CRaOjev5N",
"FbXKHpaeYm2",
"IGCEQWIyG-e",
"94k1hVWP2y7",
"1IpEpXWtmGw",
"eWXYSiKcJuB",
"HAT25LfDOlQ",
"Zz2KIyUBbCV",
"W4gm7uJGoE... | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"... | [
" Thanks for your feedback on our answers. We, unfortunately, could not agree with your points. Especially, we could not understand the reasons for which the points 1 and 3 are considered as shortcomings of our work.\n\t\n1. We did not claim that one setting is harder than another. We just argued that the video set... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
5,
4
] | [
"mALfiM57IAE",
"HAT25LfDOlQ",
"iclr_2022_6uu1t8jQ-M",
"T0tA8eNTIcR",
"tQpEpXLWy7F",
"iclr_2022_6uu1t8jQ-M",
"L_c8clSafiP",
"uDeCx56gj6b",
"uDeCx56gj6b",
"uDeCx56gj6b",
"gPf18gXb8uT",
"gPf18gXb8uT",
"gPf18gXb8uT",
"gPf18gXb8uT",
"z_IES6HoIBG",
"z_IES6HoIBG",
"z_IES6HoIBG",
"PMSESX_e... |
iclr_2022_JKRVarUs3A1 | Distributed Optimal Margin Distribution Machine | Optimal margin Distribution Machine (ODM), a newly proposed statistical learning framework rooting in the novel margin theory, demonstrates better generalization performance than the traditional large margin based counterparts. Nonetheless, the same with other kernel methods, it suffers from the ubiquitous scalability problem in terms of both computation time and memory. In this paper, we propose a Distributed solver for ODM (DiODM), which leads to nearly ten times speedup for training kernel ODM. It exploits a novel data partition method to make the local ODM trained on each partition has a solution close to the global one. When linear kernel used, we extend a communication efficient distributed SVRG method to further accelerate the training. Extensive empirical studies validate the superiority of our proposed method compared to other off-the-shelf distributed quadratic programming solvers for kernel methods. | Reject | While some of the reviewers find that the paper proposes a solid contribution to a problem, I will tend
to agree with other ones that the proposed approach has limited novelty and limited potential for improvement over baselines. In addition, simulations are pretty weak due to lack of comparisons to strong baselines and to lack of clarity. | train | [
"GPn4J-sw6d-",
"aF1mF2EKRpK",
"rOxO0Vnel43",
"8OWfQNYRXnB",
"Z9A6JKWViXS",
"x2oufZmdIt9",
"8E8EA2uiPny",
"Lx5MjuQPX1R",
"YnYaydZ_QAs",
"bXrZS_daEYy"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your appreciation of our work. For your question about certain number of iterations in Alg. 1, we will supplement detailed experiments in our final edition.",
" I want to thank the authors for their response and confirm that I have read their response. The authors partially answer my questions; wh... | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
8,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"aF1mF2EKRpK",
"8OWfQNYRXnB",
"bXrZS_daEYy",
"YnYaydZ_QAs",
"8E8EA2uiPny",
"Lx5MjuQPX1R",
"iclr_2022_JKRVarUs3A1",
"iclr_2022_JKRVarUs3A1",
"iclr_2022_JKRVarUs3A1",
"iclr_2022_JKRVarUs3A1"
] |
iclr_2022_ETiaOyNwJW | Revisiting Virtual Nodes in Graph Neural Networks for Link Prediction | It is well known that the graph classification performance of graph neural networks often improves by adding an artificial virtual node to the graphs, which is connected to all nodes in the graph. Intuitively, the virtual node provides a shortcut for message passing between nodes along the graph edges. Surprisingly, the impact of virtual nodes with other problems is still an open research question.
In this paper, we adapt the concept of virtual nodes to the link prediction scenario, where we usually have much larger, often dense, and more heterogeneous graphs. In particular, we use multiple virtual nodes per graph and graph-based clustering to determine the connections to the graph nodes. We also investigate alternative clustering approaches (e.g., random or more advanced) and compare to the original model with a single virtual node. We conducted extensive experiments over different datasets of the Open Graph Benchmark (OGB) and analyze the results in detail. We show that our virtual node extensions yield rather stable performance increases and allow standard graph neural networks to compete with complex state-of-the-art models, as well as with the models leading the OGB leaderboards. | Reject | This paper considers GNNs for link-prediction (predicting which links are likely to appear next). An idea that has been used before is to add virtual nodes to improve the ``under-reaching” problem in shallow GNNs; this paper considers this systematically in the context of link prediction. Specifically, one approach developed is to cluster the graph into clusters C(i), I = 1, 2, …, k for some k and to add a virtual node u(i) for each index i, which is made adjacent to each node in C(i). This can ease information exchange, particularly in message-passing GNNs.
Link prediction is an important problem. However, there seem to be at least three issues with this work: the performance gains obtained are not strong enough, it is not conceptually clear why virtual nodes should help with link prediction, and the analysis is quite a bit about repeating existing analyses on nodes alone. I recommend the authors to address these issues thoroughly in the next version of the paper. | train | [
"f2BrXSA24Ep",
"PfsO0E6V3zI",
"0qpr_79Cu72",
"iyeHxTrHa4g",
"8cZaGBMHtYS",
"ZZfIEMddg6b",
"KanRpx58Kt",
"1w1_BD4cTd",
"ArvXrNYm1zQ",
"N1VX8br1Tgq",
"Mqzh4QMHFaZ",
"a9IzYiwCRHU",
"lMinbF5VNk",
"HJrhH7SG92",
"GFTgXbP7Gp7",
"P61aMxArFlK",
"2vuzvCnLK81",
"lvmu121n-up",
"SuL0JSz74RX",... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_r... | [
" Dear Reviewer gjKs,\n\nPlease let us know in case anything remains unclear or concerning. We have concluded the discussion with reviewer mmjp and addressed your other comments.\nThank you!",
" We sincerely thank you for taking the time for this extended and fair discussion.\n\nWe totally agree on the point that... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"1w1_BD4cTd",
"iyeHxTrHa4g",
"iclr_2022_ETiaOyNwJW",
"8cZaGBMHtYS",
"ZZfIEMddg6b",
"KanRpx58Kt",
"N1VX8br1Tgq",
"Mqzh4QMHFaZ",
"iclr_2022_ETiaOyNwJW",
"SuL0JSz74RX",
"2vuzvCnLK81",
"lMinbF5VNk",
"HJrhH7SG92",
"GFTgXbP7Gp7",
"lvmu121n-up",
"4eMqvS_3tWk",
"xE5DE7AclB",
"Ih3vWejunAE",... |
iclr_2022_wgR0BQfG5vi | Adaptive Label Smoothing with Self-Knowledge | Overconfidence has been shown to impair generalization and calibration of a neural network. Previous studies remedy this issue by adding a regularization term to a loss function, preventing a model from making a peaked distribution. Label smoothing smoothes target labels with a predefined prior label distribution; as a result, a model is learned to maximize the likelihood of predicting the soft label. Nonetheless, the amount of smoothing is the same in all samples and remains fixed in training. In other words, label smoothing does not reflect the change in probability distribution mapped by a model over the course of training. To address this issue, we propose a regularization scheme that brings dynamic nature into the smoothing parameter by taking model probability distribution into account, thereby varying the parameter per instance. A model in training self-regulates the extent of smoothing on the fly during forward propagation. Furthermore, inspired by recent work in bridging label smoothing and knowledge distillation, our work utilizes self-knowledge as a prior label distribution in softening target labels, and presents theoretical support for the regularization effect by knowledge distillation. Our regularizer is validated comprehensively on various datasets in machine translation and outperforms strong baselines not only in model performance but also in model calibration by a large margin. | Reject | The paper proposes an approach to performing label smoothing, with the amount of smoothing being sample-dependent and guided by the model's prediction (similar to self-distillation). While the reviewers find the studied problem relevant and important, they find the contributions (in their current state) to be borderline, mainly on the basis of lack of novelty and missing discussion with some related papers. While authors' response was able to partially resolve these concerns, at the end none of the reviewers was a strong advocate for accepting the paper and all scores remained at the borderline (although on the positive side). In concordance with the reviewers, I believe this submission can be made much stronger by digging a bit deeper into the problem, and also making broader connections with the existing literature.
As a concrete example/suggestion (among many other possibilities for strengthening this work), the authors may want to go a bit deeper into the theoretical analysis. Currently, their analysis shows the approach is able to reduce model's confidence, which is what happens in label smoothing and self-distillation. However, self-distillation is more than confidence reduction, and the information contained in the "dark knowledge" can provide a much stronger regularization than a sole confidence reduction argument. There are already some papers in the literature on the regularization/generalization effects of self-distillation, which the authors might want to use as a stepping-stone. | train | [
"BImYFN7X3wH",
"NbZ2fprb0dW",
"CKeL8unLtWb",
"32b5MyaTxxr",
"O_AgUYLm-K3",
"Wuz6apW6gnM",
"jUhBXYzUDYg",
"xbeGi5PZ6M",
"4pe_cpm4YdN",
"6Ug4NMQ8Znv",
"hzWVB7Q8vV",
"QklMZTPQwWL",
"VFiBZ03LTgt",
"qIQhwuFWzd0",
"8XCwfWOb1OY"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper proposes a simple yet effective way of smoothing the labels for each data point. The smoothing parameter alpha is dynamically adapted for each data point based on the (normalized) entropy of the predicted label-distribution for this data point. The distribution that is used for label-smoothing originates... | [
6,
6,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
3,
4,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2022_wgR0BQfG5vi",
"iclr_2022_wgR0BQfG5vi",
"VFiBZ03LTgt",
"NbZ2fprb0dW",
"xbeGi5PZ6M",
"8XCwfWOb1OY",
"BImYFN7X3wH",
"iclr_2022_wgR0BQfG5vi",
"xbeGi5PZ6M",
"iclr_2022_wgR0BQfG5vi",
"xbeGi5PZ6M",
"NbZ2fprb0dW",
"8XCwfWOb1OY",
"BImYFN7X3wH",
"iclr_2022_wgR0BQfG5vi"
] |
iclr_2022_-FP1-bBxOzv | Self Reward Design with Fine-grained Interpretability | Transparency and fairness issues in Deep Reinforcement Learning may stem from the black-box nature of deep neural networks used to learn its policy, value functions etc. This paper proposes a way to circumvent the issues through the bottom-up design of neural networks (NN) with detailed interpretability, where each neuron or layer has its own meaning and utility that corresponds to humanly understandable concept. With deliberate design, we show that lavaland problems can be solved using NN model with few parameters. Furthermore, we introduce the Self Reward Design (SRD), inspired by the Inverse Reward Design, so that our interpretable design can (1) solve the problem by pure design (although imperfectly) (2) be optimized via SRD (3) perform avoidance of unknown states by recognizing the inactivations of neurons aggregated as the activation in \(w_{unknown}\). | Reject | The paper proposes a design of interpretable neural networks where each neuron is hand-designed to serve a task-specific role, and the network weights can be optimized via a few interactions with the environment. The reviewers acknowledged that the interpretability of neural networks is an important research direction. However, the reviewers pointed out several weaknesses in the paper, and there was a clear consensus that the work is not ready for publication. The reviewers have provided detailed and constructive feedback to the authors. We hope that the authors can incorporate this feedback when preparing future revisions of the paper. | train | [
"inqZzp-d58J",
"WO33_i469Vj",
"yjVV5eBX-C",
"4CTV6-BDlJx",
"b2_lXr28D85",
"uvh9VmK0lPm",
"ltBOVGHjp-r",
"9YXKvbQS6KE",
"_pzsqWiEBDzH",
"drYIzuLfwWY",
"RlKbV1v6fJG3",
"RqTJeuEE7Zyp",
"IoKU1Y1_wJJ",
"ygtNWccAuI",
"gesEyVLmdbM",
"QrNHlk-W3dU",
"STxVFU25lxl"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The feedback is very much appreciated and we will consider them in the near future.\n\nSome conflicting points that we believe are irresolvable:\n\n1. Definition of interpretability. We are actually just surprised that there is still an issue with interpretability even when we attempt to construct network with su... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
3,
1,
1,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
3,
2
] | [
"yjVV5eBX-C",
"RlKbV1v6fJG3",
"uvh9VmK0lPm",
"iclr_2022_-FP1-bBxOzv",
"iclr_2022_-FP1-bBxOzv",
"QrNHlk-W3dU",
"IoKU1Y1_wJJ",
"_pzsqWiEBDzH",
"drYIzuLfwWY",
"ygtNWccAuI",
"gesEyVLmdbM",
"STxVFU25lxl",
"iclr_2022_-FP1-bBxOzv",
"iclr_2022_-FP1-bBxOzv",
"iclr_2022_-FP1-bBxOzv",
"iclr_2022_... |
iclr_2022_Vvb-eicR8N | Learning-Augmented Sketches for Hessians | Sketching is a dimensionality reduction technique where one compresses a matrix by linear combinations that are typically chosen at random. A line of work has shown how to sketch the Hessian to speed up each iteration in a second order method, but such sketches usually depend only on the matrix at hand, and in a number of cases are even oblivious to the input matrix. One could instead hope to learn a distribution on sketching matrices that is optimized for the specific distribution of input matrices. We show how to design learned sketches for the Hessian in the context of second order methods. We prove that a smaller sketching dimension of the column space of a tall matrix is possible, assuming the knowledge of the indices of the rows of large leverage scores. This would lead to faster convergence of the iterative Hessian sketch procedure. We also design a new objective to learn the sketch, whereby we optimize the subspace embedding property of the sketch. We show empirically that learned sketches, compared with their "non-learned" counterparts, do improve the approximation accuracy for important problems, including LASSO and matrix estimation with nuclear norm constraints. | Reject | This paper proposes a new contribution in the recent literature on learning distributions of sketches. While all reviewers have recognized the overall good quality of the presentation, two factors seem to weight heavily on a negative decision: clarifications on the contribution's scope (presented as a tool for general Hessians in the introduction, but ultimately only applied to least-square errors of linear predictors, to recover an explicit factorization of the Hessian matrix) and links with existing literature; weakness of experiments whose small scale does not justify using sketches in the first place. Since this is a "learning" approach, I am particularly sensitive to the latter point, and therefore am inclined to reject, but I encourage the authors to address these two issues with the current draft. | train | [
"OeLuP0l2Kw6",
"eKIBd8gNmkU",
"vBJ4dq4DCc1",
"Ux8FJrA0ldO",
"hM45137s-ts",
"IDQsv-mBjF",
"7kuiiLD1Irt",
"G2Bz5SYLHUk",
"PKE8zwWZ12d",
"5cAD4g_FUxv",
"JTb_6xra10g",
"MSS3GYjGdoG",
"JsIxqefbCYD",
"KNUuUpBoy5O"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" > 1. Why is that? If we do not need an explicit , how would Line 7 and 8 of Algorithm 2 be implemented?\n\nIt is not necessary to generate or store the entire matrix $T$ explicitly since Line 8 depends only on $TA$. The algorithm works as long as we can generate $TA$ efficiently. For example, we can take $T$ to b... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"vBJ4dq4DCc1",
"Ux8FJrA0ldO",
"G2Bz5SYLHUk",
"PKE8zwWZ12d",
"IDQsv-mBjF",
"JsIxqefbCYD",
"KNUuUpBoy5O",
"JTb_6xra10g",
"MSS3GYjGdoG",
"JsIxqefbCYD",
"iclr_2022_Vvb-eicR8N",
"iclr_2022_Vvb-eicR8N",
"iclr_2022_Vvb-eicR8N",
"iclr_2022_Vvb-eicR8N"
] |
iclr_2022_ZWykq5n4zx | Boosting the Confidence of Near-Tight Generalization Bounds for Uniformly Stable Randomized Algorithms | High probability generalization bounds of uniformly stable learning algorithms have recently been actively studied with a series of near-tight results established by~\citet{feldman2019high,bousquet2020sharper}. However, for randomized algorithms with on-average uniform stability, such as stochastic gradient descent (SGD) with time decaying learning rates, it still remains less well understood if these deviation bounds still hold with high confidence over the internal randomness of algorithm. This paper addresses this open question and makes progress towards answering it inside a classic framework of confidence-boosting. To this end, we first establish an in-expectation first moment generalization error bound for randomized learning algorithm with on-average uniform stability, based on which we then show that a properly designed subbagging process leads to near-tight high probability generalization bounds over the randomness of data and algorithm. We further substantialize these generic results to SGD to derive improved high probability generalization bounds for convex or non-convex optimization with natural time decaying learning rates, which have not been possible to prove with the existing uniform stability results. Specially for deterministic uniformly stable algorithms, our confidence-boosting results improve upon the best known generalization bounds in terms of a logarithmic factor on sample size, which moves a step forward towards resolving an open question raised by~\citet{bousquet2020sharper}. | Reject | Confidence boosting via aggregating multiple run of algorithms has been used before. The main result of the paper relies on a generic confidence boosting trick. The authors for instance cite Shalev-Schwartz et al 2010 theorem 26 in remark 4 of their paper and correctly point out that for deterministic algorithms like ERM one can use that for confidence boosting. While that theorem there is proved for excess risk and for deterministic algorithms, the main idea there to me seems like what is used in the authors paper as well.
The basic idea:
Property A holds in expectation, Hence use Markov inequality to get a low grade probability version of it in each of the K pieces
Now probability that at least one of the pieces is good is high since each piece is independent of the other
Finally aggregate with simple concentration with union bound.
In Shalev-Schwartz et al 2010 this is done with property being excess risk, here it is done with generalization error.
(Oh and I should add, the fact that the algorithm is randomized does not affect this line of reasoning as long as we use fresh randomness for each of the K blocks).
Now the missing piece covered is that on-average stability implies generalization in expectation. But isn’t this already known to be true in the stability literature?
To me it seems like the main technical contribution of the paper is not as novel. Further, as one of the reviewers points out, the main goal should be to prove high probability guarantee for the algorithm popularly used like SGD not the confidence boosted version of it.
None the less, it seems like the application of the result to SGD seems interesting and somewhat new.
I am reluctant to propose an accept here. | val | [
"t98YWP5for",
"QyyU39Xb_HI",
"wj4wWKTaRQG",
"l3nXhbDONf",
"gTvNSzRxeuR",
"9d8L6WB7Jei",
"fMAy3TQEdEU",
"c3g5-ee-JlK",
"gZDYrGOKqyx",
"hM2Ygyg4M3_"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for the detailed response. I'm happy with the changes made in the introduction, and will keep my original score of 8.",
" Thank you for your very helpful comments and positive evaluation of our work.\n\n> **Your comment:** In Corollary 1 2 and 3, Consider Algorithm 1 specified to $A_{SGD-w}$... | [
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
4,
5
] | [
"gTvNSzRxeuR",
"hM2Ygyg4M3_",
"gZDYrGOKqyx",
"c3g5-ee-JlK",
"fMAy3TQEdEU",
"iclr_2022_ZWykq5n4zx",
"iclr_2022_ZWykq5n4zx",
"iclr_2022_ZWykq5n4zx",
"iclr_2022_ZWykq5n4zx",
"iclr_2022_ZWykq5n4zx"
] |
iclr_2022_30nbp1eV0dJ | Tight lower bounds for Differentially Private ERM | We consider the lower bounds of differentially private ERM for general convex functions. For approximate-DP, the well-known upper bound of DP-ERM is $O(\frac{\sqrt{p\log(1/\delta)}}{\epsilon n})$, which is believed to be tight. However, current lower bounds are off by some logarithmic terms, in particular $\Omega(\frac{\sqrt{p}}{\epsilon n})$ for constrained case and $\Omega(\frac{\sqrt{p}}{\epsilon n \log p})$ for unconstrained case.
We achieve tight $\Omega(\frac{\sqrt{p \log(1/\delta)}}{\epsilon n})$ lower bounds for both cases by introducing a novel biased mean property for fingerprinting codes. As for pure-DP, we utilize a novel $\ell_2$ loss function instead of linear functions considered by previous papers, and achieve the first (tight) $\Omega(\frac{p}{\epsilon n})$ lower bound. We also introduce an auxiliary dimension to simplify the computation brought by $\ell_2$ loss. Our results close a gap in our understanding of DP-ERM by presenting the fundamental limits. Our techniques may be of independent interest, which help enrich the tools so that it readily applies to problems that are not (easily) reducible from one-way marginals. | Reject | This paper gives sample complexity lower bounds for differentially private empirical risk minimization (ERM). While the reviewers agreed that the results are non-trivial, the general consensus was that the proofs are tweaks of previously developed techniques and that the main result is actually new in a rather narrow setting (specifically, for unconstrained ERM and sub-constant error parameter). Another concern was that one of the proofs (the one on pure differential privacy) was incorrect in the submission; a different proof was provided subsequently (which also closely follows prior work). Finally, the reviewers pointed out several issues with the clarity of the presentation and comparison to prior work. Given the above, this work is below the acceptance threshold. | test | [
"FAh5_pK0Ycb",
"tOBVJ7rGDBC",
"UqFQLXqlXa",
"317zmjKpPvE",
"I2HhQEMhSkZ",
"HaBr64hLiS",
"pXx4lt3MZ8",
"fMiDaJWBBdt",
"ZtJhEWvwtoW",
"X9Z3YBrX55y",
"8mU3_PFrtl2",
"u9wdYRnCKED",
"TbH9iCwK8Ns",
"-NuAPgqVHiG",
"OsgsP0wJT7E"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for these additional details. It seems that the paper can be made significantly stronger than the initial submission if the content from the comments here is added to it and fleshed out. I am, however, generally reluctant to recommend acceptance based on such significant changes to the original submissi... | [
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
8
] | [
-1,
-1,
-1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"tOBVJ7rGDBC",
"TbH9iCwK8Ns",
"HaBr64hLiS",
"iclr_2022_30nbp1eV0dJ",
"HaBr64hLiS",
"X9Z3YBrX55y",
"fMiDaJWBBdt",
"8mU3_PFrtl2",
"OsgsP0wJT7E",
"-NuAPgqVHiG",
"TbH9iCwK8Ns",
"317zmjKpPvE",
"iclr_2022_30nbp1eV0dJ",
"iclr_2022_30nbp1eV0dJ",
"iclr_2022_30nbp1eV0dJ"
] |
iclr_2022_xwAw8QZkpWZ | SAFER: Data-Efficient and Safe Reinforcement Learning Through Skill Acquisition | Though many reinforcement learning (RL) problems involve learning policies in settings that are difficult to specify safety constraints and sparse rewards, current methods struggle to rapidly and safely acquire successful policies. Behavioral priors, which extract useful policy primitives for learning from offline datasets, have recently shown considerable promise at accelerating RL in more complex problems. However, we discover that current behavioral priors may not be well-equipped for safe policy learning, and in some settings, may promote unsafe behavior, due to their tendency to ignore data from undesirable behaviors. To overcome these issues, we propose SAFEty skill pRiors (SAFER), a behavioral prior learning algorithm that accelerates policy learning on complex control tasks, under safety constraints. Through principled contrastive training on safe and unsafe data, SAFER learns to extract a safety variable from offline data that encodes safety requirements, as well as the safe primitive skills over abstract actions in different scenarios. In the inference stage, SAFER composes a safe and successful policy from the safety skills according to the inferred safety variable and abstract action. We demonstrate its effectiveness on several complex safety-critical robotic grasping tasks inspired by the game Operation, in which SAFER not only out-performs baseline methods in learning successful policies but also enforces safety more effectively. | Reject | The paper provides an algorithmic framework to accelerate RL through Behavioral Priors, while having some notion of safety incorporated. The reviewers are divided about this paper:
On the positive side, some of the reviewers consider the problem important, and the experimental results reasonable and promising.
On the negative side, reviewers raised issues such as
1) The paper is on a heuristic side.
2) No formal guarantee on the safety is provided.
3) The paper is not as self-contained as it should be, as it relies much on Singh et al. (2021).
4) The algorithm requires access to unsafe offline data.
I do not give the same weights to all these concerns. For example, even though (d) is an issue in some applications, it is alright for others. What concern me most are (1) and (2).
A method for safety that is only evaluated empirically and does not have any formal guarantee cannot be used for safety critical tasks. I realize that some other published papers may have the same issue. But given that this is a real concern, and that two out of four reviewers believe that the paper should not be accepted, unfortunately I cannot recommend acceptance of this paper, especially given the competitiveness of this conference.
P.S: I also noticed that in the proof of Proposition 3.1, an expectation term $E[p_\phi(a|s,c)]$ in Eq. (9) is replaced by a $\log p_\phi(a|s,c)$. This requires more justifications. | train | [
"QSB4QfmDSB4",
"kpaqEE_0jl",
"InPrluRvY8f",
"7U_fsN4AF5r",
"Z-DL11XMKki",
"LoE4dzdVvf",
"HTWahvTwYxl",
"6pBcZU2Nz46",
"twpwLGxPhnO",
"fOSLNRuClfE",
"tsQlMLQvA6H",
"wZ6xKFfoUOw",
"ZXjJFQbQUcu",
"bWWdTCuY1N",
"FyGhTyGq7F",
"vRRHPoRGVJ",
"2tq3k4RkKCW",
"cOE2_hD7Xke",
"nVqb4RXzGjg",
... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
... | [
" I appreciate the authors' response. The concerns I pointed out in my review and response still remain, unfortunately. \n\n- I understand that the referenced prior works evaluate safety in simulation, and the authors advise using the method only in settings where prior data is already available. Note that this is ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
3,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
4
] | [
"HTWahvTwYxl",
"iclr_2022_xwAw8QZkpWZ",
"Gkr8PfmUc_v",
"5OxrF4AwtON",
"fOSLNRuClfE",
"twpwLGxPhnO",
"6pBcZU2Nz46",
"vRRHPoRGVJ",
"iclr_2022_xwAw8QZkpWZ",
"tsQlMLQvA6H",
"wZ6xKFfoUOw",
"FyGhTyGq7F",
"fG839w-0Q_b",
"Gkr8PfmUc_v",
"Be8LEqn-MBZ",
"2tq3k4RkKCW",
"ZXjJFQbQUcu",
"nVqb4RXz... |
iclr_2022_aUoV6qhY_e | Specialized Transformers: Faster, Smaller and more Accurate NLP Models | Transformers have greatly advanced the state-of-the-art in Natural Language Processing (NLP) in recent years, but are especially demanding in terms of their computation and storage requirements. Transformers are first pre-trained on a large dataset, and subsequently fine-tuned for different downstream tasks. We observe that this design process leads to models that are not only over-parameterized for downstream tasks, but also contain elements that adversely impact accuracy of the downstream tasks.
We propose a Specialization framework to create optimized transformer models for a given downstream task. Our framework systematically uses accuracy-driven pruning, i.e., it identifies and prunes parts of the pre-trained Transformer that hinder performance on the downstream task. We also replace the dense soft-attention in selected layers with sparse hard-attention to help the model focus on the relevant parts of the input. In effect, our framework leads to models that are not only faster and smaller, but also more accurate. The large number of parameters contained in Transformers presents a challenge in the form of a large pruning design space. Further, the traditional iterative prune-retrain approach is not applicable to Transformers, since the fine-tuning data is often very small and re-training quickly leads to overfitting. To address these challenges, we propose a hierarchical, re-training-free pruning method with model- and task- specific heuristics. Our experiments on GLUE and SQUAD show that Specialized models are consistently more accurate (by up to 4.5\%), while also being up to 2.5$\times$ faster and up to 3.2$\times$ smaller than the conventional fine-tuned models. In addition, we demonstrate that Specialization can be combined with previous efforts such as distillation or quantization to achieve further benefits.
For example, Specialized Q8BERT and DistilBERT models exceed the performance of BERT-Base, while being up to 3.7$\times$ faster and up to 12.1$\times$ smaller.
| Reject | The authors propose a simple and effective technique for task-specific pruning of transformer models that identifies which model components to prune by minimizing validation loss. Weaknesses of the paper include (1) related work reads more like a list and doesn’t compare and contrast the proposed approach with related work, (2) authors don’t compare to other structured pruning methods (that use different objectives) (3) lack of novelty — main difference with existing work is using validation loss to optimize and (4) one reviewer was unconvinced that the results should be possible given the approach. I share these concerns, and, in particular, I think they might be related. Given that the models are pruned using the development set (essentially equivalent to training on the development set), it seems infeasible that this approach could have been developed without looking at the testing data, and I’m concerned that this explains the unprecedentedly high accuracy compared to previous pruning approaches. At the very least, comparing to a baseline that trains on development data would be prudent in order to understand the result. | train | [
"AvrHGbwLgVH",
"E9HOWXHxKT_",
"p5irek8syT",
"YKUEgp8pmbO",
"Ihm9F9Un2OP",
"vPqPhGC0Mc",
"vT1iEK6z_yb",
"oy1Y5Nr5mF4",
"53MDQr8F0hY",
"7ywPFpyDd0G",
"TOVmp6odxWa",
"Tv27kPw15fk",
"vlvsSxTH9D9",
"2RO63KVpavb",
"P_aQARv0Cgt",
"m0eIQocWljc",
"pm4us-O-gMJ",
"Tcwa1VVjqK",
"cq3TbyISjbw"... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
... | [
" The authors' response partly addressed my concerns, so I raised my score.\nLooking forward to their implementation to reproduce their results.",
"This paper proposes Specialized Transformer that identifies harmful parts and prunes a pre-trained transformer in a greedy and hierarchical manner to boost accuracy i... | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
3
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"YKUEgp8pmbO",
"iclr_2022_aUoV6qhY_e",
"Ihm9F9Un2OP",
"vPqPhGC0Mc",
"vT1iEK6z_yb",
"oy1Y5Nr5mF4",
"7ywPFpyDd0G",
"53MDQr8F0hY",
"E9HOWXHxKT_",
"WuKEuBejIFs",
"vlvsSxTH9D9",
"iclr_2022_aUoV6qhY_e",
"Tv27kPw15fk",
"P_aQARv0Cgt",
"E9HOWXHxKT_",
"pm4us-O-gMJ",
"Tcwa1VVjqK",
"WuKEuBejIF... |
iclr_2022_a_nR4BPPJF1 | Blessing of Class Diversity in Pre-training | This paper presents a new statistical analysis aiming to explain the recent superior achievements of the pre-training techniques in natural language processing (NLP).
We prove that when the classes of the pre-training task (e.g., different words in masked language model task) are sufficiently diverse, in the sense that the least singular value of the last linear layer in pre-training is large, then pre-training can significantly improve the sample efficiency of downstream tasks. Inspired by our theory, we propose a new regularization technique that targets the multi-class pre-training: a \emph{diversity regularizer only to the last linear layer} in the pre-training phase.
Our empirical results show that this technique consistently boosts the performance of the pre-trained BERT model on different downstream tasks. | Reject | This paper aims to explain the pretraining effectiveness of masked language model, based on the concept of diversity of classes. They empirically study how a diversity regularizer, based on this theory can improve model performance, as an empirical support.
Before rebuttal, reviewers consistently found the empirical study rather preliminary, while authors, through rebuttals, argue the theoretical study should be highlighted as their main contribution, and expressed concerns that the lack of empirical rigor should not be a ground to reject. We agree with these concerns, but rebuttals and discussions failed to convince reviewers that assumptions and evaluations are proper for connecting the proposed theory to potential impacts in pretrained language model scenarios. Revising to make this connection clearer would address the reviewer disagreements in the future. | val | [
"56PNKKU0Nh0",
"ABSJkB-KHWC",
"S2sI2X6JEN9",
"j-gsx4VHuj2",
"lAAERBSK3U7",
"8Au513rYhJW",
"UzQSGpZmJN",
"1ajMq1mTNW",
"VM8rry4-Axg",
"oVSNf3aSKSK",
"O3zQ2_cquE8",
"iOXwXA5_HRj",
"4IzxW01N-10",
"G2BEqyZRXFq",
"YzdFYsp-E1d"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your reply!\n\nRegarding BART, to clarify, the decoder can be viewed as solving a structured multi-class prediction problem (tokens are predicted auto-regressively) whereas masked language modeling can be viewed as an unstructured multiclass prediction problem.",
" We thank you for your suggestion... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
8,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
4,
5
] | [
"j-gsx4VHuj2",
"S2sI2X6JEN9",
"lAAERBSK3U7",
"VM8rry4-Axg",
"8Au513rYhJW",
"UzQSGpZmJN",
"YzdFYsp-E1d",
"G2BEqyZRXFq",
"4IzxW01N-10",
"iOXwXA5_HRj",
"iclr_2022_a_nR4BPPJF1",
"iclr_2022_a_nR4BPPJF1",
"iclr_2022_a_nR4BPPJF1",
"iclr_2022_a_nR4BPPJF1",
"iclr_2022_a_nR4BPPJF1"
] |
iclr_2022_HmFBdvBkUUY | SpecTRA: Spectral Transformer for Graph Representation Learning | Transformers have recently been applied in the more generic domain of graphs. For the same, researchers proposed various positional and structural encoding schemes to overcome the limitation of transformers in modeling the positional invariance in graphs and graph topology. Some of these encoding techniques use the spectrum of the graph. In addition to graph topology, graph signals could be multi-channeled and contain heterogeneous information. We argue that transformers cannot model multichannel signals inherently spread over the graph spectrum. To this end, we propose SpecTRA, a novel approach that induces a spectral module into the transformer architecture to enable decomposition of graph spectrum and selectively learn useful information akin to filtering in the frequency domain. Results on standard benchmark datasets show that the proposed method performs comparably or better than existing transformer and GNN based architectures. | Reject | This paper proposes a novel approach to include graph information into Transformers. Reviewers expressed concerns on 2 main issues -
1) The exact architecture proposed in the paper is not well motivated. In words of one of the reviewers 'I still do not understand why the authors learn the spectral GCN filter weights from the attention matrix of the transformer, which can have a completely different sparsity pattern than the input graph, instead of learning the filter weights from the graph itself, e.g., by using a GNN. '. Authors tried to provide an explanation in the response however I think it needs to be made much more rigorous for it to be well motivated.
2) The interplay between existing position encoding schemes and the proposed method. This point also confused couple of reviewers as the empirical results seem to be strongly influenced by the choices of position encoding. Authors, I think did a great job in addressing this concern by providing additional results during the response period.
Given the weak experimental results and lack of clear motivations I think the paper is not currently ready for acceptance. | test | [
"_fsu1O5oFdn",
"isgxSIOnOIK",
"nnc8sTe0PDy",
"foq6Z8zYphd",
"bQJ_gmKgl53",
"6fUVugYB60",
"5a3QUr4Sg8B",
"-wYBYQZPBor",
"hHndaA6SiHF",
"ORafWYpvMcd",
"eR4wu0eOCTk",
"l97LjshnxTU",
"iNBBXV8QtIa",
"O3x0o3-drei"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer VYTd,\n\nWe thank you again for taking the time to review this work. We put our best efforts to prepare the rebuttal to your questions. We would very much appreciate it if you could engage with us with your feedback on our rebuttal. We would be glad to answer any further questions and clarify any co... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
5,
3,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
-1,
-1,
-1,
4,
5,
4
] | [
"O3x0o3-drei",
"iNBBXV8QtIa",
"-wYBYQZPBor",
"l97LjshnxTU",
"-wYBYQZPBor",
"hHndaA6SiHF",
"iclr_2022_HmFBdvBkUUY",
"iclr_2022_HmFBdvBkUUY",
"O3x0o3-drei",
"iNBBXV8QtIa",
"l97LjshnxTU",
"iclr_2022_HmFBdvBkUUY",
"iclr_2022_HmFBdvBkUUY",
"iclr_2022_HmFBdvBkUUY"
] |
iclr_2022_MAYipnUpHHD | Reinforcement Learning for Adaptive Mesh Refinement | Large-scale finite element simulations of complex physical systems governed by partial differential equations (PDE) crucially depend on adaptive mesh refinement (AMR) to allocate computational budget to regions where higher resolution is required. Existing scalable AMR methods make heuristic refinement decisions based on instantaneous error estimation and thus do not aim for long-term optimality over an entire simulation. We propose a novel formulation of AMR as a Markov decision process and apply deep reinforcement learning (RL) to train refinement {\it policies} directly from simulation. AMR poses a new problem for RL as both the state dimension and available action set changes at every step, which we solve by proposing new policy architectures with differing generality and inductive bias. The model sizes of these policy architectures are independent of the mesh size and hence can be deployed on larger simulations than those used at train time. We demonstrate in comprehensive experiments on static function estimation and time-dependent equations that RL policies can be trained on problems without using ground truth solutions, are competitive with a widely-used error estimator, and generalize to larger, more complex, and unseen test problems. | Reject | This work formulates the Adaptive Mesh Refinement (AMR) problem used in solving Finite Element Method (FEM) as an MDP, and suggests an RL-based solution for it. Most reviewers agree that this is a novel problem and the solution is promising. There are, however, several issues raised by our reviewers, who have expertise ranging from ML to computational methods to solve PDEs. Some of the concerns are:
- As this is not a theoretical work, the burden of proof is on the empirical evaluations. Some reviewers found the experiments very small and not convincing enough.
- The paper does not compare with the state of the art AMR methods.
- The detail of how the problem is formulated as an MDP can be improved.
Given that four out of five reviewers are on the negative side, unfortunately I cannot recommend acceptance of this paper in its current form. Nevertheless, I believe this is a promising application of RL. I'd like to encourage the authors to consider the reviews in order to improve their work, and resubmit it to another venue. | test | [
"8PQ8ciuzgeU",
"n2kBJiWGx7D",
"P4o9CBAMWCH",
"j60eeGX8zAg",
"iFLpXLajcD",
"g7XyskXHYKd",
"uSrzQCOQ5Lv",
"piU6KxjTMKt",
"MrKNfXsA_Y",
"mHyGgxczkS",
"b2scqlTkWic",
"6kJ_TZuODz7",
"5ijqn6ZqY6t",
"00pe-ESDDnJ",
"5P5JpXdKMsd",
"IiqgYA_2sdh",
"tbirxs8i2b-",
"PyjilJnWWXI",
"_xBP9kf3ibm"... | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"... | [
"This paper proposes using reinforcement learning algorithms to train a few variants of neural network-based models to perform adaptive mesh refinement. These models predict for each node in a mesh where a refinement operation should be performed. This paper is mostly clearly written, and has a conceptually simple ... | [
5,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2022_MAYipnUpHHD",
"iclr_2022_MAYipnUpHHD",
"g7XyskXHYKd",
"uSrzQCOQ5Lv",
"iclr_2022_MAYipnUpHHD",
"piU6KxjTMKt",
"5P5JpXdKMsd",
"MrKNfXsA_Y",
"mHyGgxczkS",
"6kJ_TZuODz7",
"00pe-ESDDnJ",
"BQ1tLGE5KN",
"iclr_2022_MAYipnUpHHD",
"PyjilJnWWXI",
"f04avUDEY1F",
"iclr_2022_MAYipnUpHHD",... |
iclr_2022_KmNHWX9H7Kf | Uniform Generalization Bounds for Overparameterized Neural Networks | An interesting observation in artificial neural networks is their favorable generalization error despite typically being extremely overparameterized. It is well known that the classical statistical learning methods often result in vacuous generalization errors in the case of overparameterized neural networks. Adopting the recently developed Neural Tangent (NT) kernel theory, we prove uniform generalization bounds for overparameterized neural networks in kernel regimes, when the true data generating model belongs to the reproducing kernel Hilbert space (RKHS) corresponding to the NT kernel. Importantly, our bounds capture the exact error rates depending on the differentiability of the activation functions. In order to establish these bounds, we propose the information gain of the NT kernel as a measure of complexity of the learning problem. Our analysis uses a Mercer decomposition of the NT kernel in the basis of spherical harmonics and the decay rate of the corresponding eigenvalues. As a byproduct of our results, we show the equivalence between the RKHS corresponding to the NT kernel and its counterpart corresponding to the Matérn family of kernels, showing the NT kernels induce a very general class of models. We further discuss the implications of our analysis for some recent results on the regret bounds for reinforcement learning and bandit algorithms, which use overparameterized neural networks. | Reject | The paper provides a uniform generalization bound for overparameterized neural networks using the notion of maximal information gain. The analysis relies on the eigendecay of the eigenvalues of the NTK, which has recently been the object of a lot of work in the literature, including the work of Bietti and Bach (the proof actually uses one of their key lemma).
The paper originally received a set of reviews with a large disagreement between the reviewers (including two reviewers with a negative opinion and three reviewers being more positive). After the discussion period, two reviewers kept a very negative opinion, while other reviewers slightly lowered their score. Some of the problems raised by the reviewers include the restrictions imposed on the data, a missing proof (which was eventually added by the authors), the discussion of prior work being inadequate (including for instance the differences with more classical generalization bounds), and the novelty of the analysis.
Overall, the paper clearly has some merits but some of the concerns above are too important at this stage to accept the paper. I recommend the authors address the concerns mentioned in the reviews before re-submission. | test | [
"w_pJA2AHBp8",
"ymDJLvuWwIx",
"BiqMF8N2Et1",
"pJ_pUW53HTh",
"bhIPOu_NbK",
"iBeNQMv9qcR",
"YbW2fhSbrpP",
"1B3hBrw0J8S",
"c5mO3pLX0Pk",
"NtdUAoWaAnz",
"qMBn61-HRFv"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The main contribution of this paper is to characterise the eigenvalue decay of the Neural Tangent Kernels associated with fully-connected neural networks evaluated on the hypersphere when the activation functions of the form $Relu^{s}$ where $s$ is a parameter that regulates the smoothness of the activation functi... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
1,
6
] | [
2,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
5,
3
] | [
"iclr_2022_KmNHWX9H7Kf",
"qMBn61-HRFv",
"NtdUAoWaAnz",
"w_pJA2AHBp8",
"1B3hBrw0J8S",
"1B3hBrw0J8S",
"c5mO3pLX0Pk",
"iclr_2022_KmNHWX9H7Kf",
"iclr_2022_KmNHWX9H7Kf",
"iclr_2022_KmNHWX9H7Kf",
"iclr_2022_KmNHWX9H7Kf"
] |
iclr_2022_faMcf0MDk0f | BoolNet: Streamlining Binary Neural Networks Using Binary Feature Maps | Recent works on Binary Neural Networks (BNNs) have made promising progress in narrowing the accuracy gap of BNNs to their 32-bit counterparts, often based on specialized model designs using additional 32-bit components. Furthermore, most previous BNNs use 32-bit values for feature maps and residual shortcuts, which helps to maintain the accuracy, but is not friendly to hardware accelerators with limited memory, energy, and computing resources. Thus, we raise the following question: How can accuracy and energy consumption be balanced in a BNN design? We extensively study this fundamental problem in this work and propose BoolNet: an architecture without most commonly used 32-bit components that uses 1-bit values to store feature maps. Experimental results on ImageNet demonstrate that BoolNet can achieve 63.0% Top-1 accuracy coupled with an energy reduction of 2.95x compared to recent state-of-the-art BNN architectures. Code and trained models are available at: (URL in the final version). | Reject | ### Description
The authors note that the most recent and successful in terms of accuracy binary networks are in fact combining binary and floating-point computations, in particular have residual full-precision paths, with their parameters, that connect all the way to the output. Although such paths are made lightweight, they can be a bottleneck with respect to the energy consumption, memory and latency. The paper proposes a novel binary neural network model that uses only binary operations (except in the first and last layers). It is proposed to estimate the energy efficiency of binary networks more accurately, using hardware design compilers.
### Decision
Reviewers came to a consensus that the proposed BNN architecture claimed in the paper to be the main novelty, while it is indeed quite distinct from the mainstream and best performing BNNs, does not propose novel solutions with respect to the total state of the art. This is our main reason for rejection. The paper makes a great effort in steering the development of BNNs towards more energy efficient models, by carefully estimating the potential energy consumption of different models, using specialized software for design and simulation of hardware needed to run particular models. This is proposed not as a practical solution for industry but rather as a way to measure the potential efficiency of different models. While this was recognized as a great effort, some questions remained regarding fairness of comparison and possibility to reproduce the results by non-experts in order that other developers could estimate and compare potential efficiency of their models. Additionally, it is unclear how power efficiency of a model with $k$ slices (BoolNet) differs from that of a $k$-bit quantized network.
### Details
First of all, I very much like the motivation for the work: to aim at a design of BNN that would be more efficient in terms of speed and energy and to measure that efficiency more precisely. I am not an expert in the hardware, however I do share the concern in this paper that full precision residual paths and blocks would incur a larger latency and higher amount of computation (more chip area, more energy,...). The fact that residual connections in BiRealNet make it necessary to read and write full precision feature maps to the global memory was non-obvious to me. The current state of the art reports binary and floating point operations and largely ignores the necessary memory operations, which as authors argue are the real performance bottleneck.
The main claimed contribution of the paper is the design of a novel model. This was the main point of concert. It is indeed innovative with respect to the current trend of the best performing binary networks to make a "pure" binary network. However it is hard to agree that removing some of the components from BiRealNet or models based on it and going back to simpler architectures which were used before can be a contribution. Specifically, plain non-residual BNNs were the very well-known pioneering works [r1, r2]. The fact that BN in front of binarization can be simplified is trivial and well known, e.g. [r3]. Using multiple binary activations through power-of-two coding or uniform thresholds has been considered multiple times [r2, r6, ABCNet]. Same for group-wise convolutions. I have not seen residual connections in the form of shuffle-net so far, but it can also be considered as a rather standard trick. Using stride instead of max pooling or average pooling is another well known trick. Many of these solutions are in fact used in recent works, e.g. [r4] has no skip-connections, combines BN with sign at the inference time and uses strided convolutions for downsampling. To summarize, the submission does not appear to propose novel modelling solutions relative to the total state of the art.
Furthermore, [r3] is closely related in that they investigated the energy efficiency (however looking at the energy consumption of individual operations only) and with a similar motivation developed binary networks without 32 bit residual connections. Please see also other references pointed out by reviewers.
### Unclear in the paper:
-What is the meaning of $F$ column in Fig4a for BaseNet / Boolnet?
-Why grouped convolution with input channels 256k and k groups has output of 256 channels in fig 3a.
If the outputs from each group are summed, isn't it equivalent to a full convolution 256k x 256?
-What do the authors mean by 3x3 depth-wise convolution?
- The local adaptive shifting module is discussed inside the paragraph describing MS-BConv. Is it a part of MS-BConv or not?
- It seems that with $k$ slices, there is $k$ times more bits per activation used and $k$ times more bits per weight used (because the channel width is multiplied by $k$).
It must be therefore equivalent in terms of the power consumption to a network that uses $k$ bits per activation and weight.
Using these $k$ bits to represent uniform slices rather than powers of two slices appears inferior in terms of quantization error and accuracy. Indeed, many works have successfully quantization different models down to $4$ bits without a loss in accuracy.
### Related work:
Rather than reviewing different methods for making networks more efficient, a deeper review of BNNs would be more helpful, in particular
looking at the works that are closer to the hardware, such as "in-memory computing", "neuromorphic computing", [r5, r6].
### Discussion
A well-justified way to measure the potential energy efficiency of BNNs would be an excellent contribution that could standardize comparison and drive the development of BNNs in the energy-efficient direction. Unfortunately,
it does not appear at the moment that non-experts in hardware and design compilers could repeat the compilation and simulation of accelerators as the authors proposed. A simplified estimation method is needed that can be used in python for any model composed of some standard blocks. The authors seem to be in a good position to propose and validate such a simplified estimator. To start with may I suggest to clarify the following questions:
- Do we need power for the operation pipelines for different operation types (cache, global memory) that are not currently used?
- Are the arithmetic operations implemented in hardware to optimize energy or the throughput?
- Can we assume that all latencies can be masked by parallelism?
- Is it a good approximation to assume that a convolution (with an efficient implementation and large enough cache) needs to read the input only once?
- Can any coordinate-wise transform be appended to the preceding transform on the fly, canceling a read-write in between?
- What is a reasonable estimate of a cost for float32 operations? It seems from the quantization literature that all such operations can be safely quantized to e.g. 8 bit representations without a loss of accuracy.
[r1] Courbariaux et al. 2016, Binarized Neural Networks: Training Neural Networks with Weights and Activations Constrained to +1 or −1]
[r2 Hubara 2018: Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations, JMLR]
[r3 Ding et al. 2019: Regularizing Activation Distribution for Training Binarized Deep Networks]
[r4 Livochka et al. Initialization and Transfer Learning of Stochastic Binary Networks From Real-Valued Ones]
[r5 Baskin et al. Streaming Architecture for Large-Scale Quantized Neural Networks on an FPGA-Based Dataflow Platform]
[r6 Umuroglu et al. FINN: A Framework for Fast, Scalable Binarized Neural Network Inference] | train | [
"rBCsI2RDpjr",
"qcaaa2eOBwo",
"I_o8VRg4LK5",
"DdbyjZihITv",
"mM-F3hc1ZY2",
"tIGYCmhI5FU",
"cHfpA8LPhU",
"8OnqCr43Q_l",
"O9Lblfktx0y",
"0sViv-jCGan",
"JHZLKRw4N1x",
"C_PIJcMB3Bw",
"3xKjZ2hMYh",
"p6bdxaYGiia"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the additional comments and clarifications. I appreciate the time the authors and the reviewers spent to discuss this manuscript. I will keep my current score.",
" >Q3:\nBaseNet/BoolNet with 32-but features can be comparable networks for ablation study to show the overhead of 32-bit features. Sinc... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
3,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
2,
5
] | [
"I_o8VRg4LK5",
"3xKjZ2hMYh",
"C_PIJcMB3Bw",
"p6bdxaYGiia",
"3xKjZ2hMYh",
"3xKjZ2hMYh",
"JHZLKRw4N1x",
"JHZLKRw4N1x",
"iclr_2022_faMcf0MDk0f",
"iclr_2022_faMcf0MDk0f",
"iclr_2022_faMcf0MDk0f",
"iclr_2022_faMcf0MDk0f",
"iclr_2022_faMcf0MDk0f",
"iclr_2022_faMcf0MDk0f"
] |
iclr_2022_Yp4sR6rmgFt | Transductive Universal Transport for Zero-Shot Action Recognition | This work addresses the problem of recognizing action categories in videos for which no training examples are available. The current state-of-the-art enables such a zero-shot recognition by learning universal mappings from videos to a shared semantic space, either trained on large-scale seen actions or on objects. While effective, universal action and object models are biased to their seen categories. Such biases are further amplified due to biases between seen and unseen categories in the semantic space. The amplified biases result in many unseen action categories simply never being selected during inference, hampering zero-shot progress. We seeks to address this limitation and introduce transductive universal transport for zero-shot action recognition. Our proposal is to re-position unseen action embeddings through transduction, \ie by using the distribution of the unlabelled test set. For universal action models, we first find an optimal mapping from unseen actions to the mapped test videos in the shared hyperspherical space. We then define target embeddings as weighted Fr\'echet means, with the weights given by the transport couplings. Finally, we re-position unseen action embeddings along the geodesic between the original and target, as a form of semantic regularization. For universal object models, we outline a weighted transport variant from unseen action embeddings to object embeddings directly. Empirically, we show that our approach directly boosts universal action and object models, resulting in state-of-the-art performance for zero-shot classification and spatio-temporal localization. | Reject | This paper was reviewed by four experts in the field and received mixed scores (1 borderline accept, 3 borderline reject). The reviewers raised their concerns on lack of novelty, unconvincing experiment, and the presentation of this paper. AC feels that this work has great potential, but needs more work to better clarify the contribution and include additional ablated study. The authors are encouraged to consider the reviewers' comments when revising the paper for submission elsewhere. | train | [
"a9d76asqecd",
"Nt4MDYUQo6y",
"aor7VeNYAwO",
"ZvJAklIZ41o",
"Y13krlh7-I",
"ozHhAqBZYhc",
"Ak1q4Y2_zJ",
"gBeaxz10uR"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for their positive comments on the novelty and effectiveness of the proposed approach. Below, we have addressed the raised concerns:\n\n**Practicality of transductive action recognition**\n\nWe agree that certain applications require evaluating one testing example at a time. In practice howe... | [
-1,
-1,
-1,
-1,
6,
5,
5,
5
] | [
-1,
-1,
-1,
-1,
2,
2,
4,
2
] | [
"gBeaxz10uR",
"Ak1q4Y2_zJ",
"ozHhAqBZYhc",
"Y13krlh7-I",
"iclr_2022_Yp4sR6rmgFt",
"iclr_2022_Yp4sR6rmgFt",
"iclr_2022_Yp4sR6rmgFt",
"iclr_2022_Yp4sR6rmgFt"
] |
iclr_2022_xOHuV8s7Yl | Two Instances of Interpretable Neural Network for Universal Approximations | This paper proposes two bottom-up interpretable neural network (NN) constructions for universal approximation, namely Triangularly-constructed NN (TNN) and Semi-Quantized Activation NN (SQANN). The notable properties are (1) resistance to catastrophic forgetting (2) existence of proof for arbitrarily high accuracies on training dataset (3) for an input x, users can identify specific samples of training data whose activation "fingerprints" are similar to that of x's activations. Users can also identify samples that are out of distribution. | Reject | This paper propose two new neural network (NN) architectures, namely TNN, and SQANN. The paper claims that these networks are resistant to catastrophic forgetting, are interpretable, and are highly accurate. While the reviewers agree that the idea of making neurons reflect training data is novel, some concerns remain post rebuttal. Most of the reviewers opine that the statements of theorems are unclear, confusing, and hard to interpret (even after the rebuttal and update), thus making it hard to appreciate the contributions of this work. Given this, we are unable to recommend this paper for acceptance at this time. We hope the authors find reviewer feedback useful. | train | [
"GFpMhU7GuDM",
"38NTdQFiRA_",
"h3HhdxIBiTk",
"M8Doci48DbD",
"xZbUJo0SIF",
"huwe5moEzHr",
"WezA71qjFv1",
"gFmOecgQeFoP",
"H7Lz9UE3OAz",
"ISNRaBVYhV",
"jawUNZNk6Jk",
"c_5aYI55Dh1",
"QtFxGSjIa2k"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" ## We understand that our review scores are very low, but here is the revision anyway.\n\nMany thanks to the reviewers. We believe our revisions and clarifications can answer most of your doubts. Apart from this post, we respond to other comments in the respective author’s responses; do read them, since they are ... | [
-1,
-1,
-1,
-1,
-1,
-1,
1,
-1,
-1,
-1,
3,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
4,
3,
4
] | [
"iclr_2022_xOHuV8s7Yl",
"jawUNZNk6Jk",
"M8Doci48DbD",
"H7Lz9UE3OAz",
"huwe5moEzHr",
"gFmOecgQeFoP",
"iclr_2022_xOHuV8s7Yl",
"QtFxGSjIa2k",
"WezA71qjFv1",
"c_5aYI55Dh1",
"iclr_2022_xOHuV8s7Yl",
"iclr_2022_xOHuV8s7Yl",
"iclr_2022_xOHuV8s7Yl"
] |
iclr_2022_y7tKDxxTo8T | Zero-Shot Recommender Systems | Performance of recommender systems (RecSys) relies heavily on the amount of training data available. This poses a chicken-and-egg problem for early-stage products, whose amount of data, in turn, relies on the performance of their RecSys. In this paper, we explore the possibility of zero-shot learning in RecSys, to enable generalization from an old dataset to an entirely new dataset. We develop an algorithm, dubbed ZEro-Shot Recommenders (ZESRec), that is trained on an old dataset and generalize to a new one where there are neither overlapping users nor overlapping items, a setting that contrasts typical cross-domain RecSys that has either overlapping users or items. Different from previous methods that use categorical item indices (i.e., item ID), ZESRec uses items' generic features, such as natural-language descriptions, product images, and videos, as their continuous indices, and therefore naturally generalizes to any unseen items. In terms of users, ZESRec builds upon recent advances on sequential RecSys to represent users using their interactions with items, thereby generalizing to unseen users as well. We study three pairs of real-world RecSys datasets and demonstrate that ZESRec can successfully enable recommendations in such a zero-shot setting, opening up new opportunities for resolving the chicken-and-egg problem for data-scarce startups or early-stage products. | Reject | The authors propose zero-shot recommendations, a scenario in which knowledge from a recommender system enables a second recommender system to provide recommendations in a new domain (i.e. new users & new items). The idea developed by the authors is to transfer knowledge through the item content information and the user behaviors.
The initial assessment of the reviewers indicated that this paper was likely not yet ready for publication. The reviewers all recognized the potential usefulness of zero-shot recommendations but argued that the implications of the proposed setup were somewhat unclear. Most notably, the reviewers raised the issue of how widely applicable this was in terms of distance between source and target domains (presumably the quality of the zero-shot recommendations depends on the distance).
The reviewers also noted that this was an application paper. This is of course within the CFP, and recommender systems papers have been published at ICLR in the past (for example one of the initial Session-based RecSys paper w. RNNs) but the potential audience for this work is somewhat lower at ICLR. I should also add that I agree with the authors that their model is novel, but it's very much tailored to this application and it was unclear to me how it might be impactful on its own. All in all, this did not play a significant role in my recommendation.
During the discussion, there were significant, yet respectful, disagreements between the authors and the reviewers. It also seems like perhaps the authors missed an important reply from reviewer hJB8 made available through their updated review (see "Reply to rebuttal"). So the discussion between reviewers and authors did not converge. Having said that, even the two most positive reviewers have scores that would make this paper a very borderline one (a 6 and a 5).
Further, I do find that reviewer's hJB8 arguments have merit and require another round of review. In particular, I think the role and effect of your simulated online scenario should be further discussed (note that I did read the new paragraph on it from your latest manuscript). For example, comparing to a baseline that can train with the data from this new domain would be useful even if at some point it ends up being an upper bound on the performance of your approach. I also found the question raised by the reviewer around the MIND results to be pertinent. Further characterizing pairs of domains in which the approach/works fails (even if empirically) would add depth to this paper.
All in all, this paper has interesting ideas and I strongly encourage the authors to provide a more thorough experimental setup that fully explores the benefits and limitations of their zero-shot approach. | train | [
"TxziRwR2POI",
"MssmJmJRqf",
"RE4kvQrmEa_",
"SXpDlD5FYte",
"vONpNcrNdCl",
"riODyMdZ1zs",
"uHBNYiouK7a",
"yIY7d8TSosL",
"4KFhxj-0NwN",
"w4XRvPgVCzt",
"y8e9vQ_6miy",
"fOvHt2ZAQw",
"H0Wj-8i9iY"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewers, we are looking forward to further discussion on how to improve our paper, and we would be grateful for any suggestions/criticisms.",
"This paper studies \"zero shot recommendation\" where source and target domain have no overlap in terms of user and items. The paper proposes to use item content ... | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2022_y7tKDxxTo8T",
"iclr_2022_y7tKDxxTo8T",
"MssmJmJRqf",
"H0Wj-8i9iY",
"fOvHt2ZAQw",
"iclr_2022_y7tKDxxTo8T",
"fOvHt2ZAQw",
"fOvHt2ZAQw",
"MssmJmJRqf",
"MssmJmJRqf",
"H0Wj-8i9iY",
"iclr_2022_y7tKDxxTo8T",
"iclr_2022_y7tKDxxTo8T"
] |
iclr_2022_FZyZiRYbdK8 | Distributionally Robust Learning for Uncertainty Calibration under Domain Shift | We propose a framework for learning calibrated uncertainties under domain shifts. We consider the case where the source (training) distribution differs significantly from the target (test) distribution. We detect such domain shifts through the use of binary domain classifier and integrate it with the task network and train them jointly end-to-end. The binary domain classifier yields a density ratio that reflects the closeness of a target (test) sample to the source (training) distribution. We employ it to adjust the uncertainty of prediction in the task network. This idea of using the density ratio is based on the distributionally robust learning (DRL) framework, which accounts for the domain shift through adversarial risk minimization. We demonstrate that our method generates calibrated uncertainties that benefit many downstream tasks, such as unsupervised domain adaptation (UDA) and semi-supervised learning (SSL). In these tasks, methods like self-training and FixMatch use uncertainties to select confident pseudo-labels for re-training. Our experiments show that the introduction of DRL leads to significant improvements in cross-domain performance. We also demonstrate that the estimated density ratios show agreement with the human selection frequencies, suggesting a match with a proxy of human perceived uncertainties. | Reject | This paper investigates the problem of uncertainty calibration under distribution shift. Based on a distributionally robust learning (DRL) framework, the paper estimates the density ratio between the source and target domains to achieve well-calibrated predictions under domain shift.
As a plug-in module, the proposed method benefits downstream tasks of unsupervised domain adaptation and semi-supervised learning in experiments on Office31, Office-Home, and VisDA-2017, demonstrating the superiority over empirical risk minimization (ERM) and the temperature scaling method measured by expected calibration error (ECE), Brier Score, and reliability plots.
After extensive interactions and discussions on the paper, the final scores were 6/5/5/5. AC considered the paper itself, and all reviews, author responses, and discussions, and reject the paper from the following concerns:
+ *Overclaimed Novelty*: This paper is mainly based on the well-established competitive distributionally robust learning (DRL) framework. The contribution of the newly-proposed regularization form that can further promote smoothed prediction and improve the calibration performance is relatively trivial. The designs of the resulting predictive form and the learning using new gradients, mentioned by the authors in the rebuttal, need further exploration and elaboration to verify its contributions.
+ *Lack of Clarifications*: Some key points mentioned by several reviewers are still not clear. For example, how well the density ratio is estimated, especially in high dimensions? Further, a positive correlation between HSF and density ratio is not enough to prove the main argument of the paper.
+ *Some statements are not well-supported*: For example, the statement that "the harder the examples are, the farther away the examples are from the source domain", claimed by the author in the rebuttal, is untenable.
In summary, this paper studies a promising research direction of uncertainty estimation, but the work cannot be accepted before addressing the reviewers' comments. I suggest the authors to substantially revise their work by incorporating all rebuttal material as well as addressing the remaining concerns. | train | [
"wGkH3U7nYAT",
"Iyi-A9Qe2Kn",
"Lc1UBZHX1Qh",
"RoPBuG6x0Km",
"rIwXoUM13RT",
"UybstqFHjja",
"lyP84yiQLaR",
"9oYxp4OzQUv",
"aQj__HvgZb5",
"TLbobNmec6v",
"tAWr_yIm08Y",
"6CAGmDlTbFo",
"CcAd4T4a9N",
"PNRFnZ-JlQY",
"rRHgkAIWfmq",
"O3N5tjy1XLf",
"9vyJ_apDG5F"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for providing such a detailed response. The authors addressed some of my concerns, except for the novelty part. I still think the contribution of this paper cannot meet the high bar of ICLR, though it may be a bit subjective. After reading reviews from other reviewers and taking a second look at the revise... | [
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
3
] | [
"TLbobNmec6v",
"O3N5tjy1XLf",
"rRHgkAIWfmq",
"UybstqFHjja",
"iclr_2022_FZyZiRYbdK8",
"tAWr_yIm08Y",
"iclr_2022_FZyZiRYbdK8",
"iclr_2022_FZyZiRYbdK8",
"9vyJ_apDG5F",
"O3N5tjy1XLf",
"rIwXoUM13RT",
"9vyJ_apDG5F",
"O3N5tjy1XLf",
"rRHgkAIWfmq",
"iclr_2022_FZyZiRYbdK8",
"iclr_2022_FZyZiRYbdK... |
iclr_2022_z2zmSDKONK | Exploring the Robustness of Distributional Reinforcement Learning against Noisy State Observations | In real scenarios, state observations that an agent observes may contain measurement errors or adversarial noises, misleading the agent to take suboptimal actions or even collapse while training. In this paper, we study the training robustness of distributional Reinforcement Learning~(RL), a class of state-of-the-art methods that estimate the whole distribution, as opposed to only the expectation, of the total return. Firstly, we propose State-Noisy Markov Decision Process~(SN-MDP) in the tabular case to incorporate both random and adversarial state observation noises, in which the contraction of both expectation-based and distributional Bellman operators is derived. Beyond SN-MDP with the function approximation, we theoretically characterize the bounded gradient norm of histogram-based distributional loss, accounting for the better training robustness of distribution RL. We also provide stricter convergence conditions of the Temporal-Difference~(TD) learning under more flexible state noises, as well as the sensitivity analysis by the leverage of influence function. Finally, extensive experiments on the suite of games show that distributional RL enjoys better training robustness compared with its expectation-based counterpart across various state observation noises. | Reject | Description of paper content:
A mixed theoretical and experimental paper that investigates the robustness of distributional RL to perturbations of state observations as compared to expectation-based value function learning. They provide sufficient conditions for TD’s convergence and prove the Lipschitz continuity of the loss of a histogram-based KL version of distributional RL with respect to the state features, whereas this is not true for expected RL. This continuity indicates a certain robustness of the loss with respect to perturbations of the state. The theory’s tie to experiment is weak in the sense that it is not predictive of the actual performance of any algorithm. The theoretical methods are based on a previously published paper SA-MDP.
Summary of paper discussion:
The reviewers raised concerns about the statistical significance of the experimental results, the clarity and organization of the writing, the novelty of the theoretical setting, and its usefulness for describing a real problem setting. The majority of reviewers rejected the paper and did not lift the scores after the rebuttal.
(I personally wonder if the community would not benefit from conducting some of these kinds of theoretical analyses and experiments on LQR systems rather than Atari (etc.) environments.) | train | [
"0DsrNJ9LCw",
"EJnWIx-xMA3",
"pBEa2BgwTl",
"9ymgvxM-VNb",
"ggcqWbJRFFi",
"vc4ujgU4I3Y",
"PX6zkjv4hT8",
"hgn7MHCEaXV",
"aPntTpjuMd",
"WPlMaNBoRQ",
"XS_RQRkl0Va",
"Hiujprwqefj",
"tDdPwVjcOZr"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We thank you for the further valuable comment.\n\nFirstly, we would say we truly understand your concern as there is indeed still some gap between theoretical analysis and real distributional RL algorithms, including analysis in the linear case but experiments on non-linear, and KL divergence to real Wasserstein ... | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6
] | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4
] | [
"pBEa2BgwTl",
"iclr_2022_z2zmSDKONK",
"aPntTpjuMd",
"EJnWIx-xMA3",
"Hiujprwqefj",
"PX6zkjv4hT8",
"WPlMaNBoRQ",
"iclr_2022_z2zmSDKONK",
"EJnWIx-xMA3",
"tDdPwVjcOZr",
"Hiujprwqefj",
"iclr_2022_z2zmSDKONK",
"iclr_2022_z2zmSDKONK"
] |
iclr_2022_cKTBRHIVjy9 | SubMix: Practical Private Prediction for Large-scale Language Models | Recent data-extraction attacks have exposed that language models can memorize some training samples verbatim. This is a vulnerability that can compromise the privacy of the model’s training data. In this work, we introduce SubMix a practical protocol for private next-token prediction designed to prevent privacy violations by language models that were fine-tuned on a private corpus after pre-training on a public corpus. We show that SubMix limits the leakage of information that is unique to any individual user in the private corpus via a relaxation of group differentially private prediction. Importantly, SubMix admits a tight, data-dependent privacy accounting mechanism, which allows it to thwart existing data-extraction attacks while maintaining the utility of the language model. SubMix is the first protocol that maintains privacy even when publicly releasing tens of thousands of next-token predictions made by large transformer-based models such as GPT-2. | Reject | This paper develops a new mechanism SubMix that provides next-token prediction under a variation of the differential privacy constraint. There is disagreement among the reviewers when assessing the quality of this work. Even though the study of private predictions in large language models is new, the reviewers raised several issues in the proposed approach. First, the formulation of partition-level DP created confusion about the privacy guarantees provided by the mechanism. Given the similarity to PATE, it might be useful to articulate if there is any difference between the privacy guarantee in this paper and the one of PATE. Second, the authors might want to further clarify the reason for having two sub-parts, which has also created some confusion. Even after reading the author's response and the updated revision, the AC still could not understand the relevant privacy argument. In summary, the paper may require further clarification and revision before it is ready for publication. | val | [
"T423q8pxuf",
"IqHs5i13hG",
"F0ASM-k2Ct8",
"5J1zRnlLtAd",
"-yipGj7JG1",
"MUjryZf1kT_",
"R4xJGijxACT",
"cCUQw53DXiX",
"JmzGFNsjxSw",
"R77rriEPo3u",
"hzk_FxaSWBU"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for taking the time to read and consider our responses. We’d like to respond again below. \n\n“Faulty assumptions in the experiments…”: We would like to emphasize that the dataset GPT2 was pre-trained on is an 40GB internet corpus (https://cdn.openai.com/better-language-models/language_models_are_unsupe... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3,
3,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
5,
4,
3
] | [
"IqHs5i13hG",
"5J1zRnlLtAd",
"hzk_FxaSWBU",
"R77rriEPo3u",
"JmzGFNsjxSw",
"cCUQw53DXiX",
"iclr_2022_cKTBRHIVjy9",
"iclr_2022_cKTBRHIVjy9",
"iclr_2022_cKTBRHIVjy9",
"iclr_2022_cKTBRHIVjy9",
"iclr_2022_cKTBRHIVjy9"
] |
iclr_2022_RNnKhz25N1O | Low-Cost Algorithmic Recourse for Users With Uncertain Cost Functions | The problem of identifying algorithmic recourse for people affected by machine learning model decisions has received much attention recently. Existing approaches for recourse generation obtain solutions using properties like diversity, proximity, sparsity, and validity. Yet, these objectives are only heuristics for what we truly care about, which is whether a user is satisfied with the recourses offered to them. Some recent works try to model user-incurred cost, which is more directly linked to user satisfaction. But they assume a single global cost function that is shared across all users. This is an unrealistic assumption when users have dissimilar preferences about their willingness to act upon a feature and different costs associated with changing that feature. In this work, we formalize the notion of user-specific cost functions and introduce a new method for identifying actionable recourses for users. By default, we assume that users' cost functions are hidden from the recourse method, though our framework allows users to partially or completely specify their preferences or cost function. We propose an objective function, Expected Minimum Cost (EMC), based on two key ideas: (1) when presenting a set of options to a user, it is vital that there is at least one low-cost solution the user could adopt; (2) when we do not know the user's true cost function, we can approximately optimize for user satisfaction by first sampling plausible cost functions, then finding a set that achieves a good cost for the user in expectation. We optimize EMC with a novel discrete optimization algorithm, Cost-Optimized Local Search (COLS), which is guaranteed to improve the recourse set quality over iterations. Experimental evaluation on popular real-world datasets with simulated user costs demonstrates that our method satisfies up to 25.89 percentage points more users compared to strong baseline methods. Using standard fairness metrics, we also show that our method can provide more fair solutions across demographic groups than comparable methods, and we verify that our method is robust to misspecification of the cost function distribution. | Reject | This paper makes an interesting contribution to the literature on algorithmic recourse. More specifically, while existing literature assumes that there is a global cost function that is applicable to all the users, this work addresses this limitation and models user specific cost functions. While the premise of this paper is interesting and novel, there are several concerns raised by the reviewers in their reviews and during the discussion: 1) While the authors allow flexibility to model user specific cost functions, they still make assumptions about the kind of cost functions. E.g., they consider three hierarchical cost sampling distributions, each of which model percentile shift, linear shift, and a mixture of these two shifts. The authors do not clearly justify why these shifts and a mixture of these shifts is reasonable. Prior work already considers lot more flexible ways of modeling cost functions (in a global fashion). For example, Rawal et. al. 2020 actually learns costs by asking users for pairwise feature comparisons. Isn't this kind of modeling allowing more flexibility than sticking to percentile/linear shifts and their mixture? 2) Several reviewers pointed out that the main paper does not clearly explain all the key contributions. While the authors have updated their draft to address some part of this concern, reviewers opine that the methods section of the paper still does not discuss the approach and the motivation for the various design choices (e.g., why a mixture of percentile and linear shifts?) clearly. 3) Reviewers also opine that some of the evaluation metrics also need more justification. For instance, Why is fraction satisfied measured at k = 1 i.e, FS@1 measured and why not FS@2 or FS@3? Will the results look different for other values of k here? 4) Given that Rawal et. al. 2020 is a close predecessor of this work, it would be important to compare with that baseline to demonstrate the efficacy of the proposed approach. This comparison is missing.
Given all the above, we are unable to recommend acceptance at this time. We hope the authors find the reviewer feedback useful. | train | [
"4hhCswxryUE",
"Ie7SL2K7Bc9",
"7LYSrKfwg06",
"a87WA-3RFrg",
"k6GS2aQdibA",
"B98bjBZz3MD",
"6JuCm6Zf6NJ",
"bZwwprIOv1",
"grctFibhksI",
"D2IUo2ijrCg",
"DmPOV4HnQiQ",
"oCbVi2w_otA",
"u-0M4E_V9XF",
"5N3Ba-AcXT3"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the comment. We are not quite sure if we understand your current concern, and it would be greatly appreciated if you can elaborate on it a bit further. We are listing down a few things we are not clear about.\n\n- What do you mean by the model family? Do you mean the distribution used to sample cost fu... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
5,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
3,
3
] | [
"Ie7SL2K7Bc9",
"B98bjBZz3MD",
"a87WA-3RFrg",
"u-0M4E_V9XF",
"DmPOV4HnQiQ",
"oCbVi2w_otA",
"u-0M4E_V9XF",
"5N3Ba-AcXT3",
"iclr_2022_RNnKhz25N1O",
"iclr_2022_RNnKhz25N1O",
"iclr_2022_RNnKhz25N1O",
"iclr_2022_RNnKhz25N1O",
"iclr_2022_RNnKhz25N1O",
"iclr_2022_RNnKhz25N1O"
] |
iclr_2022_a3mRgptHKZd | Faster No-Regret Learning Dynamics for Extensive-Form Correlated Equilibrium | A recent emerging trend in the literature on learning in games has been concerned with providing accelerated learning dynamics for correlated and coarse correlated equilibria in normal-form games. Much less is known about the significantly more challenging setting of extensive-form games, which can capture sequential and simultaneous moves, as well as imperfect information. In this paper, we develop faster no-regret learning dynamics for \textit{extensive-form correlated equilibrium (EFCE)} in multiplayer general-sum imperfect-information extensive-form games. When all agents play $T$ repetitions of the game according to the accelerated dynamics, the correlated distribution of play is an $O(T^{-3/4})$-approximate EFCE. This significantly improves over the best prior rate of $O(T^{-1/2})$. One of our conceptual contributions is to connect predictive (that is, optimistic) regret minimization with the framework of $\Phi$-regret. One of our main technical contributions is to characterize the stability of certain fixed point strategies through a refined perturbation analysis of a structured Markov chain, which may be of independent interest.
Finally, experiments on standard benchmarks corroborate our findings. | Reject | This paper builds upon existing works to prove that learning (correlated) equilibrium can be fast, i.e., faster than \sqrt{n} even in extensive form games.
Three reviewers are rather lukewarm, and one reviewer is more positive (but seems less confident in his score). The two major criticisms is that this paper is very difficult to read and that the results might seem rather incremental with respect to the literature.
I tend to agree with both points but the paper still as merits: the reason is that extensive form games are intrinsically way harder than normal form games and they more or less all have a burden of notations. We agreed that the authors actually did some efforts to make it fit within the page limit. but another a conference or a journal would have been better suited than ICLR.
Our final conclusion is that the result is interesting yet maybe not breathtaking for the ICLR community; we are fairly certain that another venue for this paper will be more appropriate and that it will be accepted in the near future (I can only suggest journals based on the large amount of content and notations, such as OR, MOR, or GEB - yet, conferences such as EC should be more scoped too) . It does not, unfortunately, reach the ICLR bar. | train | [
"LIgfCwc9d_K",
"sna1Rzqk5Jw",
"Z9C4WJ29YOy",
"Jd5QSXsCFj2",
"wydFBdjqcb1",
"f3UkfjEiL6T",
"bhZTIMtozbJ",
"6sjcbL4iXsC",
"9uk9p32L17Y",
"9oBYH-pOAkb",
"Igkm0_64Fx_",
"DApjgf5KD0k",
"guvHr6CBYfC",
"vJlhBTz9Jou"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proves a faster no-regret learning dynamics for extensive-form correlated equilibrium (EFCE) in multiplayer general-sum imperfect-information extensive-form games. When the game is played for T repetitions according to the accelerated dynamics, the correlated distribution of play for all players is an $... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
5,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
4
] | [
"iclr_2022_a3mRgptHKZd",
"Z9C4WJ29YOy",
"9uk9p32L17Y",
"wydFBdjqcb1",
"f3UkfjEiL6T",
"bhZTIMtozbJ",
"6sjcbL4iXsC",
"guvHr6CBYfC",
"vJlhBTz9Jou",
"LIgfCwc9d_K",
"DApjgf5KD0k",
"iclr_2022_a3mRgptHKZd",
"iclr_2022_a3mRgptHKZd",
"iclr_2022_a3mRgptHKZd"
] |
iclr_2022_n1BMcctC12 | Randomized Primal-Dual Coordinate Method for Large-scale Linearly Constrained Nonsmooth Nonconvex Optimization | The large-scale linearly constrained nonsmooth nonconvex optimization finds wide applications in machine learning, including non-PSD Kernel SVM, linearly constrained Lasso with nonsmooth nonconvex penalty, etc. To tackle this class of optimization problems, we propose an efficient algorithm called Nonconvex Randomized Primal-Dual Coordinate (N-RPDC) method. At each iteration, this method only randomly selects a block of primal variables to update rather than updating all the variables, which is suitable for large-scale problems. We provide two types of convergence results for N-RPDC. We first show that any cluster point of the sequence of iterates generated by N-RPDC is almost surely (i.e., with probability 1) a stationary point. In addition, we also provide an almost sure asymptotic convergence rate of $O(1/\sqrt{k})$. Next, we establish the expected $O(\varepsilon^{-2})$ iteration complexity of N-RPDC in order to drive a natural stationarity measure below $\varepsilon$ in expectation. The fundamental aspect to establishing the aforementioned convergence results is a \emph{surrogate stationarity measure} we discovered for analyzing N-RPDC. Finally, we conduct a set of experiments to show the efficacy of N-RPDC. | Reject | Dear Authors,
The paper was received nicely and discussed during the rebuttal period. However, the current consensus suggests the paper requires another round of revisions before it gets accepted.
In particular:
- it is not clear if the new method with randomization improves over the deterministic methods, either in theory and practice.
- it is not clear how the assumptions made in this work compare to the existing ones and what the implications are, in terms of applications.
Reviewers were not satisfied by the replies received during the rebuttal period.
One reviewer stated that the argument "first coordinate method for this setting" is valid, but not sufficient to justify publication at this stage.
Best
AC | train | [
"tNKUDlWMxo8",
"HqPJZ7fH1jE",
"iY_oRM8MfqR",
"6sOIODAqx4D",
"w2USOYlrdQ",
"bTztC629Ay",
"rVVR9txWFQg",
"20z8grBFJT",
"zusZ8xrjiV8",
"1g9pI6huejU",
"fyJNv4PiBxE",
"5essjkkgvs",
"PnLfo-qltm",
"1NocKzs37t",
"H5iWdWidGr2",
"1lZ-6F_AMcf",
"SEaaPg02_MK"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your time and insightful comments.\n\n$\\mbox{\\textbf{1. Complexity comparison}}$\n\nIn order to compare the complexity results between N-RPDC and that of [1], let us assume $g \\equiv 0$, otherwise the result in [1] does not apply. In this case, on the one hand, the work [1] can choose the dual s... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
3,
3
] | [
"HqPJZ7fH1jE",
"iY_oRM8MfqR",
"w2USOYlrdQ",
"w2USOYlrdQ",
"5essjkkgvs",
"SEaaPg02_MK",
"1lZ-6F_AMcf",
"H5iWdWidGr2",
"1NocKzs37t",
"1lZ-6F_AMcf",
"SEaaPg02_MK",
"H5iWdWidGr2",
"1NocKzs37t",
"iclr_2022_n1BMcctC12",
"iclr_2022_n1BMcctC12",
"iclr_2022_n1BMcctC12",
"iclr_2022_n1BMcctC12"... |
iclr_2022_7gRvcAulxa | A Frequency Perspective of Adversarial Robustness | Adversarial examples pose a unique challenge for deep learning systems. Despite recent advances in both attacks and defenses, there is still a lack of clarity and consensus in the community about the true nature and underlying properties of adversarial examples. A deep understanding of these examples can provide new insights towards the development of more effective attacks and defenses. Driven by the common misconception that adversarial examples are high-frequency noise, we present a frequency-based understanding of adversarial examples, supported by theoretical and empirical findings. Our analysis shows that adversarial examples are neither in high-frequency nor in low-frequency components, but are simply dataset dependent. Particularly, we highlight the glaring disparities between models trained on CIFAR-10 and ImageNet-derived datasets. Utilizing this framework, we analyze many intriguing properties of training robust models with frequency constraints, and propose a frequency-based explanation for the commonly observed accuracy vs. robustness trade-off. | Reject | The paper investigates adversarial examples in deep neural networks from a frequency-based perspective. Their main conclusion is that adversarial examples are neither in high- or low-frequency components, but instead depend on data. The topic is clearly important and the paper is overall clearly written and makes some interesting observations, backed up by empirical evidence.
However, the reviewers raised a number of critical concerns, including:
- Discussion of prior work is not adequate. The paper should better explain their contribution in contrast to prior work. Specifically, the authors mention Bernhard et al. (2021) as concurrent work, although the reviewers note that the work was published 5 months before. I realize the authors most likely develop their own line of work without knowing about Bernhard et al. (2021), but I would still suggest focusing more on the differences between them. I did not take this factor into account in the final decision.
- Novelty. Prior work has already shown adversarial examples are data-dependent
- Concerns about experimental setup (only investigate one particular attack, measure of average noise gradient not completely justified, ...)
After discussion, one reviewer downgraded their score and two others kept a more negative score. Only one reviewer was more positive with somewhat low confidence.
Overall, the paper is more on the reject side for now. Further work is needed and I strongly encourage the authors to clearly highlight the contributions of the paper in contrast to prior work. On the plus side, the work clearly has some potential and addresses an interesting topic. | train | [
"y7StEjtyF_t",
"2xkVouSwaOd",
"pE1hHKaDNZS",
"2whSGmOXLMl",
"zHyuEAgJZ70",
"d8386JzmJNQ",
"7ooukufumL1",
"BcEgk4EUbyA",
"_Ji5lli1cs",
"gYD3ul5suO5",
"G8LoqMphkM_",
"5L_nQD-xzx_",
"NhYX805OuA",
"Qx3MS1RsgZu"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank R2 for the response. \n\n* We kindly request R2 to denote *specific concerns* from other reviewers which R2 feels are not well addressed in rebuttal and why they were not addressed. We believe we have adequately responded to all queries and would be more than happy to continue this discussion over any po... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
8
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"pE1hHKaDNZS",
"iclr_2022_7gRvcAulxa",
"_Ji5lli1cs",
"zHyuEAgJZ70",
"G8LoqMphkM_",
"G8LoqMphkM_",
"BcEgk4EUbyA",
"5L_nQD-xzx_",
"2xkVouSwaOd",
"Qx3MS1RsgZu",
"NhYX805OuA",
"iclr_2022_7gRvcAulxa",
"iclr_2022_7gRvcAulxa",
"iclr_2022_7gRvcAulxa"
] |
iclr_2022_8svLJL54sj8 | Automatic prior selection for meta Bayesian optimization with a case study on tuning deep neural network optimizers | The performance of deep neural networks can be highly sensitive to the choice of a variety of meta-parameters, such as optimizer parameters and model hyperparameters. Tuning these well, however, often requires extensive and costly experimentation. Bayesian optimization (BO) is a principled approach to solve such expensive hyperparameter tuning problems efficiently. Key to the performance of BO is specifying and refining a distribution over functions, which is used to reason about the optima of the underlying function being optimized. In this work, we consider the scenario where we have data from similar functions that allows us to specify a tighter distribution a priori. Specifically, we focus on the common but potentially costly task of tuning optimizer parameters for training neural networks. Building on the meta BO method from Wang et al. (2018), we develop practical improvements that (a) boost its performance by leveraging tuning results on multiple tasks without requiring observations for the same meta-parameter points across all tasks, and (b) retain its regret bound for a special case of our method. As a result, we provide a coherent BO solution for iterative optimization of continuous optimizer parameters. To verify our approach in realistic model training setups, we collected a large multi-task hyperparameter tuning dataset by training tens of thousands of configurations of near-state-of-the-art models on popular image and text datasets, as well as a protein sequence dataset. Our results show that on average, our method is able to locate good hyperparameters at least 3 times more efficiently than the best competing methods. | Reject | This paper claims a practical improvement over one of earlier meta BO methods. Warm-starting BO or HPO by making use of data from past experiments or tasks seems to be interesting and useful for some applications. In fact, there are a large amount of work on this topic, but a lot of relevant prior work is ignored in this paper unfortunately. I appreciate the authors for making efforts in responding to reviewers’ comments. However, after the discussion period, most of reviewers had serious concerns in this work, pointing out that the proposed method is rather trivial and the comparison is made only against a simple baseline. It was also suggested to improve the experiments. While the idea is interesting, the paper is not ready for publication at the current stage. | train | [
"d1IEzpn0Fen",
"PqPTl_MfL2O",
"xgngFKZvUD0",
"m3xFrLYx05T",
"SlbKfSmACQo",
"62GoMBV2NzC",
"FdRlCQnZMS",
"EOYKLug5pCF",
"o2LIYqB9OjR",
"wu1zi97BW3w",
"UXxirx0Wk9P",
"pXJJFt8vzvV",
"9GOaIhix4m_",
"DsgJW48s8ud"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper is concerned with speeding up Bayesian optimization by using evaluation data from previous, related tasks defined over the same configuration space. The authors propose to model the data from each experiment (or \"task\") by independent Gaussian processes, which all share the same mean and covariance fu... | [
3,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3
] | [
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2022_8svLJL54sj8",
"wu1zi97BW3w",
"iclr_2022_8svLJL54sj8",
"SlbKfSmACQo",
"d1IEzpn0Fen",
"9GOaIhix4m_",
"iclr_2022_8svLJL54sj8",
"o2LIYqB9OjR",
"DsgJW48s8ud",
"xgngFKZvUD0",
"pXJJFt8vzvV",
"d1IEzpn0Fen",
"iclr_2022_8svLJL54sj8",
"iclr_2022_8svLJL54sj8"
] |
iclr_2022_EQmAP4F859 | The Three Stages of Learning Dynamics in High-dimensional Kernel Methods | To understand how deep learning works, it is crucial to understand the training dynamics of neural networks. Several interesting hypotheses about these dynamics have been made based on empirically observed phenomena, but there exists a limited theoretical understanding of when and why such phenomena occur.
In this paper, we consider the training dynamics of gradient flow on kernel least-squares objectives, which is a limiting dynamics of SGD trained neural networks. Using precise high-dimensional asymptotics, we characterize the dynamics of the fitted model in two “worlds”: in the Oracle World the model is trained on the population distribution and in the Empirical World the model is trained on an i.i.d finite dataset. We show that under mild conditions on the kernel and $L^2$ target regression function the training dynamics have three stages that are based on the behaviors of the models in the two worlds. Our theoretical results also mathematically formalize some interesting deep learning phenomena. Specifically, in our setting we show that SGD progressively learns more complex functions and that there is a "deep bootstrap" phenomenon: during the second stage, the test error of both worlds remain close despite the empirical training error being much smaller. Finally, we give a concrete example comparing the dynamics of two different kernels which shows that faster training is not necessary for better generalization. | Accept (Poster) | *Summary:* Study gradient flow dynamics of empirical and population square risk in kernel learning.
*Strengths:*
- Empirical results studying several cases in MSE curves.
- Explaining / solving certain phenomena in DL using kernels.
*Weaknesses:*
- More motivations would be appreciated.
- Technical innovation not so high.
*Discussion:*
Ud7D found that the main strength of this paper is the take-home message rather than innovations. They concluded 7 might be appropriate for the evaluation. This opinion was seconded by WyHh who considered 7 the most appropriate rating. 5uQz also found that 7 would be the most appropriate rating. qXRH maintained concerns about the novelty of the work and rating 5. Nonetheless, they agreed the study is valuable and would not oppose acceptance.
*Conclusion:*
Three reviewers found this paper is definitely above the acceptance threshold (suggesting rating 7) and one more reviewer found it marginally below the acceptance threshold however not opposing acceptance. I found the general impressions from the discussion well described in a comment from Ud7D, who indicates that although this is not a breakthrough paper, it is a nice paper showing that a lot of DL phenomena are can be explained by Kernels. I conclude that the paper makes a sufficiently valuable contribution and hence I am recommending accept. I suggest the authors take the reviewers’ comments carefully into account when preparing the final version of the manuscript. | train | [
"yICSHXyhhLE",
"tw7Aikr3UAo",
"d_bm4x8y-P1",
"96f_0MBX9l2",
"h7BlvfchLq7",
"Yh0D1hoavkC",
"DvKe59Cek8",
"M3ArFt6TD71",
"goHTG2TRiKl",
"NeUFdqtVc23",
"OkeQH6BEuIT",
"ur1_7zlMZho",
"G46EF-oCf49"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response! We would like to address some of the comments made.\n\n>I would find the paper much more interesting if the authors studied the differences in the learned functions for the oracle vs. the empirical world (in the spirit of Figure 2). However, the paper compares these two functions only... | [
-1,
-1,
6,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6
] | [
-1,
-1,
4,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4
] | [
"tw7Aikr3UAo",
"DvKe59Cek8",
"iclr_2022_EQmAP4F859",
"goHTG2TRiKl",
"iclr_2022_EQmAP4F859",
"M3ArFt6TD71",
"ur1_7zlMZho",
"h7BlvfchLq7",
"G46EF-oCf49",
"d_bm4x8y-P1",
"iclr_2022_EQmAP4F859",
"iclr_2022_EQmAP4F859",
"iclr_2022_EQmAP4F859"
] |
iclr_2022_cWlMII1LwTZ | Task-aware Privacy Preservation for Multi-dimensional Data | Local differential privacy (LDP), a state-of-the-art technique for privacy preservation, has been successfully deployed in a few real-world applications. In the future, LDP can be adopted to anonymize richer user data attributes that will be input to more sophisticated machine learning (ML) tasks. However, today's LDP approaches are largely task-agnostic and often lead to sub-optimal performance - they will simply inject noise to all data attributes according to a given privacy budget, regardless of what features are most relevant for an ultimate task. In this paper, we address how to significantly improve the ultimate task performance for multi-dimensional user data by considering a task-aware privacy preservation problem. The key idea is to use an encoder-decoder framework to learn (and anonymize) a task-relevant latent representation of user data, which gives an analytical near-optimal solution for a linear setting with mean-squared error (MSE) task loss. We also provide an approximate solution through a learning algorithm for general nonlinear cases. Extensive experiments demonstrate that our task-aware approach significantly improves ultimate task accuracy compared to a standard benchmark LDP approach while guaranteeing the same level of privacy. | Reject | This work considers the problem of how to predict on sensitive user points while preserving their privacy. It proposes a fairly straightforward way to create a local randomizer that optimizes loss for a given model subject to preserving LDP. The work also gives theoretical analysis of the randomizer for least squares linear regression.
The problem formulation is different from the standard LDP framework where privacy of training data points needs to be preserved. The submission does not motivate this setting and I don't see a good motivation for this problem either. More importantly, it does not sufficiently emphasize that the problem is entirely different from prior work. Indeed all reviewers were confused about various aspects of comparison with previous work. Therefore, in my opinion, the submission is not sufficiently well motivated and clearly presented to be accepted. | val | [
"Ztq8bIGcJEG",
"Gf2o7TOQwAz",
"KUnaZ0NLd3y",
"_cG-kIV2zng",
"umBvVWFRCqc",
"Vw4tosOROEj",
"Pr1WHEQgRm",
"uEbTjT12vii",
"G4MKjTlWo2v"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the insightful comments.\n\n1. Analysis for approximate LDP will be our future work, as we have mentioned in our \"Conclusion and Future Work\" section.\n\n2. Please see our overall comment for comparisons with papers provided by other reviewers.\n\n3. High-dimensional experiment is add... | [
-1,
-1,
-1,
-1,
-1,
3,
5,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
5,
3,
3,
4
] | [
"uEbTjT12vii",
"Pr1WHEQgRm",
"G4MKjTlWo2v",
"Vw4tosOROEj",
"iclr_2022_cWlMII1LwTZ",
"iclr_2022_cWlMII1LwTZ",
"iclr_2022_cWlMII1LwTZ",
"iclr_2022_cWlMII1LwTZ",
"iclr_2022_cWlMII1LwTZ"
] |
iclr_2022_Bc4fwa76mRp | Head2Toe: Utilizing Intermediate Representations for Better OOD Generalization | Transfer-learning methods aim to improve performance in a data-scarce target domain using a model pretrained on a data-rich source domain. A cost-efficient strategy, linear probing, involves freezing the source model and training a new classification head for the target domain. This strategy is outperformed by a more costly but state-of-the-art method---fine-tuning all parameters of the source model to the target domain---possibly because fine-tuning allows the model to leverage useful information from intermediate layers which is otherwise discarded by the later pretrained layers. We explore the hypothesis that these intermediate layers might be directly exploited by linear probing. We propose a method, Head-to-Toe probing (Head2Toe), that selects features from all layers of the source model to train a classification head for the target-domain. In evaluations on the VTAB, Head2Toe matches performance obtained with fine-tuning on average, but critically, for out-of-distribution transfer, Head2Toe outperforms fine-tuning. | Reject | The paper explores the usefulness of intermediate layers for linear probing, aiming at improving out-of-distribution transfer with significantly less cost than fine-tuning. Two reviewers recommended borderline acceptance, while two others recommended borderline rejection as final rating. The main concerns raised by the reviewers were the limited novelty of the proposed method (e.g., compared to Elmo), unconvincing results in the natural and structure categories of VTAB, and lack of experiments to justify the claims, as well as the demonstration of the method in other tasks beyond image classification. The rebuttal has clarified several other questions. The AC really likes the simplicity of the approach, and also finds the problem of improving the efficiency of transfer learning very important. In addition, the paper is very well-written and easy to follow, as acknowledged by all reviewers. However, the AC agrees with R2 and R3 that the paper, in its current form, does not pass the bar of ICLR, unfortunately. First, the novelty is limited, as pointed out by R1, R2, and R3. In addition to the related works mentioned in the reviews like Elmo, note that the idea of selecting intermediate features, concatenating them, and running a linear classifier for OOD transfer has also been explored in [Yunhui Guo et al, A broader study of cross-domain few-shot learning, ECCV 2020]. Second, while the approach has advantages in terms of efficiency, the accuracy drop (compared to fine-tuning) for in-domain tasks limits its applicability. Finally, even though the AC agrees with the authors this is not a requirement, a more comprehensive set of experiments on more tasks would make the paper stronger, especially given that the novelty is incremental. The authors are encouraged to improve the paper for another top conference. | test | [
"RqiQ7IpKTot",
"Vzxa8KUseeD",
"sYcce_RYj9X",
"Z6mGxqqVA1y",
"7eb1tthEiYY",
"1pkKugFMkAj",
"g2wJBSZuwKR",
"1GtYzIDObAG",
"2_szqvRRfTU",
"UVvvsBeYit2",
"kerCe-Cuatj",
"intX_2KzYe",
"YBGkGwjwGEf",
"7TOBPY13xS3",
"9LgsX85MPUB",
"VO4tsQH_wbe",
"5Z4DG3U8qFt",
"jggGLik1rB",
"J4855ExjhgQ... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
... | [
" (1) We agree that it would be a strength to show results in different domains, but given 9 page limit; it normal for conference papers to demonstrate improvements on a single domain. Adapters and DiffPruning papers only study NLP transfer, many other papers also only study image classification. Again we have resu... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
5,
4,
3
] | [
"Vzxa8KUseeD",
"sYcce_RYj9X",
"Z6mGxqqVA1y",
"7eb1tthEiYY",
"1pkKugFMkAj",
"UVvvsBeYit2",
"1GtYzIDObAG",
"2_szqvRRfTU",
"DA5FmAnIhQ",
"GPUsaX7-9q9",
"intX_2KzYe",
"9LgsX85MPUB",
"j07J8JedTYS",
"ZKIs-5CAz2v",
"m69Ws-uIVf",
"GPUsaX7-9q9",
"VO4tsQH_wbe",
"DA5FmAnIhQ",
"ajXHtaoG5T7",... |
iclr_2022_ZzwfldvDLpC | Let Your Heart Speak in its Mother Tongue: Multilingual Captioning of Cardiac Signals | Cardiac signals convey a significant amount of information about the health status of a patient. Upon recording these signals, cardiologists are expected to manually generate an accompanying report to share with physicians and patients. Generating these reports, however, can be time-consuming and error-prone, while also exhibiting a high degree of intra- and inter-physician variability. To address this, we design a neural, multilingual, cardiac signal captioning framework. In the process, we propose a discriminative multilingual representation learning method, RTLP, which randomly replaces tokens with those from a different language and tasks a network with identifying the language of all tokens. We show that RTLP performs on par with state-of-the-art pre-training methods such as MLM and MARGE, while generating more clinically accurate reports than MLM. We also show that, with RTLP, multilingual fine-tuning can be preferable to its monolingual counterpart, a phenomenon we refer to as the \textit{blessing of multilinguality}. | Reject | The paper addresses the problem of generating captions for ECG signals where they extend the task in the literature from monolingual to multilingual captions. The model proposed in this work is a variant of mask language model where they augment the target by switching the words from one language to another. In addition to predicting the actual words, they also predict the language associated of the words.
Pros
+ The problem is motivated by real world application and need.
+ The presentation is clear and the authors compare the performance of the model with appropriate recent models.
Cons
- The model has higher complexity in both training and parameters, yet achieves only comparable performance to simpler models.
- Empirical evaluation of the multi-lingual output is inadequate since the ground-truth is derived from Google Translate.
- There are also a number of other specific concerns raised by the reviewers (e.g., VuBG, HBxE).
Reviewers raised questions about several shortcomings. In response the authors updated the paper and were very engaged in the discussion with the reviewers. However, at the end of the day, the paper in its current form has serious shortcomings and left the reviewers unconvinced. I suggest the authors take advantage of the feedback from the reviewers and address them fully in their next iteration. | train | [
"7roMeddZj5",
"R5SqYYmumRF",
"utWMpvLb0Ow",
"PcyZFaj0K1h",
"E7sO0gHjqVF",
"R-lAsIj-eFF",
"EEOtzsSkJe",
"hUG9c3DJbcP",
"E6pr-RW0sN",
"yfTOFU7ti0t",
"12Pn2LsvGct",
"ekRSXGXbYwm",
"No8x_RPCDoD",
"MPEaJFq8orr"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their thorough author response, especially the additional clinical utility experiments in sections 6.3 and 6.4. The highlighted text changes were especially helpful in evaluating what new experiments were run. I do think that the paper is significantly improved, but I think that it would b... | [
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
6
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"EEOtzsSkJe",
"iclr_2022_ZzwfldvDLpC",
"R-lAsIj-eFF",
"iclr_2022_ZzwfldvDLpC",
"MPEaJFq8orr",
"R5SqYYmumRF",
"hUG9c3DJbcP",
"E6pr-RW0sN",
"No8x_RPCDoD",
"ekRSXGXbYwm",
"yfTOFU7ti0t",
"iclr_2022_ZzwfldvDLpC",
"iclr_2022_ZzwfldvDLpC",
"iclr_2022_ZzwfldvDLpC"
] |
iclr_2022_vdbidlOkeF0 | Scaling Densities For Improved Density Ratio Estimation | Estimating the discrepancy between two densities ($p$ and $q$) is central to machine learning. Most frequently used methods for the quantification of this discrepancy capture it as a function of the ratio of the densities $p/q$. In practice, closed-form expressions for these densities or their ratio are rarely available. As such, estimating density ratios accurately using only samples from $p$ and $q$ is of high significance and has led to a flurry of recent work in this direction. Among these, binary classification based density ratio estimators have shown great promise and have been extremely successful in specialized domains. However, estimating the density ratio using a binary classifier, when the samples from the densities are well separated, remains challenging. In this work, we first show that the state-of-the-art solutions for such well-separated cases have limited applicability, may suffer from theoretical inconsistencies or lack formal guarantees and therefore perform poorly in the general case. We then present an alternative framework for density ratio estimation that is motivated by the scaled-Bregman divergence. Our proposal is to scale the densities $p$ and $q$ by another density $m$ and estimate $\log p/q$ as $\log p/m - \log q/m$. We show that if the scaling measures are constructed such that they overlap with $p$ and $q$, then a single multi-class logistic regression can be trained to accurately recover $p/m$ and $q/m$ on samples from $p, q$ and $m$. We formally justify our method with the scaled-Bregman theorem and show that it does not suffer from the issues that plague the existing solutions. | Reject | The paper introduces a technique to improve density ratio estimation. This is an important problem and very relevant to the ICLR conference. The main idea is to consider density ratios with respect to intermediate distributions to “scale” the densities and make the ratios easier to estimate by training a suitable discriminative model (classifier). Reviewers found the idea interesting but there was a consensus the paper is not ready for publication. | train | [
"XwgvMKL8F85",
"IrLdiRsGc-h",
"CaXSeZEqGUV",
"OtWX7WVjwe1",
"QaoN9riKIuM",
"_fCUN5zJKAz",
"rdnBxK2FN1P",
"jfDkarre8Mu",
"nQGCLBIRI5i",
"Eb6EtC6Oste",
"XNHYO5jqhGX",
"ElA4Bnq4u7b"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for making an effort to address the concerns. I updated my score accordingly.\n\nI still think some of the points that you raised for answering reviewers' questions can be integrated better in the write up of the paper.",
"Given samples from two distirbutions, how can we estimate the KL divergence betwee... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
5
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"jfDkarre8Mu",
"iclr_2022_vdbidlOkeF0",
"iclr_2022_vdbidlOkeF0",
"_fCUN5zJKAz",
"XNHYO5jqhGX",
"Eb6EtC6Oste",
"ElA4Bnq4u7b",
"IrLdiRsGc-h",
"iclr_2022_vdbidlOkeF0",
"iclr_2022_vdbidlOkeF0",
"iclr_2022_vdbidlOkeF0",
"iclr_2022_vdbidlOkeF0"
] |
iclr_2022_LBvk4QWIUpm | Tighter Sparse Approximation Bounds for ReLU Neural Networks | A well-known line of work (Barron, 1993; Breiman, 1993; Klusowski & Barron, 2018) provides bounds on the width $n$ of a ReLU two-layer neural network needed to approximate a function $f$ over the ball $\mathcal{B}_R(\mathbb{R}^d)$ up to error $\epsilon$, when the Fourier based quantity $C_f = \int_{\mathbb{R}^d} \|\xi\|^2 |\hat{f}(\xi)| \ d\xi$ is finite. More recently Ongie et al. (2019) used the Radon transform as a tool for analysis of infinite-width ReLU two-layer networks. In particular, they introduce the concept of Radon-based $\mathcal{R}$-norms and show that a function defined on $\mathbb{R}^d$ can be represented as an infinite-width two-layer neural network if and only if its $\mathcal{R}$-norm is finite. In this work, we extend the framework of Ongie et al. (2019) and define similar Radon-based semi-norms ($\mathcal{R}, \mathcal{U}$-norms) such that a function admits an infinite-width neural network representation on a bounded open set $\mathcal{U} \subseteq \mathbb{R}^d$ when its $\mathcal{R}, \mathcal{U}$-norm is finite. Building on this, we derive sparse (finite-width) neural network approximation bounds that refine those of Breiman (1993); Klusowski & Barron (2018). Finally, we show that infinite-width neural network representations on bounded open sets are not unique and study their structure, providing a functional view of mode connectivity. | Accept (Spotlight) | The authors extend the result of Ongie et al. (2019) and derive sparseneural network approximation bounds that refine previous results. The reuslts are quite ineteresting and relevant to ICLR. All the reviewers were positive about this paper. | train | [
"TyXjeuwCXNO",
"4JNFNbvublk",
"5OE1UmLpsKe",
"0QbMhh8MaSk",
"Uk0bBy7YZs",
"euUWES78Jcx",
"GnHKpiQPrvs",
"nGLwCzwyq39"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for their suggestions and are pleased about their favorable opinion of our work.\n\n**“Are there examples where $||_{R,U}$ is substantially smaller than $||_R$ (e.g. is this true for Example 1)? Should we expect it to be much smaller for many reasonable functions?”:** \n\nThis is a very good... | [
-1,
-1,
-1,
-1,
6,
6,
8,
6
] | [
-1,
-1,
-1,
-1,
3,
2,
3,
4
] | [
"GnHKpiQPrvs",
"nGLwCzwyq39",
"euUWES78Jcx",
"Uk0bBy7YZs",
"iclr_2022_LBvk4QWIUpm",
"iclr_2022_LBvk4QWIUpm",
"iclr_2022_LBvk4QWIUpm",
"iclr_2022_LBvk4QWIUpm"
] |
iclr_2022__kJXRDyaU0X | What Would the Expert $do(\cdot)$?: Causal Imitation Learning | We develop algorithms for imitation learning from data that was corrupted by unobserved confounders. Sources of such confounding include (a) persistent perturbations to actions or (b) the expert responding to a part of the state that the learner does not have access to. When a confounder affects multiple timesteps of recorded data, it can manifest as spurious correlations between states and actions that a learner might latch onto, leading to poor policy performance. By utilizing the effect of past states on current states, we are able to break up these spurious correlations, an application of the econometric technique of instrumental variable regression. This insight leads to two novel algorithms, one of a generative-modeling flavor ($\texttt{DoubIL}$) that can utilize access to a simulator and one of a game-theoretic flavor ($\texttt{ResiduIL}$) that can be run offline. Both approaches are able to find policies that match the result of a query to an unconfounded expert. We find both algorithms compare favorably to non-causal approaches on simulated control problems. | Reject | This paper studies imitation learning from a causal inference perspective. The authors propose a method to remove the effects of confounders on expert action a using instrumental variable regression, which presumably leads to better estimation of P(a|s), and hence better imitation. The reviews were negative overall at the start. After the discussions, one reviewer stated that he would change his recommendation to accept, although his score is not changed on the review form. However, another reviewer is still not convinced that the causal formalism introduced in the paper improves over the existing RL literature. | train | [
"cTxgGsNodc_",
"qGMA1k7fDOe",
"XNkezvoDjMW",
"SL8dffqTKxK",
"dFffiEX0YLQ",
"DiGYHuzwtOK",
"U5W1o92PLCk",
"gpWWFh4RfXs",
"Yd7E2tHeeY",
"d7Qx8Nq3WVQ",
"wDwMk1EMu9",
"jOAnH-4_hwH",
"taETptvbRUA",
"4IYMB6KgtH"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The authors claim that the proposed methods are not applicable if there exists a mismatch in the input states between the learner and the expert. That is, there is no unobserved confounder affecting the expert's action. However, it has been shown in (Zhang et al., 2020) that in such cases, standard imitation lear... | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
6
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"dFffiEX0YLQ",
"iclr_2022__kJXRDyaU0X",
"d7Qx8Nq3WVQ",
"U5W1o92PLCk",
"DiGYHuzwtOK",
"Yd7E2tHeeY",
"gpWWFh4RfXs",
"4IYMB6KgtH",
"taETptvbRUA",
"jOAnH-4_hwH",
"qGMA1k7fDOe",
"iclr_2022__kJXRDyaU0X",
"iclr_2022__kJXRDyaU0X",
"iclr_2022__kJXRDyaU0X"
] |
iclr_2022_lP11WtZwquE | Language Model Pre-training on True Negatives | Discriminative pre-trained language models (PrLMs) learn to predict original texts from intentionally corrupted ones. Taking the former text as positive and the latter as negative samples, the discriminative PrLM can be trained effectively for contextualized representation. However, though the training of such a type of PrLMs highly relies on the quality of the automatically constructed samples, existing PrLMs simply treat all corrupted texts as equal negative without any examination, which actually lets the resulting model inevitably suffer from the false negative issue where training is carried out on wrong data and leads to less efficiency and less robustness in the resulting PrLMs.
Thus in this work, on the basis of defining the false negative issue in discriminative PrLMs that has been ignored for a long time, we design enhanced pre-training methods to counteract false negative predictions and encourage pre-training language models on true negatives, by correcting the harmful gradient updates subject to false negative predictions. Experimental results on GLUE and SQuAD benchmarks show that our counter-false-negative pre-training methods indeed bring about better performance together with stronger robustness. | Reject | This paper gets decent performance gains (~2% on GLUE) by soft regularization to make negatives closer to positives in contrastive learning and hard correction of too-close negatives to at least avoid synonyms. These are useful ideas which to some extent build on the simple technique of ELECTRA (controlling the size of the generator MLM in Electra encourages the negatives to in general be "close but not too close", right?). As such, the paper is correct and provides potentially useful gains, but it appears a quite small adjustment of existing techniques, and in addition the use of WordNet is fairly brittle (and its similarity calculations do not consider context at all).
The authors should be commended for the thorough job they did at updating their paper to address particular questions and concerns of reviewers, and useful new information emerged. Relative to the question of whether this method can be applied with other MLMs, the new Appendix A results do show that the answer is Yes, but the gains turn out to be much more modest (~0.5% on GLUE). However, ultimately, while this is all useful information and side experiments, these improvements just can't fix the key problem that all the reviewers felt that this paper does not provide sufficient "Technical Novelty and Significance". As such without bigger new ideas, this improved paper would probably be best as a good workshop paper.
My recommendation is that this paper not be accepted to ICLR 2022 on the basis of its limited technical novelty and significance. | train | [
"jUpC5icSO3",
"YaSstBZqO_Z",
"9qjsyoXzIJa",
"50YKJjWiG4Y",
"CmyuRdxYCoh",
"q1UFT3Qdfvs",
"LeR1Pvpi74s",
"nQYHj6U7I0b",
"bIY-GHMPSrV",
"0E68RgAUaX",
"njBSuwAez0-",
"1OJM0qvr14",
"ueLIztFF2dG"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank all the reviewers so much for the valuable comments on improving the quality of this work. We have updated the paper according to the feedback and our latest evaluations. The main modifications are highlighted with blue color in the newly updated PDF.\n\nThe revision primarily includes:\n\n1. We add a di... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
3,
3,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
4,
4
] | [
"iclr_2022_lP11WtZwquE",
"nQYHj6U7I0b",
"ueLIztFF2dG",
"1OJM0qvr14",
"njBSuwAez0-",
"0E68RgAUaX",
"bIY-GHMPSrV",
"iclr_2022_lP11WtZwquE",
"iclr_2022_lP11WtZwquE",
"iclr_2022_lP11WtZwquE",
"iclr_2022_lP11WtZwquE",
"iclr_2022_lP11WtZwquE",
"iclr_2022_lP11WtZwquE"
] |
iclr_2022_QmKblFEgQJ | DIGRAC: Digraph Clustering Based on Flow Imbalance | Node clustering is a powerful tool in the analysis of networks. We introduce a graph neural network framework to obtain node embeddings for directed networks in a self-supervised manner, including a novel probabilistic imbalance loss, which can be used for network clustering. Here, we propose directed flow imbalance measures, which are tightly related to directionality, to reveal clusters in the network even when there is no density difference between clusters. In contrast to standard approaches in the literature, in this paper, directionality is not treated as a nuisance, but rather contains the main signal. DIGRAC optimizes directed flow imbalance for clustering without requiring label supervision, unlike existing GNN methods, and can naturally incorporate node features, unlike existing spectral methods. Experimental results on synthetic data, in the form of directed stochastic block models, and real-world data at different scales, demonstrate that our method, based on flow imbalance, attains state-of-the-art results on directed graph clustering, for a wide range of noise and sparsity levels and graph structures and topologies. | Reject | The authors propose a new algorithm for clustering direct networks. The key idea behind the paper is to introduce a new flow imbalance measures and a new self-supervised GNN model to solve the task.
Overall, the paper is interesting and it introduces some new ideas although it needs additional work before being published.
In particular,
- the experiments could be improved by emphasizing more the evaluation on vol_sum/vol_max/etc metrics and by adding additional results on them
- the clarity of the experimental results should also be improved(for example, metrics / claims around Figure 4 still a bit hard to parse)
- finally, the paper would benefit by some theoretical results on the guarantees of the algorithm(most previous work in the area present interesting theoretical guarantees) | train | [
"8XqGZIKyUzr",
"9lX7JtGuiJ",
"3isYnj7Ts7N",
"z_aNmULWI6W",
"ZUlnx23nGgL",
"LmHHTJbJuLR",
"3OCcI_5y0JI",
"__F6El8mRTh",
"em9eaSzsN2",
"y6OyQXle-6Y",
"7VsYT96AKhm",
"PS2WBPO1_1F",
"sEvS17oE3tR",
"EFH8Vq8vdgj"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer,\n\nFirst of all, linking to recently published papers is trying to convince you DIGRAC is not addressing a made-up problem and this imbalance aspect is praised. Further, this validates the practicality and applicability, as we also mentioned in our initial response. Comparing with recently publishe... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"9lX7JtGuiJ",
"3isYnj7Ts7N",
"ZUlnx23nGgL",
"ZUlnx23nGgL",
"em9eaSzsN2",
"EFH8Vq8vdgj",
"iclr_2022_QmKblFEgQJ",
"sEvS17oE3tR",
"sEvS17oE3tR",
"EFH8Vq8vdgj",
"PS2WBPO1_1F",
"iclr_2022_QmKblFEgQJ",
"iclr_2022_QmKblFEgQJ",
"iclr_2022_QmKblFEgQJ"
] |
iclr_2022_7oyVOECcrt | Local Permutation Equivariance For Graph Neural Networks | In this work we develop a new method, named {\it locally permutation-equivariant graph neural networks}, which provides a framework for building graph neural networks that operate on local node neighbourhoods, through sub-graphs, while using permutation equivariant update functions. The potential benefits of learning on graph-structured data are vast, and relevant to many application domains. However, one of the challenges, is that graphs are not always of the same size, and often each node in a graph does not have the same connectivity. This necessitates that the update function must be flexible to the input size, which is not the case in most other domains.
Using our locally permutation-equivariant graph neural networks ensures an expressive update function through using permutation representations, while operating on a lower-dimensional space than that utilised in global permutation equivariance. Furthermore, the use of local update functions offers a significant improvement in GPU memory over global methods. We demonstrate that our method can outperform competing methods on a set of widely used graph benchmark classification tasks. | Reject | This paper presents a graph neural network (GNN) architecture that adopts locally permutation-equivariant constructs, which has better scalability compared to globally permutation-equivariant GNNs, and the paper claims this change also does not lose expressivity of the network. All reviewers unanimously recommended rejection, and the main issues are the clarity and writing, to the point where it becomes hard for a reader to follow the precise implementation of the proposed approach and how that compares to prior work. Therefore in its current form this paper is not yet ready for publication at ICLR. When the authors work toward the next revision I’d suggest clarifying a little more about the precise algorithmic implementation of the proposed ideas, with a bit of additional intuition from a higher-level, rather than staying at the current level of technicality. | train | [
"y2v68rAScLH",
"Rc-vu4mWLNd",
"bdBQLrwor5W",
"D4AtNNJy-jA",
"AUPXklqEtDb",
"-BTjjRjvIOb",
"Dg3l3YilnDZ",
"LoHmwg7mZjy",
"bTiH41vddVx",
"H9t2wat9aW9",
"OGlwJFUR2ky"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" This is true, [2] does test on the Collab dataset. Here we didn't test can the model run on a single graph of specific size, but could a reasonable training loop be conducted on the dataset. For the scalability experiment we tested on a single GPU and kept batch size, feature space size, etc. the same between bot... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
5,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
2,
4
] | [
"Rc-vu4mWLNd",
"Dg3l3YilnDZ",
"D4AtNNJy-jA",
"OGlwJFUR2ky",
"H9t2wat9aW9",
"LoHmwg7mZjy",
"bTiH41vddVx",
"iclr_2022_7oyVOECcrt",
"iclr_2022_7oyVOECcrt",
"iclr_2022_7oyVOECcrt",
"iclr_2022_7oyVOECcrt"
] |
iclr_2022_v-f7ifhKYps | Maximum Entropy Population Based Training for Zero-Shot Human-AI Coordination | An AI agent should be able to coordinate with humans to solve tasks. We consider the problem of training a Reinforcement Learning (RL) agent without using any human data, i.e., in a zero-shot setting, to make it capable of collaborating with humans. Standard RL agents learn through self-play. Unfortunately, these agents only know how to collaborate with themselves and normally do not perform well with unseen partners, such as humans. The methodology of how to train a robust agent in a zero-shot fashion is still subject to research. Motivated from the maximum entropy RL, we derive a centralized population entropy objective to facilitate learning of a diverse population of agents, which is later used to train a robust AI agent to collaborate with unseen partners. The proposed method shows its effectiveness compared to baseline methods, including self-play PPO, the standard Population-Based Training (PBT), and trajectory diversity-based PBT, in the popular Overcooked game environment. We also conduct online experiments with real humans and further demonstrate the efficacy of the method in the real world. | Reject | This paper has several issues:
(1) The empirical results were incomplete and hard to interpret.
1.a The paper uses non-standard benchmark domains making comparisons with results in the literature very difficult. The paper does not use the same environments as related baselines they build on. Some progress on this last point was made during the discussion period---well done authors!
1.b The experiments did not sweep key hyperparameters of the TrajeDi baseline, and generally did not comment on nor ablate several other potentially important hyperparameters and design choices
(2) the proposed method is very similar to another called TrajeDi and the paper and results don't clearly show why the modification of TrajeDi is significant (see #1). The paper initially claimed the TrajeDi was a concurrent submission but one reviewer pointed out the work was published in May
(3) writing and structure could be improved. In addition some inaccurate statements could be cleaned up
(4) The algorithm is more generally applicable beyond human-AI coordination and the reviewers found it odd the paper did not focus on this
In addition, the authors did not respond to several of the reviewers responses. This made it difficult for the reviewers to increase their scores. Several reviewers found the work intriguing, but its not ready yet | test | [
"l78MVPtuDqY",
"lEC5YVlj8Jb",
"ZdIE0lRtYG",
"BYRbsrrBydc",
"mXixxO8ZVGC",
"ZlLhmDsF43V",
"TOf894ht0fc",
"xNQs5ZNA5mT",
"ksFeJFDngwc",
"fHKtBWC0_26",
"4lYRSnx4ubr",
"n4zfWv0e1yr",
"2tLAWeXt6fa",
"bu36Otbj8Ea",
"GQw_qCuPXCU",
"O79X6ooM_5v"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose a maximum entropy objective that can be applied to population-based multi-agent training in an RL setting to create a population of multi-agent policies that can more easily cope with zero-shot introductions of \"unseen\" policies. They formulate a framework for successfully applying this objec... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
3,
6,
3
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
3,
3,
5
] | [
"iclr_2022_v-f7ifhKYps",
"n4zfWv0e1yr",
"2tLAWeXt6fa",
"mXixxO8ZVGC",
"O79X6ooM_5v",
"iclr_2022_v-f7ifhKYps",
"ksFeJFDngwc",
"iclr_2022_v-f7ifhKYps",
"fHKtBWC0_26",
"xNQs5ZNA5mT",
"GQw_qCuPXCU",
"l78MVPtuDqY",
"bu36Otbj8Ea",
"iclr_2022_v-f7ifhKYps",
"iclr_2022_v-f7ifhKYps",
"iclr_2022_... |
iclr_2022_qO-PN1zjmi_ | Novelty detection using ensembles with regularized disagreement | Despite their excellent performance on in-distribution (ID) data, deep neural networks often confidently predict on out-of-distribution (OOD) samples that come from novel classes instead of flagging them for expert evaluation. Even though conventional OOD detection algorithms can distinguish far OOD samples, current methods that can identify near OOD samples require training with labeled data that is very similar to these near OOD samples. In turn, we develop a new ensemble-based procedure for \emph{semi-supervised novelty detection} (SSND) that only utilizes a mixture of unlabeled ID and OOD samples to achieve good detection performance on near OOD data. It crucially relies on regularization to promote diversity on the OOD data while preserving agreement on ID data. Extensive comparisons of our approach to state-of-the-art SSND methods on standard image data sets (SVHN/CIFAR-10/CIFAR-100) and medical image data sets reveal significant gains with negligible increase in computational cost.
| Reject | The authors propose a semi-supervised novelty detection method which tries to identify out-of-distribution samples in the unlabeled data (consisting of in- and out-distribution samples) using a disagreement score of an ensemble. The ensemble is generated by fine-tuning the trained classiifer on the labeled training data plus the unlabeled data which all get a fixed label (which is repeated several times to generate the ensemble). The main idea is that one uses early stopping based on an in-distribution validation set in order to avoid overfitting on the unlabeled points which allows then identification of the out-distribution points via the disagreement score.
The reviewers appreciated the simplicity of the approach and the extensive experimental results. The authors did a good job in trying to answer all questions and concerns of the reviewers.
However, some concerns remained:
- the setting assumes that the OOD data is fixed which was considered as partially unrealistic and thus evaluation of the OOD detection performance on unseen OOD distributions was requested in order to understand the limitations of the method (this was only partially done by the authors).
- the theoretical result is for a two-layer network and completely based on previous work. As the authors use much deeper networks later on in the experiments, this result cannot be used to theoretically justify the approach.
- there remained concerns about the necessary diversity of the ensemble and the early stopping procedure
While I think that the paper has its merits, it is not yet ready for publication. I encourage the authors to to take into account the above points and other remaining concerns of the reviewers in a revised version. | train | [
"WZ6vFnR4vOh",
"e6NuBexRe2",
"jwvhvTdpFMN",
"F92CLylpV7m",
"QP2FGMnl7bk",
"IvfEB3WUtM",
"uayROwaqNd9",
"g5tYRwLUXb6",
"FuShoxB8u-A",
"FUl3741Z8Qi",
"eP-tb57cKwk",
"IA991uT8zni",
"rWuxgBZlFoU",
"ElhMfz_k-Z6",
"Dac2VJhk4zg",
"Io5KM60ShQJ",
"FWcM09f9By2",
"X33UHvKvXCm",
"RHdvoM_Oe4G... | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
... | [
" We thank Reviewer VFKa for responding to our rebuttal.\n\n> Practicality of the settings\n\nAs we pointed out before, we do not introduce this setting. This is a well-established problem, that appears under various different names in the literature: semi-supervised novelty detection (Scott et al, 2008; Blanchard... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"F92CLylpV7m",
"iclr_2022_qO-PN1zjmi_",
"QP2FGMnl7bk",
"ElhMfz_k-Z6",
"IA991uT8zni",
"uayROwaqNd9",
"Io5KM60ShQJ",
"p3QbPD8Va1P",
"iclr_2022_qO-PN1zjmi_",
"iclr_2022_qO-PN1zjmi_",
"iclr_2022_qO-PN1zjmi_",
"rWuxgBZlFoU",
"lCr2-0IOjq2",
"Dac2VJhk4zg",
"FUl3741Z8Qi",
"FWcM09f9By2",
"Wne... |
iclr_2022_s5lIqsrOu3Z | Closed-Loop Data Transcription to an LDR via Minimaxing Rate Reduction | This work proposes a new computational framework for automatically learning a closed-loop transcription between multi-class multi-dimensional data and a linear discriminative representation (LDR) that consists of multiple multi-dimensional linear subspaces. In particular, we argue that the optimal encoding and decoding mappings sought can be formulated as the equilibrium point of a two-player minimax game between the encoder and decoder. A natural utility function for this game is the so-called rate reduction, a simple information-theoretic measure for distances between mixtures of subspace-like Gaussians in the feature space. Our formulation avoids expensive evaluating and minimizing approximated distances between arbitrary distributions in either the data space or the feature space. To a large extent, conceptually and computationally this new formulation unifies the benefits of Auto-Encoding and GAN and naturally extends them to the settings of learning a both discriminative and generative representation for complex multi-class and multi-dimensional real-world data. Our extensive experiments on many benchmark datasets demonstrate tremendous potential of this framework: under fair comparison, visual quality of the learned decoder and classification performance of the encoder is competitive and often better than existing methods based on GAN, VAE or a combination of both. | Reject | The authors describe an approach to modeling data via an implicit representation that lives on a union of linear subspaces. While the reviewers consider the authors' approach as novel and having potential, they (and myself) consider that the exposition and notation could be improved, and that the paper as it is is hard to understand and contextualize. | train | [
"wHR0BNZEk75",
"gi_vY4JJWPKS",
"zPEmWt3qq1",
"xRElH4TJgx1",
"9b2BOqH2UbU",
"wSxRc7geXGt",
"yGtE5V9mJuk"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank all three reviewers for unanimously recognizing the novelty of our formulation. This new approach is based on a relatively recent work of MCR2 by others that may not be so familiar to the readers, we will provide more explanation in the revised version to be more self-contained. Some of you seem to have ... | [
-1,
-1,
-1,
-1,
5,
6,
3
] | [
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"iclr_2022_s5lIqsrOu3Z",
"yGtE5V9mJuk",
"wSxRc7geXGt",
"9b2BOqH2UbU",
"iclr_2022_s5lIqsrOu3Z",
"iclr_2022_s5lIqsrOu3Z",
"iclr_2022_s5lIqsrOu3Z"
] |
iclr_2022_f6CQliwyra | A Free Lunch from the Noise: Provable and Practical Exploration for Representation Learning | Representation learning lies at the heart of the empirical success of deep learning for dealing with the curse of dimensionality. However, the power of representation learning has not been fully exploited yet in reinforcement learning (RL), due to i), the trade-off between expressiveness and tractability; and ii), the coupling between exploration and representation learning. In this paper, we first reveal the fact that under some noise assumption in the stochastic control model, we can obtain the linear spectral feature of its corresponding Markov transition operator in closed-form for free. Based on this observation, we propose Spectral Dynamics Embedding (SPEDE), which breaks the trade-off and completes optimistic exploration for representation learning by exploiting the structure of the noise. We provide rigorous theoretical analysis of SPEDE, and demonstrate the practical superior performance over the existing state-of-the-art empirical algorithms on several benchmarks.
| Reject | The AC summarizes the major strengths and weaknesses of the paper pointed out by the reviewers (with possible omissions, and additions by the AC)
Strengths:
1. The paper makes an important observation that the linear MDP assumptions can be met when the true dynamics has additive noise
2. Inspired by the theory, the paper proposes a new algorithm that empirical outperforms SAC. The success of the algorithm is very interesting (and surprising to some degree.)
Weaknesses
3. Most of the reviewers and the AC thinks the representation learning perspective is questionable. If one strongly believes that the $\phi, \mu$ in the linear MDP assumption should be interpreted as representations, then yes, this paper is about representation learning in RL and the representation learning is a free lunch. However, suppose one ignores the linear MDP perspective for the moment, and only looks at the modeling assumption $s' = f^*(s,a) +\epsilon$, then $f^*$ can only be interpreted as a "dynamics model" and has nothing to do with the term "representation" that is commonly used in practice. (representation means the penultimate layer of the neural nets typically in emprical RL.) Moreover, in the theory part of the paper, the dynamics model is learned via a (standard) model-based approach---fitting $f(s,a)$ to $s'$---which also suggests that $f$ should be interpreted as a dynamics model instead of a representation. How to reconcile these two perspectives? The AC's own opinion is that this suggests we shouldn't blindly call the $\phi$ in the linear MDP formulation a representation in all scenarios. But regardless of AC's own opinion, I suspect that the paper needs to very explicitly discuss and clarify these discrepancies (instead of somewhat sweeping it under the rug and claiming the paper is about representation learning without a stronger justification.)
4. The sample efficiency depends on the Eluder dimension, which is only known to be polynomial for linear models. Recent works have shown that the Eluder dimension for even simple nonlinear models can be exponential. The analysis seems to be also quite related to previous analysis that uses the Eluder dimension. I think this fact limits the theoretical contribution of the paper.
5. There could be a better exposition of the empirical implementation in the paper. It appears that the implemented algorithm still has some major differences from the theoretical algorithms.
6. It's unclear if the paper should only compare with model-free algorithms. At least the theoretical algorithm fits $f(s,a)$ to $s'$ explicitly (in the definition of confidence region). Therefore, it does not seem to be quite fair to compare with model-free algorithms.
Given these considerations, and given that the majority of the reviewers express some concerns about various subsets of these concerns (3-6), the AC would recommend the authors revise the paper and resubmit to another top ML venue. The AC thinks that the paper contains really interesting and novel observations, but the interpretation of the observation might require more thoughts and clarification. | train | [
"Nts9OFjkqDU",
"yo0nZS8pwD",
"Gib7k2i14ek",
"hejJdggviz",
"Nm85ZuZKyX",
"MQPYQGnOsFj",
"PrtLDG-5kBZ",
"C3RVkIyDhD",
"uphOWH5e4q",
"iazfCv9m-t",
"bhpNpxb2epa",
"ljzi2H2HzKu",
"roe0o1OuAj6",
"mzEzoG6vf-X",
"Sx4OX4cQXmf",
"np8Lc3S1Kw3",
"NggitPVAUmN",
"W_lQb24mGr",
"MU-_dm4znXA",
... | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposed a practical exploration algorithm for finite-horizon RL problems and provided a theoretical guarantee for its algorithm when the transition kernel of the RL problem can be encoded with RKHS. It also conducted experiments to validate its algorithm. Strength. This paper extended the model in LC3[... | [
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
5
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"iclr_2022_f6CQliwyra",
"Gib7k2i14ek",
"hejJdggviz",
"Nm85ZuZKyX",
"MQPYQGnOsFj",
"iclr_2022_f6CQliwyra",
"MU-_dm4znXA",
"W_lQb24mGr",
"2R3-78Bilf",
"2R3-78Bilf",
"MU-_dm4znXA",
"2R3-78Bilf",
"Nts9OFjkqDU",
"W_lQb24mGr",
"NggitPVAUmN",
"MQPYQGnOsFj",
"iclr_2022_f6CQliwyra",
"iclr_2... |
iclr_2022_gf9buGzMCa | Expressiveness of Neural Networks Having Width Equal or Below the Input Dimension | The understanding about the minimum width of deep neural networks needed to ensure universal approximation for different activation functions has progressively been extended \citep{park2020minimum}. In particular, with respect to approximation on general compact sets in the input space, a network width less than or equal to the input dimension excludes universal approximation. In this work, we focus on network functions of width less than or equal to the latter critical bound. We prove a maximum principle from which we conclude that for all continuous and monotonic activation functions, universal approximation of arbitrary continuous functions is impossible on sets that coincide with the boundary of an open set plus an inner point. Conversely, we prove that in this regime, the exact fit of partially constant functions on disjoint compact sets is still possible for ReLU network functions under some conditions on the mutual location of these components. We also show that with cosine as activation function, a three layer network of width one is sufficient to approximate any function on arbitrary finite sets. | Reject | *Summary:* Study expressive power of narrow networks.
*Strengths:*
- Study the narrow setting, which is not as well studied as the wide setting.
- Some reviewers found the paper well written.
*Weaknesses:*
- Restricted class of targets and activations.
- Similar results have appeared in previous works.
*Discussion:*
99iL asked about the possibility to remove certain assumptions and the extension to other activations. Authors answer negatively to both. 99iL acknowledges the response and concludes the so-called maximum principle is the most interesting result, but also points out that similar results appear in previous work and that it would have been good to see some extensions. qHTG indicates that the paper is well written and has interesting contributions but that some of the theoretical results only apply in settings that are more restrictive than in other recent related works. Authors agree that generalizations deserve to be investigated in the context of the presented results, but point out that their principle does not apply in that case, and hence that such generalizations are out of scope. Although qHTG identifies several good aspects in this work, they maintain the overall assessment of just marginally above the threshold. PCRn finds the work very interesting but is concerned about the novelty and points out that although the work is technical, the main message is not very strong and that the extraction of insights to solve tasks is not as clear. PCRn concludes that the paper presents various relatively weak results but not a sufficiently significant message. Authors remark that some of their results constitute a mathematical tool for future works.
*Conclusion:*
One reviewer rated this work marginally below the acceptance threshold and three other marginally above. Considering the reviews and the discussion, I conclude that this paper obtains a few interesting results but leaves much for future work. Further development of the current results would make the article significantly stronger. In view of the very high quality of other submissions to the conference, I find that this article tightly misses the bar for acceptance. Therefore I recommend to reject this article. I encourage the authors to revise and resubmit. | train | [
"1TOTARW1nWU",
"OCBbyqKfZP6",
"ZIrRNS0WdMU",
"bxjva6snFKX",
"UWkOR9UYaq",
"VsUOsOEd_uW",
"vIigIDcl_vb",
"zihlhbkbEPA",
"hum-a5z8Av",
"Kbnf2Y1MN_",
"sRTjqpc7ivr"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the responses to the comments. Some of my questions are clarified. And after reading the other reviews and responses, I believe that the maximum principle is the most interesting result (but similar results appear in Beise et al 2021). It would have been good to see more results for non-monotonic fu... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
6,
6,
5
] | [
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"zihlhbkbEPA",
"Kbnf2Y1MN_",
"UWkOR9UYaq",
"iclr_2022_gf9buGzMCa",
"bxjva6snFKX",
"sRTjqpc7ivr",
"Kbnf2Y1MN_",
"hum-a5z8Av",
"iclr_2022_gf9buGzMCa",
"iclr_2022_gf9buGzMCa",
"iclr_2022_gf9buGzMCa"
] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.