paper_id stringlengths 19 21 | paper_title stringlengths 8 170 | paper_abstract stringlengths 8 5.01k | paper_acceptance stringclasses 18 values | meta_review stringlengths 29 10k | label stringclasses 3 values | review_ids list | review_writers list | review_contents list | review_ratings list | review_confidences list | review_reply_tos list |
|---|---|---|---|---|---|---|---|---|---|---|---|
iclr_2022_wTTjnvGphYj | Graph Neural Networks with Learnable Structural and Positional Representations | Graph neural networks (GNNs) have become the standard learning architectures for graphs. GNNs have been applied to numerous domains ranging from quantum chemistry, recommender systems to knowledge graphs and natural language processing. A major issue with arbitrary graphs is the absence of canonical positional information of nodes, which decreases the representation power of GNNs to distinguish e.g. isomorphic nodes and other graph symmetries. An approach to tackle this issue is to introduce Positional Encoding (PE) of nodes, and inject it into the input layer, like in Transformers. Possible graph PE are Laplacian eigenvectors. In this work, we propose to decouple structural and positional representations to make it easy for the network to learn these two essential properties. We introduce a novel generic architecture which we call \texttt{LSPE} (Learnable Structural and Positional Encodings). We investigate several sparse and fully-connected (Transformer-like) GNNs, and observe a performance increase for molecular datasets, from $1.79\%$ up to $64.14\%$ when considering learnable PE for both GNN classes. | Accept (Poster) | This work adds the positional encoding (akin to those in transformers, but adapted) to GNNs.
In their reviews, reviewers raised a number of concerns about this work, in particular, lack of novelty, lack of ablations to demonstrate the claims of the paper, lack of comparison to previous work (e.g., position-aware GNNS, Graphormer and GraphiT which would appear very related to this work), lack of motivation (e.g., the introduced positional loss do not actually improve performance), and whether the experimental results were really significant.
During the rebuttal, the authors replied to the reviews to address the concerns that they could. Unfortunately, only one reviewer elected to respond to the authors. It is disappointing that the four other reviewers did not respond and that, overall, the reviewers did not discuss this paper further.
The authors chose to highlight privately to the AC that two reviewers who scored the paper unfavourably did not respond. The authors then argued this should be taken into account in the score (presumably to make acceptance more likely)--however, two favourable reviewers also did not respond (not highlighted by the authors). I understand this kind of private request to the AC to dismiss unfavourable reviews (especially if they do not respond) is becoming common--I find it unhelpful--I can see who has and who has not responded.
Nonetheless, looking at the responses to the original concerns highlighted above, I believe the authors have adequately addressed the reviewers' concerns. Therefore I recommend acceptance, but only as a poster. | train | [
"mddeTJAtO33",
"bpxQ8HfekMg",
"1wXbBH4ne0p",
"JKxzv9VtXb7",
"royrnpAbW2s",
"qCNIG3O5M4G",
"RSXFU-uivz",
"CcegQCvF0Lb",
"gSSQrJh755",
"oou2jZSymE6",
"b8GJM_E_yXW",
"o_fPjMZMwf1",
"_gfjRRvRbG",
"RuI0yzDMwI7",
"q8RRXXMdMqL",
"6o5hCS6TlRm",
"A5zJhIqy8fi",
"Oqnl1Hgp7gJ",
"nHHQFI6pm3",... | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official... | [
" Dear Reviewer E8yv, \n\nWe thank you again for taking your time reviewing this work. We put our best efforts to prepare the rebuttal to your questions. We would very much appreciate if you could engage with us with your feedback on our rebuttal. We would be glad to answer any further questions and clarify any co... | [
-1,
-1,
-1,
8,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
8
] | [
-1,
-1,
-1,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"aXTXRQIZsD",
"cLpDt8pPEZk",
"royrnpAbW2s",
"iclr_2022_wTTjnvGphYj",
"iclr_2022_wTTjnvGphYj",
"iclr_2022_wTTjnvGphYj",
"CcegQCvF0Lb",
"oou2jZSymE6",
"iclr_2022_wTTjnvGphYj",
"b8GJM_E_yXW",
"royrnpAbW2s",
"0-hVHUnBgUd",
"RuI0yzDMwI7",
"JKxzv9VtXb7",
"6o5hCS6TlRm",
"aXTXRQIZsD",
"Oqnl1... |
iclr_2022_XHUxf5aRB3s | Dealing with Non-Stationarity in MARL via Trust-Region Decomposition | Non-stationarity is one thorny issue in cooperative multi-agent reinforcement learning (MARL). One of the reasons is the policy changes of agents during the learning process. Some existing works have discussed various consequences caused by non-stationarity with several kinds of measurement indicators. This makes the objectives or goals of existing algorithms inevitably inconsistent and disparate. In this paper, we introduce a novel notion, the $\delta$-$stationarity$ measurement, to explicitly measure the non-stationarity of a policy sequence, which can be further proved to be bounded by the KL-divergence of consecutive joint policies. A straightforward but highly non-trivial way is to control the joint policies' divergence, which is difficult to estimate accurately by imposing the trust-region constraint on the joint policy. Although it has lower computational complexity to decompose the joint policy and impose trust-region constraints on the factorized policies, simple policy factorization like mean-field approximation will lead to more considerable policy divergence, which can be considered as the trust-region decomposition dilemma. We model the joint policy as a pairwise Markov random field and propose a trust-region decomposition network (TRD-Net) based on message passing to estimate the joint policy divergence more accurately. The Multi-Agent Mirror descent policy algorithm with Trust region decomposition, called MAMT, is established by adjusting the trust-region of the local policies adaptively in an end-to-end manner. MAMT can approximately constrain the consecutive joint policies' divergence to satisfy $\delta$-stationarity and alleviate the non-stationarity problem. Our method can bring noticeable and stable performance improvement compared with baselines in cooperative tasks of different complexity. 
| Accept (Poster) | Although the initial scores of the paper were not positive, the authors managed to properly address the questions/concerns of the reviewers, and the changes they made to the paper convinced the reviewers to update their scores. This clearly shows that there were flaws in the original presentation of the paper. So, I would recommend that the authors take the reviewers' comments into account when they prepare the camera-ready version of their work. | train | [
"LjG0--wabto",
"QeQWJvGltQz",
"hNy-1HhMm1",
"jNjW0-kfqy-",
"zXAgp3Mj5g9",
"XH93qusUO4t",
"yFv2-jnpRW",
"bPyFHZSo-sA",
"UxIQtUxbUO",
"haIjcfeBks",
"3Kxv3F1nK20",
"x6Z9fnQywG",
"b4DP5NMjsZA",
"qG2DEIETrcp",
"pvHY7XBnP6R",
"1gaQiuksBSp",
"1zrepalZwkv",
"znVQmoRQ70k",
"2FI57-gY9iO",
... | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official... | [
" We appreciate the reviewer for raising the score to 6! Thanks for the valuable comments and suggestions!",
" We really appreciate the feedback from the reviewer and thanks for raising the score!",
" We appreciate Reviewer XYrN's comments and raising the score to 8!\n\nThanks to Reviewer 16B5 for raising the s... | [
-1,
-1,
-1,
6,
-1,
8,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
-1,
-1,
3,
-1,
4,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2
] | [
"jNjW0-kfqy-",
"XH93qusUO4t",
"2FI57-gY9iO",
"iclr_2022_XHUxf5aRB3s",
"bPyFHZSo-sA",
"iclr_2022_XHUxf5aRB3s",
"2FI57-gY9iO",
"haIjcfeBks",
"iclr_2022_XHUxf5aRB3s",
"qG2DEIETrcp",
"pvHY7XBnP6R",
"b4DP5NMjsZA",
"0Dfm4xwodHQ",
"pvHY7XBnP6R",
"1gaQiuksBSp",
"UxIQtUxbUO",
"znVQmoRQ70k",
... |
iclr_2022_pETy-HVvGtt | Disentanglement Analysis with Partial Information Decomposition | We propose a framework to analyze how multivariate representations disentangle ground-truth generative factors. A quantitative analysis of disentanglement has been based on metrics designed to compare how one variable explains each generative factor. Current metrics, however, may fail to detect entanglement that involves more than two variables, e.g., representations that duplicate and rotate generative factors in high dimensional spaces. In this work, we establish a framework to analyze information sharing in a multivariate representation with Partial Information Decomposition and propose a new disentanglement metric. This framework enables us to understand disentanglement in terms of uniqueness, redundancy, and synergy. We develop an experimental protocol to assess how increasingly entangled representations are evaluated with each metric and confirm that the proposed metric correctly responds to entanglement. Through experiments on variational autoencoders, we find that models with similar disentanglement scores have a variety of characteristics in entanglement, for each of which a distinct strategy may be required to obtain a disentangled representation. | Accept (Poster) | The topic of the paper is the use of partial information decomposition (PID) for the analysis of interactions in latent representations.
All reviewers ended up appreciating the paper after a good extensive discussion with the authors. The numerical investigation is somewhat on the short side. One reviewer asks for more ablation studies and one reviewer asks for more investigation on real datasets to show the advantage of the method.
The paper is borderline. The theoretical development is fine. But one could argue that the paper could benefit from some more work on the experiments. However, the main points of the method are in place and further validation of the method can be left for future contributions. | train | [
"O6H0Q2PUlOo",
"WgkIsg5SjTH",
"OBqts77mTlN",
"wq4OW_t5vD-",
"aqmxJ15b-PQ",
"jb4YDO2wkK1",
"W9PiJ6PaB8K",
"eNB_KCfOcD9",
"chk3pL7r9ks",
"2-UTFS25Wt",
"_aEwrH61iA-",
"qQAw1lePE-",
"zs4BycBF1Ks",
"EzdeHvu95C",
"5ZZ7lBzG0V",
"NxXH5PQdKwH"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the response. We would like to comment on **robustness** against training randomness. We emphasize again that the deviation caused by training time randomness is small enough to compare models (Fig.3 and 4) and hyperparameter choices (Fig.7). Note that the variance of UniBound is not consistently la... | [
-1,
-1,
8,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5
] | [
-1,
-1,
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5
] | [
"WgkIsg5SjTH",
"qQAw1lePE-",
"iclr_2022_pETy-HVvGtt",
"aqmxJ15b-PQ",
"chk3pL7r9ks",
"iclr_2022_pETy-HVvGtt",
"iclr_2022_pETy-HVvGtt",
"NxXH5PQdKwH",
"2-UTFS25Wt",
"jb4YDO2wkK1",
"eNB_KCfOcD9",
"zs4BycBF1Ks",
"5ZZ7lBzG0V",
"OBqts77mTlN",
"iclr_2022_pETy-HVvGtt",
"iclr_2022_pETy-HVvGtt"
... |
iclr_2022_AwgtcUAhBq | Domain Adversarial Training: A Game Perspective | The dominant line of work in domain adaptation has focused on learning invariant representations using domain-adversarial training. In this paper, we interpret this approach from a game theoretical perspective. Defining optimal solutions in domain-adversarial training as a local Nash equilibrium, we show that gradient descent in domain-adversarial training can violate the asymptotic convergence guarantees of the optimizer, oftentimes hindering the transfer performance. Our analysis leads us to replace gradient descent with high-order ODE solvers (i.e., Runge–Kutta), for which we derive asymptotic convergence guarantees. This family of optimizers is significantly more stable and allows more aggressive learning rates, leading to high performance gains when used as a drop-in replacement over standard optimizers. Our experiments show that in conjunction with state-of-the-art domain-adversarial methods, we achieve up to 3.5% improvement with less than of half training iterations. Our optimizers are easy to implement, free of additional parameters, and can be plugged into any domain-adversarial framework. | Accept (Poster) | Summary (from reviewer uzT5): This paper analyzes adversarial domain learning (DAL) from a game-theoretical perspective, where the optimal condition is defined as obtaining the local Nash equilibrium. From this view, the authors show that the standard optimization method in DAL can violate the asymptotic guarantees of the gradient-play dynamics, thus requiring careful tuning and small learning rates. Based on these analyses, this paper proposed to replace the existing optimization method with higher-order ordinary differential equation solvers. Both theoretical and experimental results show that the latter ODE method is more stable and allows for higher learning rates, leading to noticeable improvements in transfer performance and the number of training iterations.
All reviewers appreciated the contributions of this paper and recommended acceptance. While the methods themselves are not novel, the game perspective applied to this problem appears to be and the use of higher-order solves yield interesting theoretical and empirical improvements.
== Additional comments ==
1) For the comparison vs. game optimization algorithms (Figure 3), it would be nice to normalize the x-axis so that one "epoch" yields comparable computational cost among the different methods (as RK4 and RK2 are much more expensive than EG or GD per mini-batch). Given that EG had such bad performance there, it would not change the conclusions; but the current scaling is still quite misleading. Same comments for Figure 2.
2) Note that modern approaches for stochastic extragradient recommend using different step-sizes for the extrapolation step and the update step (see e.g. Hsieh et al. NeurIPS 2020 "Explore Aggressively, Update Conservatively: Stochastic Extragradient Methods with Variable Stepsize Scaling"). I suspect that much bigger step-sizes could be used in this case while maintaining convergence, and this version should be added to Figure 3.
3) In "Related Work | Two-Player Zero-Sum Games" -> note that Gidel et al. 2019a provided all their convergence theory and methods for stochastic variational inequalities, and thus it also applies to three-player games, unlike what seems to be implied by this paragraph. In particular, all the algorithms they investigated (Extra-Adam amongst others) could also be applied to DAL. While I can see that the specifics of the objective in DAL might be different than for GAN optimization, it would be worthwhile to acknowledge these alternative approaches more clearly, and I encourage the DAL community to investigate their performance more exhaustively for DAL than what was done in this paper. | val | [
"sqaHTzr6LUH",
"7GqHin6bLum",
"9XLWy0embGS",
"WAW6bcCU-S",
"ALso10qB8_",
"hJTeolr0wD",
"4CvV_l0qMq9",
"sFAIHPWNAAs",
"y-64Ix8UXw"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors exhibit a strong link between game theory and domain-adversarial training. They show the optimal point in the latter is a Nash equilibrium of a three players game. From this perspective, the authors show that standard approaches, like gradient descent, cannot work in this setting as the method is known... | [
8,
6,
-1,
-1,
-1,
-1,
-1,
6,
8
] | [
2,
3,
-1,
-1,
-1,
-1,
-1,
5,
3
] | [
"iclr_2022_AwgtcUAhBq",
"iclr_2022_AwgtcUAhBq",
"sFAIHPWNAAs",
"iclr_2022_AwgtcUAhBq",
"y-64Ix8UXw",
"sqaHTzr6LUH",
"7GqHin6bLum",
"iclr_2022_AwgtcUAhBq",
"iclr_2022_AwgtcUAhBq"
] |
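The row above argues for replacing gradient descent with higher-order ODE solvers (Runge–Kutta) in domain-adversarial training, since they are stable at more aggressive learning rates. The sketch below is not the paper's implementation (the paper treats a three-player game); it only illustrates the stability argument on a single-player quadratic, where a classical RK4 step on the gradient flow dx/dt = -∇f(x) converges at a step size for which plain gradient descent diverges.

```python
import numpy as np

def rk4_step(grad, x, h):
    """One classical fourth-order Runge-Kutta step for the gradient flow dx/dt = -grad(x)."""
    k1 = -grad(x)
    k2 = -grad(x + 0.5 * h * k1)
    k3 = -grad(x + 0.5 * h * k2)
    k4 = -grad(x + h * k3)
    return x + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Toy quadratic f(x) = 0.5 x^T A x, minimized at the origin.
A = np.array([[3.0, 0.0], [0.0, 1.0]])
grad = lambda x: A @ x

h = 0.8  # too large for plain gradient descent along the lambda=3 direction (GD needs h < 2/3)
x_rk4 = x_gd = np.array([1.0, -2.0])
for _ in range(50):
    x_rk4 = rk4_step(grad, x_rk4, h)
    x_gd = x_gd - h * grad(x_gd)

print(np.linalg.norm(x_rk4))  # shrinks toward 0
print(np.linalg.norm(x_gd))   # blows up
```

For a linear gradient, the RK4 update multiplies each eigendirection by 1 - hλ + (hλ)²/2 - (hλ)³/6 + (hλ)⁴/24, which stays below 1 in magnitude for hλ up to about 2.785, versus 2 for gradient descent — the source of the larger admissible learning rates.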
iclr_2022_7l1IjZVddDW | Improving Federated Learning Face Recognition via Privacy-Agnostic Clusters | The growing public concerns on data privacy in face recognition can be partly relieved by the federated learning (FL) paradigm. However, conventional FL methods usually perform poorly due to the particularity of the task, \textit{i.e.}, broadcasting class centers among clients is essential for recognition performances but leads to privacy leakage. To resolve the privacy-utility paradox, this work proposes PrivacyFace, a framework that largely improves federated learning face recognition via communicating auxiliary and privacy-agnostic information among clients. PrivacyFace mainly consists of two components: First, a practical Differentially Private Local Clustering (DPLC) mechanism is proposed to distill sanitized clusters from local class centers. Second, a consensus-aware recognition loss subsequently encourages global consensuses among clients, which ergo leads to more discriminative features. The proposed schemes are mathematically proved to be differentially private, introduce a lightweight overhead as well as yield prominent performance boosts (\textit{e.g.}, +9.63\% and +10.26\% for TAR@FAR=1e-4 on IJB-B and IJB-C respectively). Extensive experiments and ablation studies on a large-scale dataset have demonstrated the efficacy and practicability of our method. | Accept (Spotlight) | This paper received 4 quality reviews. The rebuttal and discussions were effective. All reviewers raised their ratings after the rebuttal. It finally received 3 ratings of 8, and 1 rating of 5. The AC concurs with the contributions made by this work and recommends acceptance. | train | [
"N16QvKWbk0I",
"lNkB-SYhOnS",
"ZKYad8x_I7v",
"Go8JNv3iHv1",
"D7i5dFZ9tRA",
"RA5WFWQOZc0",
"Ne_RmE23UfK",
"gLk-Ud-ma2B",
"_dgojm5rqTB",
"_R7zSyhExR4",
"txUEPe80syh",
"jsXb0jJ1Nlk",
"vwVM7K-ZPqh",
"3_OHjNScV0K",
"UnHQyiBBSHaj",
"0_HkMlIfEpH",
"L-geubtJjv",
"EXDYq1Lv6FZv",
"2UU00zKS... | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
... | [
" We are glad that most concerns are addressed and appreciate the reviewer for raising the rating.",
" The authors have answered most of my concerns and make changes. I have changed my rating to accept.",
"This work proposes a novel framework to train a face recognition network for the federated learning settin... | [
-1,
-1,
8,
-1,
-1,
-1,
8,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
-1,
-1,
4,
-1,
-1,
-1,
5,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"lNkB-SYhOnS",
"3_OHjNScV0K",
"iclr_2022_7l1IjZVddDW",
"RA5WFWQOZc0",
"_dgojm5rqTB",
"0_HkMlIfEpH",
"iclr_2022_7l1IjZVddDW",
"iclr_2022_7l1IjZVddDW",
"_R7zSyhExR4",
"txUEPe80syh",
"jsXb0jJ1Nlk",
"vwVM7K-ZPqh",
"FcBam5iZMF1",
"ZKYad8x_I7v",
"gLk-Ud-ma2B",
"Ne_RmE23UfK",
"Kv9sCHsDZAr",... |
iclr_2022_JGO8CvG5S9 | Universal Approximation Under Constraints is Possible with Transformers | Many practical problems need the output of a machine learning model to satisfy a set of constraints, $K$. Nevertheless, there is no known guarantee that classical neural network architectures can exactly encode constraints while simultaneously achieving universality. We provide a quantitative constrained universal approximation theorem which guarantees that for any non-convex compact set $K$ and any continuous function $f:\mathbb{R}^n\rightarrow K$, there is a probabilistic transformer $\hat{F}$ whose randomized outputs all lie in $K$ and whose expected output uniformly approximates $f$. Our second main result is a ``deep neural version'' of Berge's Maximum Theorem (1963). The result guarantees that given an objective function $L$, a constraint set $K$, and a family of soft constraint sets, there is a probabilistic transformer $\hat{F}$ that approximately minimizes $L$ and whose outputs belong to $K$; moreover, $\hat{F}$ approximately satisfies the soft constraints. Our results imply the first universal approximation theorem for classical transformers with exact convex constraint satisfaction. They also yield a chart-free universal approximation theorem for Riemannian manifold-valued functions subject to suitable geodesically convex constraints. | Accept (Spotlight) | The paper studies an interesting question of whether neural networks can approximate the target function while keeping the output in the constraint set. The constraint set is quite natural for e.g. multi-class classification, where the output has to stay on the probability manifold. The challenge here is that traditional universal approximation theory only guarantees that $\hat{f}(x) \approx f(x)$, but cannot guarantee that $\hat{f}(x)$ lies exactly in the same constraint set as $f(x)$.
The paper made a significant contribution to the theory of deep learning -- it is shown that the neural network can indeed approximate any regular function while keeping the output in the regular constraint set. This gives a solid backup for the representation power of neural networks in practice, to represent target functions whose outputs lie in a certain constraint set (e.g. probabilities). | test | [
"3dwVF301PVB",
"srbIudF_uGZ",
"4NNTtdEThwS",
"7LTPJL5zNi",
"aeCWht_mQ_",
"rRLLgY5AIBa",
"eaisTbVJwkC",
"s0RVao4EE5t",
"x0sifV2ZSxv",
"eYh7kL_TVwQ",
"Q3DixvZP847",
"k1qPlSol487",
"hRO8iurpqIH",
"r9RJ29bjsja",
"VX7-KPBAZp",
"jq12Qhw4_by",
"R3DCUFyhlZJ",
"7ym6vkFhIlk",
"HO1Oc0HYOUz"... | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"public",
"public",
"official_reviewer",
"official_... | [
" I have updated my review. The updated manuscript and response answers most of the critical points in my initial review.",
"The paper intends to provide a stronger type of universal approximation result of transformer models. However, more details on experiments is required with a major revision of presentation ... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
10
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4
] | [
"4NNTtdEThwS",
"iclr_2022_JGO8CvG5S9",
"aeCWht_mQ_",
"k1qPlSol487",
"7LTPJL5zNi",
"srbIudF_uGZ",
"iclr_2022_JGO8CvG5S9",
"jq12Qhw4_by",
"HO1Oc0HYOUz",
"k198GFjemD5",
"srbIudF_uGZ",
"srbIudF_uGZ",
"srbIudF_uGZ",
"FDtHDPkVDOF",
"iclr_2022_JGO8CvG5S9",
"R3DCUFyhlZJ",
"7ym6vkFhIlk",
"k... |
iclr_2022_xm6YD62D1Ub | VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning | Recent self-supervised methods for image representation learning maximize the agreement between embedding vectors produced by encoders fed with different views of the same image. The main challenge is to prevent a collapse in which the encoders produce constant or non-informative vectors. We introduce VICReg (Variance-Invariance-Covariance Regularization), a method that explicitly avoids the collapse problem with two regularizations terms applied to both embeddings separately: (1) a term that maintains the variance of each embedding dimension above a threshold, (2) a term that decorrelates each pair of variables. Unlike most other approaches to the same problem, VICReg does not require techniques such as: weight sharing between the branches, batch normalization, feature-wise normalization, output quantization, stop gradient, memory banks, etc., and achieves results on par with the state of the art on several downstream tasks. In addition, we show that our variance regularization term stabilizes the training of other methods and leads to performance improvements. | Accept (Poster) | This paper presents a self-supervised learning method for the multi-modal setting where each modality has its own feature extraction mapping, and i) the extracted features shall be close for paired data, ii) in the feature space each view has close to diagonal covariance, while iii) the scale for each feature dimension is constrained away from zero to avoid trivial features. The presentation is clear and the reviewers do not have major confusion on the methodology. There have been some discussions between the authors and reviewers, and most questions on the empirical study have been addressed by the authors with additional experiments. The remaining concern is on the novelty (difference from prior SSL methods especially Barlow-Twins) and significance. 
I think that while it is relatively straightforward to extend methods like Barlow-Twins to the multi-modal setting, I do see the value of empirically demonstrating the effectiveness of an alternative loss to the currently pervasive contrastive learning paradigm, and hence the paper is worth discussion in my opinion. In the end, the method resembles classical multi-modal methods like canonical correlation analysis, in terms of the objective (matching paired data in latent space) and constraints (uncorrelated features in each view, and a unit-scale constraint for each feature dimension); such connections should be discussed. | train | [
"jMoa9CYb-KP",
"WiUoe37MOma",
"A6_B3fxZcfJ",
"-xbnsDYDp2",
"o4pwUuLCWtW",
"v7rzfntiiFt",
"qogy2A71s4V",
"3karh1mLPiQ",
"TARZfMoUw7F"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose a Variance-Invariance-Covariance regularization technique for self-supervised learning. The loss function used in the paper consists of three terms: the invariance term encouraging samples with different view to have similar embedding; the variance term, which is a hinge loss on the variance of... | [
6,
-1,
-1,
-1,
-1,
-1,
6,
6,
3
] | [
3,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"iclr_2022_xm6YD62D1Ub",
"v7rzfntiiFt",
"TARZfMoUw7F",
"3karh1mLPiQ",
"qogy2A71s4V",
"jMoa9CYb-KP",
"iclr_2022_xm6YD62D1Ub",
"iclr_2022_xm6YD62D1Ub",
"iclr_2022_xm6YD62D1Ub"
] |
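The VICReg abstract above names three loss terms: an invariance term pulling the two views' embeddings together, a variance hinge keeping each embedding dimension's standard deviation above a threshold (preventing collapse), and a covariance term decorrelating pairs of dimensions. A minimal NumPy sketch of such a loss follows; the coefficient values, the threshold `gamma=1`, and the function names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def vicreg_loss(z_a, z_b, inv_w=25.0, var_w=25.0, cov_w=1.0, gamma=1.0, eps=1e-4):
    """Sketch of a VICReg-style loss on two batches of embeddings, each of shape (N, D)."""
    n, d = z_a.shape
    # (1) invariance: mean-squared distance between the two views' embeddings
    inv = np.mean((z_a - z_b) ** 2)
    # (2) variance: hinge pushing each dimension's std above gamma
    def var_term(z):
        std = np.sqrt(z.var(axis=0) + eps)
        return np.mean(np.maximum(0.0, gamma - std))
    var = 0.5 * (var_term(z_a) + var_term(z_b))
    # (3) covariance: penalize off-diagonal entries of each view's covariance matrix
    def cov_term(z):
        zc = z - z.mean(axis=0)
        cov = (zc.T @ zc) / (n - 1)
        off_diag = cov - np.diag(np.diag(cov))
        return np.sum(off_diag ** 2) / d
    cov = cov_term(z_a) + cov_term(z_b)
    return inv_w * inv + var_w * var + cov_w * cov

# Spread-out, decorrelated embeddings incur no loss; collapsed (constant) ones do.
z_good = np.array([[2.0, 0.0], [-2.0, 0.0], [0.0, 2.0], [0.0, -2.0]])
z_collapsed = np.ones((4, 2))
print(vicreg_loss(z_good, z_good))            # 0.0
print(vicreg_loss(z_collapsed, z_collapsed))  # large: the variance hinge fires
```

The variance hinge is what lets the method avoid collapse without negative pairs, stop-gradients, or momentum encoders, as the abstract emphasizes.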
iclr_2022_yeP_zx9vqNm | Finding Biological Plausibility for Adversarially Robust Features via Metameric Tasks | Recent work suggests that feature constraints in the training datasets of deep neural networks (DNNs) drive robustness to adversarial noise (Ilyas et al., 2019). The representations learned by such adversarially robust networks have also been shown to be more human perceptually-aligned than non-robust networks via image manipulations (Santurkar et al., 2019, Engstrom et al., 2019). Despite appearing closer to human visual perception, it is unclear if the constraints in robust DNN representations match biological constraints found in human vision. Human vision seems to rely on texture-based/summary statistic representations in the periphery, which have been shown to explain phenomena such as crowding (Balas et al., 2009) and performance on visual search tasks (Rosenholtz et al., 2012). To understand how adversarially robust optimizations/representations compare to human vision, we performed a psychophysics experiment using a metamer task similar to Freeman \& Simoncelli, 2011, Wallis et al., 2016 and Deza et al., 2019 where we evaluated how well human observers could distinguish between images synthesized to match adversarially robust representations compared to non-robust representations and a texture synthesis model of peripheral vision (Texforms a la Long et al., 2018). We found that the discriminability of robust representation and texture model images decreased to near chance performance as stimuli were presented farther in the periphery. Moreover, performance on robust and texture-model images showed similar trends within participants, while performance on non-robust representations changed minimally across the visual field. 
These results together suggest that (1) adversarially robust representations capture peripheral computation better than non-robust representations and (2) robust representations capture peripheral computation similar to current state-of-the-art texture peripheral vision models. More broadly, our findings support the idea that localized texture summary statistic representations may drive human invariance to adversarial perturbations and that the incorporation of such representations in DNNs could give rise to useful properties like adversarial robustness. | Accept (Spotlight) | This paper shows that images synthesized to match adversarially robust
representations are similar to original images to humans when viewed
peripherally. This was not true for adversarially non-robust
representations. Additionally the adversarially robust
representations were similar to the texform model image from a model
of human peripheral vision.
Reviewers increased their score a lot during the rebuttal period as
the authors provided more details on the experiments and agreed to
tone down some of the claims (especially the strong claim that the
robust representations capture peripheral computation similar to
current SOA texture peripheral vision models). As well stated by
reviewer s6dV, two representations with the same null-space are not
necessarily the same.
With reviewer scores of 8 across the board, reviewers agree that this
is interesting work that should be presented at the conference. I agree. | train | [
"sTfqBcnMEDr",
"Ug2TExPBFT2",
"fGv8ncM34Dj",
"5T2JcMFmEhx",
"OHfCS3HF4c",
"lnwBOwf6wp9",
"ekGTe9HdfbM",
"sE2FT2dZ5qi",
"ILV_405bon0",
"ONge5pv8hfF",
"uCW8fA-hJ8",
"lVGuFg6GNe3",
"iFqBu7c_2eP",
"H7FGNpf1T_k",
"BKiwxZZPZ3s",
"Xs1s-fC7lF"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper examines whether an adversarially trained (“robust”) ResNet-50 classification model might better resemble the representation of human peripheral vision than a non-adversarially trained (“non-robust”) model. It does so by means of a human perceptual experiment: test images from Restricted ImageNet (a col... | [
8,
-1,
8,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
3,
-1,
4,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2022_yeP_zx9vqNm",
"lVGuFg6GNe3",
"iclr_2022_yeP_zx9vqNm",
"OHfCS3HF4c",
"lnwBOwf6wp9",
"ekGTe9HdfbM",
"fGv8ncM34Dj",
"iclr_2022_yeP_zx9vqNm",
"ONge5pv8hfF",
"BKiwxZZPZ3s",
"Xs1s-fC7lF",
"iFqBu7c_2eP",
"sTfqBcnMEDr",
"BKiwxZZPZ3s",
"sE2FT2dZ5qi",
"iclr_2022_yeP_zx9vqNm"
] |
iclr_2022_vcUmUvQCloe | Joint Shapley values: a measure of joint feature importance | The Shapley value is one of the most widely used measures of feature importance partly as it measures a feature's average effect on a model's prediction. We introduce joint Shapley values, which directly extend Shapley's axioms and intuitions: joint Shapley values measure a set of features' average effect on a model's prediction. We prove the uniqueness of joint Shapley values, for any order of explanation. Results for games show that joint Shapley values present different insights from existing interaction indices, which assess the effect of a feature within a set of features. The joint Shapley values seem to provide sensible results in ML attribution problems. With binary features, we present a presence-adjusted global value that is more consistent with local intuitions than the usual approach. | Accept (Poster) | The reviewers think the topic is important and challenging. The results are novel, and the experimental section provides a nice illustration how the joint Shapley values can be used. However, the paper can be improved by including more real world applications and experiments. | train | [
"5Eb1qDURcuD",
"CmSOjsNF679",
"pPRqVZGsb49",
"tV9OVFQxhub",
"c_VrO3JjMgj",
"ecyLypaHmoV"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We agree that calculating exact joint Shapley value calculations is computationally intensive. As mentioned, in passing, in the Conclusions, we are hopeful that efficient sampling techniques developed for classic Shapley values can be directly extended to joint Shapley values. Indeed, as the joint Shapley is su... | [
-1,
-1,
-1,
5,
8,
8
] | [
-1,
-1,
-1,
3,
3,
2
] | [
"ecyLypaHmoV",
"c_VrO3JjMgj",
"tV9OVFQxhub",
"iclr_2022_vcUmUvQCloe",
"iclr_2022_vcUmUvQCloe",
"iclr_2022_vcUmUvQCloe"
] |
iclr_2022_BGvt0ghNgA | Lipschitz-constrained Unsupervised Skill Discovery | We study the problem of unsupervised skill discovery, whose goal is to learn a set of diverse and useful skills with no external reward. There have been a number of skill discovery methods based on maximizing the mutual information (MI) between skills and states. However, we point out that their MI objectives usually prefer static skills to dynamic ones, which may hinder the application for downstream tasks. To address this issue, we propose Lipschitz-constrained Skill Discovery (LSD), which encourages the agent to discover more diverse, dynamic, and far-reaching skills. Another benefit of LSD is that its learned representation function can be utilized for solving goal-following downstream tasks even in a zero-shot manner — i.e., without further training or complex planning. Through experiments on various MuJoCo robotic locomotion and manipulation environments, we demonstrate that LSD outperforms previous approaches in terms of skill diversity, state space coverage, and performance on seven downstream tasks including the challenging task of following multiple goals on Humanoid. Our code and videos are available at https://shpark.me/projects/lsd/. | Accept (Poster) | The paper introduces unsupervised skill discovery using Lipschitz-constrained skills. It is well-written and demonstrates the advantages in a solid experimental section. | train | [
"rg1gWtGDtnp",
"EdjUH8P62kI",
"Dpn-pvuy7OO",
"rfVn9so3G7",
"JdRBjBe0gfU",
"JLKnJLqmRNF",
"MJJwELAJmVa",
"fPym59GHSiI",
"0QXkAwK3g4C",
"nPsohyEJ73",
"nEI7l38DnOO",
"C3d2pixgr-g",
"5UOPo76kz2U",
"2v2euxg7hMV",
"hUD-_8XZIpp"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you so much for helping improve our work with active participation in the discussion and reconsidering the score!\n\nJust as a side note, we would like to highlight that discrete LSD can learn a more diverse set of behaviors than continuous LSD, such as skills reaching specific joint poses, moving, or flipp... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
6
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"EdjUH8P62kI",
"iclr_2022_BGvt0ghNgA",
"JdRBjBe0gfU",
"MJJwELAJmVa",
"JLKnJLqmRNF",
"rfVn9so3G7",
"fPym59GHSiI",
"0QXkAwK3g4C",
"EdjUH8P62kI",
"hUD-_8XZIpp",
"2v2euxg7hMV",
"5UOPo76kz2U",
"iclr_2022_BGvt0ghNgA",
"iclr_2022_BGvt0ghNgA",
"iclr_2022_BGvt0ghNgA"
] |
iclr_2022_FKp8-pIRo3y | Wish you were here: Hindsight Goal Selection for long-horizon dexterous manipulation | Complex sequential tasks in continuous-control settings often require agents to successfully traverse a set of ``narrow passages'' in their state space. Solving such tasks with a sparse reward in a sample-efficient manner poses a challenge to modern reinforcement learning (RL) due to the associated long-horizon nature of the problem and the lack of sufficient positive signal during learning.
Various tools have been applied to address this challenge. When available, large sets of demonstrations can guide agent exploration. Hindsight relabelling on the other hand does not require additional sources of information. However, existing strategies explore based on task-agnostic goal distributions, which can render the solution of long-horizon tasks impractical. In this work, we extend hindsight relabelling mechanisms to guide exploration along task-specific distributions implied by a small set of successful demonstrations. We evaluate the approach on four complex, single and dual arm, robotics manipulation tasks against strong suitable baselines. The method requires far fewer demonstrations to solve all tasks and achieves a significantly higher overall performance as task complexity increases. Finally, we investigate the robustness of the proposed solution with respect to the quality of input representations and the number of demonstrations. | Accept (Poster) | This paper proposes a method to improve the sample efficiency of the HER algorithm by sampling goals from a distribution that is learned from human demonstrations. Empirical results on a simulated robotic insertion task show that the proposed method enjoys a better sample efficiency compared to HER.
The reviewers find the paper well-written overall and the proposed idea reasonable. However, there are concerns regarding the limited novelty of the proposed method, which seems incremental. Also, the empirical evaluation suffers from a lack of diversity. The considered tasks are virtually all equivalent to an insertion task. The paper would benefit from further empirical evaluations that include tasks such as those considered in the original HER paper. | train | [
"nQ2JQW01Ib2",
"An0eX40Ihmn",
"cw_9JO6nXx",
"gTPDVp2nSs",
"Tc00_-Nz7RB",
"9I7zoQjVMF-",
"RKb_V_BNo2Y",
"Wwwckh_wAd2",
"ub-ZY-X0bpV",
"Iwj7hQmi-yy",
"RVb30dwdgrt",
"-1ZgHt4R03",
"CTIaJUbyYJ8",
"yB7t2f-eqxA",
"x6hDLOTwtxF"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Dear Authors, \n\nThank you for your response! While I agree that four tasks on the dual arm setups are diverse from a task perspective they are still on a very similar setup. Reaching, aligning and insertion are all part of the same general task of cable manipulation. The second robotic does not have the same co... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"RVb30dwdgrt",
"-1ZgHt4R03",
"RKb_V_BNo2Y",
"iclr_2022_FKp8-pIRo3y",
"gTPDVp2nSs",
"Tc00_-Nz7RB",
"9I7zoQjVMF-",
"iclr_2022_FKp8-pIRo3y",
"CTIaJUbyYJ8",
"Wwwckh_wAd2",
"yB7t2f-eqxA",
"x6hDLOTwtxF",
"Iwj7hQmi-yy",
"iclr_2022_FKp8-pIRo3y",
"iclr_2022_FKp8-pIRo3y"
] |
iclr_2022_hjd-kcpDpf2 | Maximizing Ensemble Diversity in Deep Reinforcement Learning | Modern deep reinforcement learning (DRL) has been successful in solving a range of challenging sequential decision-making problems. Most of these algorithms use an ensemble of neural networks as their backbone structure and benefit from the diversity among the neural networks to achieve optimal results. Unfortunately, the members of the ensemble can converge to the same point in either the parameter space or the representation space during the training phase, thereby losing all the leverage of an ensemble. In this paper, we describe Maximize Ensemble Diversity in Reinforcement Learning (MED-RL), a set of regularization methods inspired by economics and consensus optimization to improve diversity in ensemble-based deep reinforcement learning methods by encouraging inequality between the networks during training. We integrated MED-RL into five of the most common ensemble-based deep RL algorithms for both continuous and discrete control tasks and evaluated on six Mujoco environments and six Atari games. Our results show that MED-RL-augmented algorithms outperform their un-regularized counterparts significantly and in some cases achieved more than 300$\%$ in performance gains. | Accept (Poster) | This paper concerns ensemble methods in deep reinforcement learning, examining several such methods, and proposes to address an important issue wherein ensemble members converge on a representation of approximately the same function, either by their parameters converging to an identical point or equivalent points that give rise to the same function. The authors propose a set of regularization methods aimed at improving diversity, and benchmark these augmentations on five ensemble methods and a dozen environments.
3 of 4 reviewers generally praised the method's simplicity and generality, and found the experiments convincing. Reviewer a9sA describes it as "clearly written and easy to follow", although others found clarity lacking in parts. There was agreement among these 3 reviewers that this was an interesting problem to tackle. Reviewer TfGq notes that this method lacks theoretical justification or guarantees, but that as a largely empirical paper this is perhaps of secondary importance. Reviewers 6miY and a9sA had questions about the precise choice of metrics, hyperparameters and seeds; the resulting discussion cleared up many of these concerns.
The most critical reviewer, i4M1, disputes the existence of the phenomenon at all, saying that "Neural networks converge to different solutions given the initialization is different and multiple local minima." The remainder of i4M1's criticisms seem centered on the choice of environments and the number of seeds (also raised by other reviewers). The issue of seeds has been addressed partially and the authors have committed to strengthening their results in this regard.
Reviewer i4M1's statement on the convergence of neural networks to different minima matches a bit of dated folk wisdom about neural networks, but the AC disputes this. The authors have cited a study from before the DL era properly began that identifies this issue and Section 5 addresses these criticisms directly. In practice, modern neural networks, especially with non-saturating activations, tend to be surprisingly consistent across random seeds when trained against the same data stream, and more recent work posits that the loss landscape is less riddled with local minima than with saddle points (see e.g. Dauphin et al, 2014). _Equivalent_ minima are of course common due to scaling and permutation symmetries, but SGD has a well-documented preference for low-norm solutions in the former case, and the authors have chosen methods that would at least conceivably overcome these issues, by focusing on summary statistics of the representations rather than their precise values (and indeed, CKA is designed with these concerns in mind).
Despite i4M1's incredulity, I am inclined to agree with the majority of reviewers and view the paper as a worthwhile contribution to the body of knowledge (purely empirical though it may be) on both NN ensemble methods and DRL ensembles in particular. The introduction of measures from economics is clever and original, and the results are promising. A more exhaustive study on the entire Atari57 benchmark would have been welcome, but I can appreciate the resource problem this poses, and I find that the suite of considered environments, combined with the augmentation of 5 different DRL ensemble methods, strikes a good balance. I concur on the issue of seeds and would encourage the authors to include as many as possible for the camera-ready, but on balance would recommend acceptance. | train | [
"T8DSUdrfmz5",
"5Z48j4gzfhS",
"T_6FwbduQQ2",
"rRw-xsBjPFQ",
"2s0lpJKq7oH",
"UOHY8sd7pNO",
"_mlmAazklvj",
"hPpZ3uNBC-t",
"7WO8iwu-rSg",
"_2rmMn981VZ",
"DxiW5jicMtr",
"D9Dw4wp8xj3",
"_9XAKBADWR",
"1CEZ0XP_13rf",
"mr-Pmxo7yHC",
"NCXxbJZXX_P",
"KlPqJ6iEkTk",
"f7tGEuQyHYq"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_rev... | [
" Respected Reviewer,\n\nChecking in if we have addressed your concerns in our rebuttal? Please let us know if you have any other question. Thank you so much. ",
" I have updated my review comments (see \"Update after author's revision 20211125\" in \"Summary Of The Review:\"). ",
"In this paper, it is proposed... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
6
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"NCXxbJZXX_P",
"_9XAKBADWR",
"iclr_2022_hjd-kcpDpf2",
"2s0lpJKq7oH",
"_mlmAazklvj",
"7WO8iwu-rSg",
"hPpZ3uNBC-t",
"f7tGEuQyHYq",
"f7tGEuQyHYq",
"DxiW5jicMtr",
"D9Dw4wp8xj3",
"_9XAKBADWR",
"T_6FwbduQQ2",
"KlPqJ6iEkTk",
"NCXxbJZXX_P",
"iclr_2022_hjd-kcpDpf2",
"iclr_2022_hjd-kcpDpf2",
... |
iclr_2022_PRZoSmCinhf | Constrained Policy Optimization via Bayesian World Models | Improving sample-efficiency and safety are crucial challenges when deploying reinforcement learning in high-stakes real world applications. We propose LAMBDA, a novel model-based approach for policy optimization in safety critical tasks modeled via constrained Markov decision processes. Our approach utilizes Bayesian world models, and harnesses the resulting uncertainty to maximize optimistic upper bounds on the task objective, as well as pessimistic upper bounds on the safety constraints. We demonstrate LAMBDA's state of the art performance on the Safety-Gym benchmark suite in terms of sample efficiency and constraint violation. | Accept (Spotlight) | The paper describes a new model-based RL technique for constrained MDPs based on Bayesian world models. It improves sample efficiency and safety. The reviewers are unanimous in their recommendation for acceptance. This represents an important advance in RL. Great work! | train | [
"QjEOZu4XoO",
"In2Gct14CT",
"r4D4hq3OoR",
"piQ6-P0x5oV",
"ikf6xPST_9I",
"xlxoiQf_4Iu",
"66R00yN6NPH",
"oBvSH-8L6ht",
"dE5E1UbUI2m"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The manuscript introduces LAMBDA a Bayesian model-based policy optimization algorithm that adheres to supplied safety constraints. The approach relies on the model to simulate trajectories and therefore improve efficiency of learning and effectiveness of safety. Experiments on SG6 compare the proposed algorithm to... | [
6,
-1,
-1,
-1,
-1,
-1,
8,
8,
8
] | [
4,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"iclr_2022_PRZoSmCinhf",
"r4D4hq3OoR",
"oBvSH-8L6ht",
"dE5E1UbUI2m",
"QjEOZu4XoO",
"66R00yN6NPH",
"iclr_2022_PRZoSmCinhf",
"iclr_2022_PRZoSmCinhf",
"iclr_2022_PRZoSmCinhf"
] |
iclr_2022_1HxTO6CTkz | Unifying Likelihood-free Inference with Black-box Optimization and Beyond | Black-box optimization formulations for biological sequence design have drawn recent attention due to their promising potential impact on the pharmaceutical industry. In this work, we propose to unify two seemingly distinct worlds: likelihood-free inference and black-box optimization, under one probabilistic framework. In tandem, we provide a recipe for constructing various sequence design methods based on this framework. We show how previous optimization approaches can be "reinvented" in our framework, and further propose new probabilistic black-box optimization algorithms. Extensive experiments on sequence design applications illustrate the benefits of the proposed methodology. | Accept (Spotlight) | The paper investigates various approaches, and a unifying framework, for sequence design. There were a variety of opinions about the paper. It was felt, after discussion, that the paper would benefit from a sharper focus, and somewhat suffers from being overwhelmed by various approaches, lacking a clear narrative. But overall, all reviewers had a positive sentiment, and the paper makes a nice contribution to the growing body of work on protein design. | train | [
"4tYt6vRtTpB",
"EXdydVliLk2",
"zMTvPfbAhUc",
"Mx3Au1LhJtt",
"-2_PQuhhMw7",
"FTUaQdJ-nhn",
"9YaBX-QBuzq",
"b7vynnSSpPpU",
"FmDDD59Satb",
"jkG0udpNEP",
"KibJMfZez9_",
"6zPMduTAQje",
"_FLDJdN_9ZN",
"SAcCo0lTEkE",
"ojKuYxnoOJ1",
"jbo_2vhiic1",
"3wL73HiLy5t"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" I appreciate your comprehensive response to my review and addressing most of my comments. I have increased my rating to 'marginally above the acceptance threshold'. Unfortunately, I could not respond before the rebuttal deadline. \n\nI would appreciate if you could describe the differences or your proposed method... | [
-1,
6,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
10
] | [
-1,
4,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"zMTvPfbAhUc",
"iclr_2022_1HxTO6CTkz",
"EXdydVliLk2",
"-2_PQuhhMw7",
"b7vynnSSpPpU",
"iclr_2022_1HxTO6CTkz",
"EXdydVliLk2",
"FTUaQdJ-nhn",
"EXdydVliLk2",
"FmDDD59Satb",
"jkG0udpNEP",
"3wL73HiLy5t",
"FTUaQdJ-nhn",
"jbo_2vhiic1",
"_FLDJdN_9ZN",
"iclr_2022_1HxTO6CTkz",
"iclr_2022_1HxTO6... |
iclr_2022_nioAdKCEdXB | Likelihood Training of Schrödinger Bridge using Forward-Backward SDEs Theory | Schrödinger Bridge (SB) is an entropy-regularized optimal transport problem that has received increasing attention in deep generative modeling for its mathematical flexibility compared to the Score-based Generative Model (SGM). However, it remains unclear whether the optimization principle of SB relates to the modern training of deep generative models, which often rely on constructing log-likelihood objectives. This raises questions on the suitability of SB models as a principled alternative for generative applications. In this work, we present a novel computational framework for likelihood training of SB models grounded on Forward-Backward Stochastic Differential Equations Theory – a mathematical methodology that appeared in stochastic optimal control and transforms the optimality condition of SB into a set of SDEs. Crucially, these SDEs can be used to construct the likelihood objectives for SB that, surprisingly, generalize the ones for SGM as special cases. This leads to a new optimization principle that inherits the same SB optimality yet without losing applications of modern generative training techniques, and we show that the resulting training algorithm achieves comparable results on generating realistic images on MNIST, CelebA, and CIFAR10. Our code is available at https://github.com/ghliu/SB-FBSDE. | Accept (Poster) | The paper presents a new computational framework, grounded on Forward-Backward SDEs theory, for the log-likelihood training of Schrödinger Bridge and provides theoretical connections to score-based generative models. The presentation of the results is not satisfactory (the algorithm should be clarified in several places and the notation is not accurate, which raises doubts about the soundness of the method). The paper is thus very hard to read for non-experts on the subject.
Furthermore, some reviewers raise concerns about the similarity of this method to other algorithms that were never cited in the paper. Finally, the empirical analysis, as of now, is limited.
In the rebuttal the authors carefully addressed lots of the comments. However paper's presentation still needs to be substantially improved (de-densification of the paper would be extremely important since now the main narrative is very convoluted). The authors made several changes in the manuscript, but detailed discussion regarding training time complexity still seems to be missing (main body and the Appendix) in the new version of the manuscript, even though this was one of the main raised concerns. Overall, the manuscript requires major rewriting. Since the comments regarding the content were successfully addressed (the reviewers are satisfied with detailed answers given by the authors), the paper satisfies the conference bar and can be accepted. | train | [
"ZldHoWLv0NO",
"3CtjvF54oRZ",
"OjRLNPsfrb8",
"VR7iQITK0IO",
"OnuMUWM8yKL",
"-Bf-Rsan0Ri",
"ZoOA4W6StRe",
"Q-bcvUvlC_u",
"R-d5yLoI7pB",
"zQfRZ63oXZ",
"kFJq094YaLA",
"xM6SFYVTDxM",
"UpozLCmfmJ",
"E4eUoVApHQJ",
"BinaooEyCb",
"AuJxc0aa3zq",
"hs41LH5I7FM",
"OpD25CE3bCq",
"f72XlJKBEYg"... | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
... | [
"Inspired by recent work on score-based generative modeling, this paper proposes to solve Schrodinger bridges for generative modeling. Different from other Schrodinger bridge works, this paper connects the training to maximum likelihood, and provides a way to compute the log-likelihood of the model. The resulting m... | [
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2022_nioAdKCEdXB",
"OjRLNPsfrb8",
"VR7iQITK0IO",
"ZldHoWLv0NO",
"Q-bcvUvlC_u",
"ZoOA4W6StRe",
"xM6SFYVTDxM",
"f72XlJKBEYg",
"EFNuiY5Yfc4",
"ZldHoWLv0NO",
"iclr_2022_nioAdKCEdXB",
"BinaooEyCb",
"OpD25CE3bCq",
"iclr_2022_nioAdKCEdXB",
"AuJxc0aa3zq",
"hs41LH5I7FM",
"kFJq094YaLA",
... |
iclr_2022_9xhgmsNVHu | Is High Variance Unavoidable in RL? A Case Study in Continuous Control | Reinforcement learning (RL) experiments have notoriously high variance, and minor details can have disproportionately large effects on measured outcomes. This is problematic for creating reproducible research and also serves as an obstacle when applying RL to sensitive real-world applications. In this paper, we investigate causes for this perceived instability. To allow for an in-depth analysis, we focus on a specifically popular setup with high variance -- continuous control from pixels with an actor-critic agent. In this setting, we demonstrate that poor outlier runs which completely fail to learn are an important source of variance, but that weight initialization and initial exploration are not at fault. We show that one cause for these outliers is unstable network parametrization which leads to saturating nonlinearities. We investigate several fixes to this issue and find that simply normalizing penultimate features is surprisingly effective. For sparse tasks, we also find that partially disabling clipped double Q-learning decreases variance. By combining fixes we significantly decrease variances, lowering the average standard deviation across 21 tasks by a factor >3 for a state-of-the-art agent. This demonstrates that the perceived variance is not necessarily inherent to RL. Instead, it may be addressed via simple modifications and we argue that developing low-variance agents is an important goal for the RL community. | Accept (Poster) | The reviewers unanimously appreciated the quality of the experiments. The main point raised was about the related work by Wang et al. but that was addressed by the authors in the rebuttal. I thus encourage the authors to make sure that discussion is reflected in the final version of their work. | train | [
"5Kvz2Ovi7AZ",
"7Fp26CKw33a",
"ohYgj5YwNcu",
"y0M3LaS7GML",
"pauJ0Sk1K4P",
"fnymOVY34l-",
"PIMmiCp_SCn",
"IZ1x1rU9IIK",
"0Np4rOmJA8b",
"A6PGlzBzdFs",
"TnxyoS6xVVX",
"EIJVAUpIOvB",
"eg8x6w_-7kS",
"zKW070J_Vcc",
"R_W6AM0fzBl",
"QN_nwiBephC",
"Hs3feGqqn80",
"PkEZQhUuYDc",
"xkgHXds_8... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"au... | [
"This paper gives a systematic evaluation on different factors that may cause the high-variance of actor-critic algorithm on various robotic control problems. They found that the main source of variance is from the numerical instability. The authors then propose four training techniques to mitigate the instability,... | [
6,
10,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
4,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2022_9xhgmsNVHu",
"iclr_2022_9xhgmsNVHu",
"y0M3LaS7GML",
"pauJ0Sk1K4P",
"IZ1x1rU9IIK",
"iclr_2022_9xhgmsNVHu",
"IZ1x1rU9IIK",
"0Np4rOmJA8b",
"A6PGlzBzdFs",
"aORw-e34z3x",
"4dUxoCFq7R",
"aORw-e34z3x",
"R_W6AM0fzBl",
"iclr_2022_9xhgmsNVHu",
"QN_nwiBephC",
"Hs3feGqqn80",
"PkEZQhUu... |
iclr_2022_hfU7Ka5cfrC | Scalable One-Pass Optimisation of High-Dimensional Weight-Update Hyperparameters by Implicit Differentiation | Machine learning training methods depend plentifully and intricately on hyperparameters, motivating automated strategies for their optimisation. Many existing algorithms restart training for each new hyperparameter choice, at considerable computational cost. Some hypergradient-based one-pass methods exist, but these either cannot be applied to arbitrary optimiser hyperparameters (such as learning rates and momenta) or take several times longer to train than their base models. We extend these existing methods to develop an approximate hypergradient-based hyperparameter optimiser which is applicable to any continuous hyperparameter appearing in a differentiable model weight update, yet requires only one training episode, with no restarts. We also provide a motivating argument for convergence to the true hypergradient, and perform tractable gradient-based optimisation of independent learning rates for each model parameter. Our method performs competitively from varied random hyperparameter initialisations on several UCI datasets and Fashion-MNIST (using a one-layer MLP), Penn Treebank (using an LSTM) and CIFAR-10 (using a ResNet-18), in time only 2-3x greater than vanilla training. | Accept (Spotlight) | The paper provides a method for tuning continuous hyperparameters (HPs). It is closely related to previous work (Lorraine, 2019) that was limited to certain HPs, and in particular could not be applied to HPs controlling the learning process, such as the learning rate and momentum, which are known to be influential to convergence and overall performance (for non-convex objectives).
The reviews indicate a uniform opinion that the paper tackles an important problem, that its methods provide a non-trivial improvement over previous techniques and in particular those of (Lorrain, 2019), and that the provided experiments are extensive and convincing. The initial reviews had several concerns about technical details in the paper such as the analysis or how the meta-hyperparameters are tuned. However, in the discussions the authors provided adequate responses, resolving these concerns. I believe that with minor edits that are possible to get done by the camera-ready deadline the authors can incorporate their responses into the paper making it a welcome addition to ICLR. | train | [
"spxp20aFLfR",
"qsls1vL2Ttg",
"w8NAYzDrAuA",
"haigtSOrnne",
"ZNggwral0p4",
"JDHBSFgdgPF",
"EPAeVJikQ31",
"Ws4BQuC7dGT",
"tFdTBOo7I-O",
"fDzIpFSiVXP",
"wf582Y_HezG",
"laCfaPNBEWI",
"Fz3LEpgELPe",
"p0bH53mGHK",
"hbD75qkxze",
"Hljc-huSPoi",
"O-In7mj2ktg",
"n8-FfkLzaCI"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author"
] | [
"This paper tackles the hyperparameter optimization problem with a one-pass approach that alternatively optimizes over machine learning model parameters and hyperparameters. To optimize over a bilevel optimization problem, this paper generalizes Lorraine et al. (2019) by replacing the gradient update of parameters ... | [
8,
-1,
-1,
-1,
6,
-1,
-1,
8,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
-1,
-1,
-1,
3,
-1,
-1,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2022_hfU7Ka5cfrC",
"w8NAYzDrAuA",
"JDHBSFgdgPF",
"O-In7mj2ktg",
"iclr_2022_hfU7Ka5cfrC",
"n8-FfkLzaCI",
"laCfaPNBEWI",
"iclr_2022_hfU7Ka5cfrC",
"iclr_2022_hfU7Ka5cfrC",
"tFdTBOo7I-O",
"iclr_2022_hfU7Ka5cfrC",
"Ws4BQuC7dGT",
"tFdTBOo7I-O",
"Fz3LEpgELPe",
"p0bH53mGHK",
"O-In7mj2ktg... |
iclr_2022_J1rhANsCY9 | Learning Representation from Neural Fisher Kernel with Low-rank Approximation | In this paper, we study the representation of neural networks from the view of kernels. We first define the Neural Fisher Kernel (NFK), which is the Fisher Kernel applied to neural networks. We show that NFK can be computed for both supervised and unsupervised learning models, which can serve as a unified tool for representation extraction. Furthermore, we show that practical NFKs exhibit low-rank structures. We then propose an efficient algorithm that computes a low-rank approximation of NFK, which scales to large datasets and networks. We show that the low-rank approximation of NFKs derived from unsupervised generative models and supervised learning models gives rise to high-quality compact representations of data, achieving competitive results on a variety of machine learning tasks. | Accept (Poster) | This is a borderline paper which elicited much discussion.
The paper proposes to extract features from pre-trained networks through kernel functions. It develops the idea of Fisher kernels for neural networks, calling it the NFK. The methodology applies to both the supervised and unsupervised settings.
The paper shows that the proposed kernel has a low-rank structure, which serves as the basis for an algorithm that computes the kernel on large datasets. The idea of extending Fisher kernels, their efficient computation, and the investigation of their usage in both supervised and unsupervised settings are some of the key strengths of the paper.
The reviewers, though appreciative, suggested (1) several new experiments, (2) inclusion of more background work related to the power method, and (3) more technical discussion clarifying the contributions relative to that background.
During the rebuttal, the author(s) tried to incorporate most of the suggestions into the revised draft.
Since there was consensus on the novelty, the detailed discussions, and the results of the additional experiments, one could potentially accept this paper if there is space. The results will be interesting to those who investigate the interplay of kernel methods and deep networks. | val | [
"1cee7d4Ta7j",
"1HgIvpKvtiT",
"F_E-QH4WjBs",
"PMTPZrPMz4q",
"3hlcuPJOA4n",
"HGyjnvMIy05",
"_fYj3pLcyor",
"EXDaeTvKMT2",
"3mUI7irmL_k",
"n8NrRCMb4L9",
"zXoBku8WwrB",
"WSAUYLxVOg7",
"L8vtMt4N7NK",
"42GLACAc_O",
"G5qDBZUrsLH"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear authors,\n\nThanks for your responses to my concerns. I am satisfied with your answers.",
" Dear AC and all reviewers,\n\nThanks again for your positive feedback and constructive suggestions, which have helped us improve the quality and clarity of our paper! Since it is close to the end of the discussion... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"F_E-QH4WjBs",
"iclr_2022_J1rhANsCY9",
"L8vtMt4N7NK",
"42GLACAc_O",
"G5qDBZUrsLH",
"iclr_2022_J1rhANsCY9",
"G5qDBZUrsLH",
"G5qDBZUrsLH",
"G5qDBZUrsLH",
"G5qDBZUrsLH",
"42GLACAc_O",
"42GLACAc_O",
"iclr_2022_J1rhANsCY9",
"iclr_2022_J1rhANsCY9",
"iclr_2022_J1rhANsCY9"
] |
iclr_2022_5K7RRqZEjoS | Multiset-Equivariant Set Prediction with Approximate Implicit Differentiation | Most set prediction models in deep learning use set-equivariant operations, but they actually operate on multisets. We show that set-equivariant functions cannot represent certain functions on multisets, so we introduce the more appropriate notion of multiset-equivariance. We identify that the existing Deep Set Prediction Network (DSPN) can be multiset-equivariant without being hindered by set-equivariance and improve it with approximate implicit differentiation, allowing for better optimization while being faster and saving memory. In a range of toy experiments, we show that the perspective of multiset-equivariance is beneficial and that our changes to DSPN achieve better results in most cases. On CLEVR object property prediction, we substantially improve over the state-of-the-art Slot Attention from 8% to 77% in one of the strictest evaluation metrics because of the benefits made possible by implicit differentiation. | Accept (Poster) | The paper points out how set-equivariant functions limit the types of functions that can be represented on multisets. They develop a new notion of multiset equivariance to address this limitation. The paper improves an existing multiset-equivariant Deep Set Prediction Network through implicit differentiation, which is an area of rising interest. The reviewers and I note that the paper is well written. | test | [
"ruSNPZYKmdW", "9NXTRO4VlF", "BMD89a9RgDg", "g2S39lwoU6V", "fOHz9zuv7P6", "ihyEcRUUuhz", "HDTaMAEz-4", "uzFeQxfCXKj", "NaNzQWYHZtr", "--FwqahTNd", "uITEny8SSW", "4Y8hq3uEN7x", "u0G_oRd3KB", "8ywxITjZ3SX", "QlfUOJP_6qM" ] | [
"official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ] | [
"The paper proposes the usage of multiset equivariance as a weaker constraint for deep sets. The authors then modify DSPN to be multiset equivariant and introduce the usage of Jacobian-free implicit differentiation to speed up computation. Finally, the authors compare on two test tasks and show general improvements... | [
6, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 8 ] | [
4, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ] | [
"iclr_2022_5K7RRqZEjoS", "4Y8hq3uEN7x", "iclr_2022_5K7RRqZEjoS", "HDTaMAEz-4", "iclr_2022_5K7RRqZEjoS", "fOHz9zuv7P6", "BMD89a9RgDg", "8ywxITjZ3SX", "iclr_2022_5K7RRqZEjoS", "QlfUOJP_6qM", "8ywxITjZ3SX", "ruSNPZYKmdW", "8ywxITjZ3SX", "iclr_2022_5K7RRqZEjoS", "iclr_2022_5K7RRqZEjoS" ] |
iclr_2022_6VpeS27viTq | Learnability Lock: Authorized Learnability Control Through Adversarial Invertible Transformations | Owing much to the revolution of information technology, recent progress of deep learning benefits incredibly from the vastly enhanced access to data available in various digital formats. Yet those publicly accessible information also raises a fundamental issue concerning Intellectual Property, that is, how to precisely control legal or illegal exploitation of a dataset for training commercial models. To tackle this issue, this paper introduces and investigates a new concept called ''learnability lock'' for securing the process of data authorization. In particular, we propose adversarial invertible transformation, that can be viewed as a mapping from image to image, to encrypt data samples so that they become ''unlearnable'' by machine learning models with negligible loss of visual features. Meanwhile, authorized clients can use a specific key to unlock the learnability of the protected dataset and train models normally. The proposed learnability lock leverages class-wise perturbation that applies a universal transformation function on data samples of the same label. This ensures that the learnability can be easily restored with a simple inverse transformation while remaining difficult to be detected or reverse-engineered. We empirically demonstrate the success and practicability of our method on visual classification tasks. | Accept (Poster) | The paper considers a relevant and interesting problem of protecting the intellectual property of data. The goal of the proposed method is to prevent unauthorized usage of the data, and the protection is attained when a model trained on the perturbed dataset will predict poorly and thus cannot be considered as a realistic inference model by the unauthorized attacker.
Technically, the paper tackles the problem of "unlearnable examples": perturbing the images of a labeled dataset to obtain a perturbed dataset such that models trained on it have significantly lower performance, the perturbations are small, and one can approximately recover the original labeled dataset with the correct "secret key" (learnable parameters).
The authors propose two invertible transformations to craft adversarial perturbations: linear pixel-wise transformation and convolutional functional transformation based on invertible ResNet. Numerous experiments demonstrate the effectiveness of the proposed transformations in both securing the data (making the data unlearnable when transformation is applied) and unlocking the transformation (making the data learnable when the transformation is inverted).
The paper is well motivated and exhibits competitive results. Although there are some concerns about the similarity of the work compared with [1], we believe the additional constraint of this work, that one can approximately recover the original labeled dataset with the correct "secret key", justifies a significant contribution.
[1] "Unlearnable Examples: Making Personal Data Unexploitable" Huang et al., ICLR '21 | train | [
"PXbHqZGEb3G", "g0pEKWLHLm8", "nthUMk9YuT", "hv8uDNuWSw", "PADrOjHWXIZ", "dtz4y5eHpEp", "vlBCUsP_r0t", "nuwonUt4A1", "KbGNe431ig", "26nqwH1Ccax", "WZD-Km3KF9W", "c8Gc6gGKQQt", "C-UE2JaaHtB", "EcuqhTKYw30", "j9eNblWmWJZ", "s-p9lBYUHz3", "qpbW4iZPxV9", "CQ6Vpgzuzw0",
"gV4Odz6wx67",... | [
"official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author",
... | [
"- The paper tackles the problem of 'unlearnable examples': to perturb the images of a labeled dataset $D_c$ to obtain $D_p$ with the desiderata (a) training models on $D_p$ leads to models with significantly lower performance (b) image perturbations are constrained to some $\\epsilon$-ball and (c) with the correct... | [
6, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8 ] | [
3, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ] | [
"iclr_2022_6VpeS27viTq", "8LvjZJp_f2R", "PADrOjHWXIZ", "iclr_2022_6VpeS27viTq", "KbGNe431ig", "PXbHqZGEb3G", "26nqwH1Ccax", "EvCMYaOcr8g", "26nqwH1Ccax", "CQ6Vpgzuzw0", "c8Gc6gGKQQt", "gV4Odz6wx67", "iclr_2022_6VpeS27viTq", "C-UE2JaaHtB", "3m86ELjHm95", "PXbHqZGEb3G", "hv8uDNuWSw",
... |
iclr_2022_SsHBkfeRF9L | Neural graphical modelling in continuous-time: consistency guarantees and algorithms | The discovery of structure from time series data is a key problem in fields of study working with complex systems. Most identifiability results and learning algorithms assume the underlying dynamics to be discrete in time. Comparatively few, in contrast, explicitly define dependencies in infinitesimal intervals of time, independently of the scale of observation and of the regularity of sampling. In this paper, we consider score-based structure learning for the study of dynamical systems. We prove that for vector fields parameterized in a large class of neural networks, least squares optimization with adaptive regularization schemes consistently recovers directed graphs of local independencies in systems of stochastic differential equations. Using this insight, we propose a score-based learning algorithm based on penalized Neural Ordinary Differential Equations (modelling the mean process) that we show to be applicable to the general setting of irregularly-sampled multivariate time series and to outperform the state of the art across a range of dynamical systems. | Accept (Poster) | This main focus of this paper is graph modeling. Specifically, this paper considers a setting in which data is generated under continuous time dynamics based on neural ODE. Theoretical results regarding parameter estimation are provided. The results are also supported by experiments.
The reviewers appreciate a thorough response to their questions and think that this paper would be of interest to ICLR and ML community. Please address reviewers comments in your final version. | train | [
"vabFnzUPB4b2", "c2hlzYXs-2", "uDTqGKXH4Wq" ] | [
"official_reviewer", "official_reviewer", "official_reviewer" ] | [
"This paper introduces a brand new graphical modeling framework from the perspective of neural ODEs. Traditionally structure learning involves using sampled data to learn the structure of graphs. This paper, however, looks at the graph structure learning problem from a different viewpoint, using continuous-time dyn... | [
5, 5, 8 ] | [
4, 4, 3 ] | [
"iclr_2022_SsHBkfeRF9L", "iclr_2022_SsHBkfeRF9L", "iclr_2022_SsHBkfeRF9L" ] |
iclr_2022_bjy5Zb2fo2 | Scattering Networks on the Sphere for Scalable and Rotationally Equivariant Spherical CNNs | Convolutional neural networks (CNNs) constructed natively on the sphere have been developed recently and shown to be highly effective for the analysis of spherical data. While an efficient framework has been formulated, spherical CNNs are nevertheless highly computationally demanding; typically they cannot scale beyond spherical signals of thousands of pixels. We develop scattering networks constructed natively on the sphere that provide a powerful representational space for spherical data. Spherical scattering networks are computationally scalable and exhibit rotational equivariance, while their representational space is invariant to isometries and provides efficient and stable signal representations. By integrating scattering networks as an additional type of layer in the generalized spherical CNN framework, we show how they can be leveraged to scale spherical CNNs to the high-resolution data typical of many practical applications, with spherical signals of many tens of megapixels and beyond. | Accept (Poster) | The submission develops a rotationally equivariant scattering transform on the sphere. Many developments in deep learning make use of spherical representations, and the development of a rotationally equivariant scattering transform is an important if not unexpected development. The reviews are split, with half of the reviewers believing it to be slightly above the threshold for acceptance, and half believing it to be slightly below. In the paper's favor, it solves an important case of the scattering transform framework, which has been demonstrated to be important in diverse machine learning applications such as learning with small data sets, differentially private learning, and network initialization. 
As such, continued fundamental development in this area is valuable, especially in the context of representation learning, the focus of ICLR. | train | [
"064rJYS_GXs", "YuB4yX9YBo1", "dT1QCrxxVp4", "_XKvT_1xGFM", "_hD9L0iEs1V", "tM1FR1DvVIe", "KVyK9pB1Ygn", "p_Q8zgwFmqu", "1AuwD7s1XSg", "1G9tNNDbYby", "KrJZT5lxeH9", "7lpORedfehK", "-mC8TkBWe1U", "NsyZQ74lD2H", "LNOEy7Bc2H" ] | [
"author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ] | [
" In terms of the additional experiment and the results in Figure 3, the Reviewer's interpretation is indeed correct. The normalisation with respect to digit one is described in the main text but we agree it would be useful to also include this in the figure caption and will update the manuscript accordingly. The ... | [
-1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6 ] | [
-1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ] | [
"YuB4yX9YBo1", "p_Q8zgwFmqu", "iclr_2022_bjy5Zb2fo2", "dT1QCrxxVp4", "LNOEy7Bc2H", "LNOEy7Bc2H", "LNOEy7Bc2H", "NsyZQ74lD2H", "NsyZQ74lD2H", "dT1QCrxxVp4", "-mC8TkBWe1U", "dT1QCrxxVp4", "iclr_2022_bjy5Zb2fo2", "iclr_2022_bjy5Zb2fo2", "iclr_2022_bjy5Zb2fo2" ] |
iclr_2022_Az-7gJc6lpr | Relational Learning with Variational Bayes | In psychology, relational learning refers to the ability to recognize and respond to relationships among objects irrespective of the nature of those objects. Relational learning has long been recognized as a hallmark of human cognition and a key question in artificial intelligence research. In this work, we propose an unsupervised learning method for addressing the relational learning problem where we learn the underlying relationship between a pair of data irrespective of the nature of those data. The central idea of the proposed method is to encapsulate the relational learning problem with a probabilistic graphical model in which we perform inference to learn about data relationship and other relational processing tasks. | Accept (Poster) | This paper studies the classical problem of relational learning from a probabilistic perspective. The authors propose four reasonable constraints to encode relational properties, and develop a PGM-based variational method for learning relational properties from data. After extensive discussion with the authors, a majority of the reviewers agree the approach is interesting, if not without some flaws.
The problem studied is interesting, novel, and could lead to new developments in the area of relational learning. It is expected that the experiments have some limitations given the authors have approached the problem from a fresh new angle, which the reviewers have appreciated.
Please pay attention to the suggestions from the reviewers, and in particular, please add a more detailed discussion with statistical relational learning: This material may not be familiar to the broader ML audience, and therefore it is essential to make these comparisons explicit. | train | [
"-zqst9UTThJ", "r4-MG-ctqGc", "VFJWN4GEEXf", "vCOYyWrAlz", "VuEUTXoBiUq", "buKVzehAWR0", "ImlrSj-D-zg", "FhS0geKnvYc", "g-P3blWg1-p", "R3Sb9Zh6PmZ", "J-9ddgRw59c", "HdRVIgUTth", "wFHJ1UPeWqz", "aic8tnPO2AA", "kQJOvf4rnO1", "snpEhyyXPf7", "oQ0mT3vOOZl", "SVbYbRWX1ij",
"YJy79wB__fT... | [
"author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ] | [
" Dear Reviewer Znwn:\n\nWe are approaching the deadline of the final stage of discussion. At this time, we wanted to summarize our response to your outstanding concerns (please see our response below for a detailed discussion):\n\n1. While we agree it is nice to include real-world examples, they are not central to... | [
-1, -1, 6, 6, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, 6, 5 ] | [
-1, -1, 3, 3, -1, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, 3, 4 ] | [
"YJy79wB__fT", "VuEUTXoBiUq", "iclr_2022_Az-7gJc6lpr", "iclr_2022_Az-7gJc6lpr", "ImlrSj-D-zg", "iclr_2022_Az-7gJc6lpr", "YJy79wB__fT", "SVbYbRWX1ij", "vCOYyWrAlz", "J-9ddgRw59c", "wFHJ1UPeWqz", "iclr_2022_Az-7gJc6lpr", "aic8tnPO2AA", "HdRVIgUTth", "VFJWN4GEEXf", "HdRVIgUTth",
"HdRVIg... |
iclr_2022_0RDcd5Axok | Towards a Unified View of Parameter-Efficient Transfer Learning | Fine-tuning large pretrained language models on downstream tasks has become the de-facto learning paradigm in NLP. However, conventional approaches fine-tune all the parameters of the pretrained model, which becomes prohibitive as the model size and the number of tasks grow. Recent work has proposed a variety of parameter-efficient transfer learning methods that only fine-tune a small number of (extra) parameters to attain strong performance. While effective, the critical ingredients for success and the connections among the various methods are poorly understood. In this paper, we break down the design of state-of-the-art parameter-efficient transfer learning methods and present a unified framework that establishes connections between them. Specifically, we re-frame them as modifications to specific hidden states in pretrained models, and define a set of design dimensions along which different methods vary, such as the function to compute the modification and the position to apply the modification. Through comprehensive empirical studies across machine translation, text summarization, language understanding, and text classification benchmarks, we utilize the unified view to identify important design choices in previous methods. Furthermore, our unified framework enables the transfer of design elements across different approaches, and as a result we are able to instantiate new parameter-efficient fine-tuning methods that tune less parameters than previous methods while being more effective, achieving comparable results to fine-tuning all parameters on all four tasks. | Accept (Spotlight) | The paper reviews and draws connections between several parameter-efficient fine-tuning methods.
All reviewers found the paper addresses an important research problem, and the theoretical justification and empirical analyses are convincing. | train | [
"-ZIWcdyn-tP", "9syxfYwIBa", "qge8pmaXH54", "oH79aBFHsUW", "oxVY7ETXmbJ", "PBPjBdrsEve", "XbEUCnnma0e" ] | [
"official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ] | [
"This paper formulates different approaches for parameter-efficient transfer learning (such as adapters, prefix-tuning, LoRA) under a common framework. Approaches vary alongside different design dimensions: functional form, insertion form (sequential or parallel), which representation is directly modified by the ap... | [
8, -1, -1, -1, -1, 10, 8 ] | [
4, -1, -1, -1, -1, 5, 3 ] | [
"iclr_2022_0RDcd5Axok", "XbEUCnnma0e", "oH79aBFHsUW", "-ZIWcdyn-tP", "PBPjBdrsEve", "iclr_2022_0RDcd5Axok", "iclr_2022_0RDcd5Axok" ] |
iclr_2022_1L0C5ROtFp | Filtered-CoPhy: Unsupervised Learning of Counterfactual Physics in Pixel Space | Learning causal relationships in high-dimensional data (images, videos) is a hard task, as they are often defined on low dimensional manifolds and must be extracted from complex signals dominated by appearance, lighting, textures and also spurious correlations in the data. We present a method for learning counterfactual reasoning of physical processes in pixel space, which requires the prediction of the impact of interventions on initial conditions. Going beyond the identification of structural relationships, we deal with the challenging problem of forecasting raw video over long horizons. Our method does not require the knowledge or supervision of any ground truth positions or other object or scene properties. Our model learns and acts on a suitable hybrid latent representation based on a combination of dense features, sets of 2D keypoints and an additional latent vector per keypoint. We show that this better captures the dynamics of physical processes than purely dense or sparse representations. We introduce a new challenging and carefully designed counterfactual benchmark for predictions in pixel space and outperform strong baselines in physics-inspired ML and video prediction. | Accept (Oral) | This paper introduces the Filtered-CoPhy method, an approach for learning counterfactual reasoning of physical processes in pixel space. The approach enables forecasting raw videos over long horizons, without requiring strong supervision, e.g. object positions or scene properties.
The paper initially received one strong accept, one weak accept, and one weak reject recommendations. The main reviewers' concerns relate to clarifications and consolidations in experiments, including stronger baselines, experiments on real data, or more diversity on the datasets. The rebuttal did a good job in answering reviewers' concerns, especially by providing new experimental results and analysis. Eventually, all reviewers recommended a clear acceptance after authors' feedback.
The AC's own readings confirmed the reviewers' recommendations. The proposed approach is a meaningful extension of CoPhy for the unsupervised prediction at the pixel level. The proposed approach is solid, clearly described, and overcomes important limitations of previous methods. The dataset is also an important outcome for the community. Causality and counterfactual reasoning are of primary importance for the design of effective and explainable AI prediction models: this paper brings therefore an important contribution to the ICLR community. | train | [
"49tiSIH2_0", "mvSF6FI45ix", "jwE8leqtIvi", "7BDp2k5p6Yl", "YQyFDXoDeYB", "4eecg4axMcn", "SvYcqI-KwrT", "pDOi2YcQp65", "o-qBXmNGtfi", "PDPrb6lQjgg", "VaaHWk4TB3f", "eMj2WFELiJ6", "Gu6sqRND4A" ] | [
"official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ] | [
" Sincerely thank you for the quantitative comparisons and detailed explanations, and the visual examples are quite impressive.\n\nI increased my rating by 3. ",
"This paper studies an interesting problem of counterfactual video prediction, which aims to predict the future frame (D) based on the initial frame (C)... | [
-1, 8, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, 10 ] | [
-1, 5, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 3 ] | [
"4eecg4axMcn", "iclr_2022_1L0C5ROtFp", "YQyFDXoDeYB", "iclr_2022_1L0C5ROtFp", "SvYcqI-KwrT", "pDOi2YcQp65", "PDPrb6lQjgg", "VaaHWk4TB3f", "7BDp2k5p6Yl", "Gu6sqRND4A", "mvSF6FI45ix", "iclr_2022_1L0C5ROtFp", "iclr_2022_1L0C5ROtFp" ] |
iclr_2022_nnU3IUMJmN | Capturing Structural Locality in Non-parametric Language Models | Structural locality is a ubiquitous feature of real-world datasets, wherein data points are organized into local hierarchies. Some examples include topical clusters in text or project hierarchies in source code repositories. In this paper, we explore utilizing this structural locality within non-parametric language models, which generate sequences that reference retrieved examples from an external source. We propose a simple yet effective approach for adding locality information into such models by adding learned parameters that improve the likelihood of retrieving examples from local neighborhoods. Experiments on two different domains, Java source code and Wikipedia text, demonstrate that locality features improve model efficacy over models without access to these features, with interesting differences. We also perform an analysis of how and where locality features contribute to improving performance and why the traditionally used contextual similarity metrics alone are not enough to grasp the locality structure.
| Accept (Poster) | Reviewers were in agreement but borderline. The paper has a nice hypothesis and develops the work using two realistic datasets, Wikipedia and Code. One reviewer was initially more negative but changed their views based on the authors improvements to the paper.
The idea is fairly simple, but it does require that modellers come up with the structural features. There was discussion that more downstream tasks are needed to highlight the approach. Moreover, more datasets should be experimented with. In all, the experiments are good, though they could easily be extended. | train | [
"fv9Uc971t-E", "4NHnGWbMP8i", "elHCxTGcmkj", "mUXNbudtnuu", "Hbl1CGnaSLa", "bPUShdlKUgX", "NXR07GUXqgY", "yUr0qOQItrQ", "u0XTMa9Dm1R", "m13zW0bFWqb", "zJJjcZOdcp7", "upf1SFbadHp", "GwW8Y6sf3wk", "qiVMLBDt0u", "grUNvHglxk", "6j6ua52GFyI", "Bego_e2eMTt", "DRBfjS5fO3G",
"rB3IxbB_5rz... | [
"author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer",
"official_re... | [
" I thank you for the prompt reply and considerations regarding our revision and responses. I am glad that the conversation clears things up and the revisions improved the paper. I am also appreciative of your other comments and will seriously consider them as future work.\n\n",
" I thank you for the prompt reply... | [
-1, -1, -1, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 8 ] | [
-1, -1, -1, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, -1, -1, -1, -1, -1, -1, 3 ] | [
"elHCxTGcmkj", "bPUShdlKUgX", "bPUShdlKUgX", "iclr_2022_nnU3IUMJmN", "iclr_2022_nnU3IUMJmN", "iclr_2022_nnU3IUMJmN", "grUNvHglxk", "qiVMLBDt0u", "6j6ua52GFyI", "jUVlres-xqq", "upf1SFbadHp", "GwW8Y6sf3wk", "qiVMLBDt0u", "iSWErmDLm38", "6j6ua52GFyI", "WfksNx5neXE", "rB3IxbB_5rz",
"ic... |
iclr_2022_-AOEi-5VTU8 | Fast Differentiable Matrix Square Root | Computing the matrix square root or its inverse in a differentiable manner is important in a variety of computer vision tasks. Previous methods either adopt the Singular Value Decomposition (SVD) to explicitly factorize the matrix or use the Newton-Schulz iteration (NS iteration) to derive the approximate solution. However, both methods are not computationally efficient enough in either the forward pass or in the backward pass. In this paper, we propose two more efficient variants to compute the differentiable matrix square root. For the forward propagation, one method is to use Matrix Taylor Polynomial (MTP), and the other method is to use Matrix Pad\'e Approximants (MPA). The backward gradient is computed by iteratively solving the continuous-time Lyapunov equation using the matrix sign function. Both methods yield considerable speed-up compared with the SVD or the Newton-Schulz iteration. Experimental results on the de-correlated batch normalization and second-order vision transformer demonstrate that our methods can also achieve competitive and even slightly better performances. The code is available at \href{https://github.com/KingJamesSong/FastDifferentiableMatSqrt}{https://github.com/KingJamesSong/FastDifferentiableMatSqrt}. | Accept (Poster) | The paper presents some efficiency improvements over existing methods to compute matrix square root and its gradient. Reviewers find that the novelty over existing methods is sufficient, and that the improvements are valuable.
I propose a poster despite the relatively high numerical scores, because the group of practitioners who will use the result is somewhat niche -- the reviewers are of course selected from this group and hence value the paper more highly.
In addition the real-world speedups are modest, but it is nevertheless important to document this approach. | train | [
"f6Lb0UMDd2x", "fD1J4aBRCKq", "PvnhE9dTe4n", "RacoTues3Ve", "_8T2_jDxhA_", "6Nh8_LRL63", "btgdKVUU7f", "lKwUPst9nKo", "_MMyvmMPVm", "3T4byCZdHIB", "7B0wDPE2_9u", "FoPVpy3pCtb", "NGTNRh1U6d_", "-4Lr_2IWBF2", "f3ZHlHN83Me", "nSYUq_Hu83" ] | [
"official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ] | [
"The work proposes a fast method to solve the matrix square root. The proposed approach is differentiable and is faster than SVD and NS iteration. Thus, the proposed method is very suitable for optimizing deep neural networks that involves matrix square root computation. Its forward pass uses matrix taylor polynomi... | [
8, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8 ] | [
4, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5 ] | [
"iclr_2022_-AOEi-5VTU8", "nSYUq_Hu83", "RacoTues3Ve", "_8T2_jDxhA_", "iclr_2022_-AOEi-5VTU8", "7B0wDPE2_9u", "_MMyvmMPVm", "iclr_2022_-AOEi-5VTU8", "3T4byCZdHIB", "NGTNRh1U6d_", "_8T2_jDxhA_", "nSYUq_Hu83", "_8T2_jDxhA_", "nSYUq_Hu83", "f6Lb0UMDd2x", "iclr_2022_-AOEi-5VTU8" ] |
iclr_2022_B7ZbqNLDn-_ | Recycling Model Updates in Federated Learning: Are Gradient Subspaces Low-Rank? | In this paper, we question the rationale behind propagating large numbers of parameters through a distributed system during federated learning. We start by examining the rank characteristics of the subspace spanned by gradients (i.e., the gradient-space) in centralized model training, and observe that the gradient-space often consists of a few leading principal components accounting for an overwhelming majority (95-99%) of the explained variance. Motivated by this, we propose the "Look-back Gradient Multiplier" (LBGM) algorithm, which utilizes this low-rank property of the gradient-space in federated learning. Operationally, LBGM recycles the gradients between model update rounds to significantly reduce the number of parameters to be propagated through the system. We analytically characterize the convergence behavior of LBGM, revealing the nature of the trade-off between communication savings and model performance. Our subsequent experimental results demonstrate the improvement LBGM obtains on communication overhead compared to federated learning baselines. Additionally, we show that LBGM is a general plug-and-play algorithm that can be used standalone or stacked on top of existing sparsification techniques for distributed model training. | Accept (Poster) | The paper shows that most variance of gradients used in FL and distributed learning in general is in very low rank subspaces, an observation also made in Konecny et al 2016 and some other related works in deep learning, though sometimes for a different purpose.
The paper then proposes lightweight updates combining a fresh gradient with old updates. Experiments and a theoretical convergence guarantee complement the results, which are mostly convincing.
The experiments compare against ATOMO but strangely not against the more common PowerSGD, which would also work with partial client participation.
Overall, reviewers all agreed that the paper is interesting, well-motivated and deserves acceptance.
We hope the authors will incorporate the open points as mentioned by the reviewers. | train | [
"fG5vCDVXVif", "AzvHtG7S8E", "Ke-c4s9mTt7", "PSwKV9vkPqV", "Iuz3Xv8gjKU", "YdM0oCjUFqU", "it-5hml6P6g", "PC8z393ifsM", "WqYkGjPe8yJ", "OWJtqovXYl3", "B6tEXrBI8f7", "XTcaoTbePd", "jxKIfXzyj3m", "98qkC_XRBte", "c-hHNkuNjdA", "f7QSBn7CiHa", "TgiGZlSwHxs", "j72dlVq30Em",
"nm-66bgy6DM... | [
"official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer",
"official_... | [
"The paper studies the gradient subspace and finds it low-rank propery.\nThis observation motivates them to propose a new algorithm that reuses similar past gradients to save communication.\nThey provide a theoretical analysis (with some mistakes) and conduct experiments to validate their method. From my perspectiv... | [
8, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 8 ] | [
3, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ] | [
"iclr_2022_B7ZbqNLDn-_", "PC8z393ifsM", "it-5hml6P6g", "YdM0oCjUFqU", "iclr_2022_B7ZbqNLDn-_", "XTcaoTbePd", "AzvHtG7S8E", "j72dlVq30Em", "OWJtqovXYl3", "LI1GvFAXbMs", "fG5vCDVXVif", "Iuz3Xv8gjKU", "Iuz3Xv8gjKU", "qvzW5A8R0Fl", "qvzW5A8R0Fl", "qvzW5A8R0Fl", "iclr_2022_B7ZbqNLDn-_",
... |
iclr_2022_mhYUBYNoGz | Machine Learning For Elliptic PDEs: Fast Rate Generalization Bound, Neural Scaling Law and Minimax Optimality | In this paper, we study the statistical limits of deep learning techniques for solving elliptic partial differential equations (PDEs) from random samples using the Deep Ritz Method (DRM) and Physics-Informed Neural Networks (PINNs). To simplify the problem, we focus on a prototype elliptic PDE: the Schr\"odinger equation on a hypercube with zero Dirichlet boundary condition, which has wide application in the quantum-mechanical systems. We establish upper and lower bounds for both methods, which improves upon concurrently developed upper bounds for this problem via a fast rate generalization bound. We discover that the current Deep Ritz Methods is sub-optimal and propose a modified version of it. We also prove that PINN and the modified version of DRM can achieve minimax optimal bounds over Sobolev spaces. Empirically, following recent work which has shown that the deep model accuracy will improve with growing training sets according to a power law, we supply computational experiments to show a similar behavior of dimension dependent power law for deep PDE solvers. | Accept (Poster) | This paper proposes a new theory for modified DRM and PINN for solving elliptical PDEs, and delivers valuable advances on important topics. | train | [
"8HmQ6jmI6y", "QU8-dsOxpQX", "cVmt2Kux10w", "_QGjV_M_9gi", "5oHSzu5ZGWb", "jSQmEfvTon2", "_0KBw2-Teqa", "DxL0j3pHgg9", "mVWq8dgEQF" ] | [
"official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ] | [
" Thanks for the response. I see, these are interesting points.",
"This paper studies the the statistical error of the Deep Ritz Method and Physics-Informed Neural Networks using neural networks and truncated Fourier basis in solving PDEs. The static Schrodinger equation is used as a prototype PDE. With appropri... | [
-1,
6,
8,
-1,
-1,
-1,
-1,
6,
8
] | [
-1,
3,
4,
-1,
-1,
-1,
-1,
3,
3
] | [
"jSQmEfvTon2",
"iclr_2022_mhYUBYNoGz",
"iclr_2022_mhYUBYNoGz",
"cVmt2Kux10w",
"QU8-dsOxpQX",
"mVWq8dgEQF",
"DxL0j3pHgg9",
"iclr_2022_mhYUBYNoGz",
"iclr_2022_mhYUBYNoGz"
] |
iclr_2022_q79uMSC6ZBT | Learning to Complete Code with Sketches | Code completion is usually cast as a language modelling problem, i.e., continuing an input in a left-to-right fashion. However, in practice, some parts of the completion (e.g., string literals) may be very hard to predict, whereas subsequent parts directly follow from the context.
To handle this, we instead consider the scenario of generating code completions with "holes" inserted in places where a model is uncertain. We develop Grammformer, a Transformer-based model that guides the code generation by the programming language grammar, and compare it to a variety of more standard sequence models.
We train the models on code completion for C# and Python given partial code context. To evaluate models, we consider both ROUGE as well as a new metric RegexAcc that measures success of generating completions matching long outputs with as few holes as possible.
In our experiments, Grammformer generates 10-50% more accurate completions compared to traditional generative models and 37-50% longer sketches compared to sketch-generating baselines trained with similar techniques. | Accept (Poster) | The paper proposes a transformer model of code that leaves "holes" at points of generation at which the model is uncertain. The model is evaluated on C# and Python programs and outperforms existing techniques.
The reviewers found the Grammformer model and the RegexAcc evaluation metric to be useful and interesting. The experimental results are also compelling. Given this, I recommend acceptance. Please make sure to incorporate the feedback in the reviews and the additional experimental results into the final version. | train | [
"ujY0ZKXDnXx",
"i0bUuzWW_Oi",
"UWfGI2aVEXA",
"IlLisasIZb",
"hwMYie65bz",
"PeIkTlTcgYe",
"JzQ6kQQjF-_",
"MhoTisT6CD",
"shjO2yD-kro",
"GgHel11VrZ_"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" [Part 2 of our reply]\n\n\n> Is Grammformer with no “stop expansion” conceptually equivalent to AnyCodeGen by Alon et. al. (ICML 2020)?\n\nNot quite. There are two substantial differences:\n 1. AnyCodeGen uses (code) paths (as in code2vec and code2seq) to embed (partial) programs, whereas Grammformer uses a Trans... | [
-1,
-1,
-1,
-1,
-1,
-1,
8,
5,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
5
] | [
"GgHel11VrZ_",
"GgHel11VrZ_",
"shjO2yD-kro",
"MhoTisT6CD",
"JzQ6kQQjF-_",
"iclr_2022_q79uMSC6ZBT",
"iclr_2022_q79uMSC6ZBT",
"iclr_2022_q79uMSC6ZBT",
"iclr_2022_q79uMSC6ZBT",
"iclr_2022_q79uMSC6ZBT"
] |
iclr_2022_wogsFPHwftY | Learning Super-Features for Image Retrieval | Methods that combine local and global features have recently shown excellent performance on multiple challenging deep image retrieval benchmarks, but their use of local features raises at least two issues. First, these local features simply boil down to the localized map activations of a neural network, and hence can be extremely redundant. Second, they are typically trained with a global loss that only acts on top of an aggregation of local features; by contrast, testing is based on local feature matching, which creates a discrepancy between training and testing. In this paper, we propose a novel architecture for deep image retrieval, based solely on mid-level features that we call Super-features. These Super-features are constructed by an iterative attention module and constitute an ordered set in which each element focuses on a localized and discriminant image pattern. For training, they require only image labels. A contrastive loss operates directly at the level of Super-features and focuses on those that match across images. A second complementary loss encourages diversity. Experiments on common landmark retrieval benchmarks validate that Super-features substantially outperform state-of-the-art methods when using the same number of features, and only require a significantly smaller memory footprint to match their performance. Code and models are available at: https://github.com/naver/FIRe. | Accept (Poster) | The paper proposes a model for large-scale image retrieval. Unlike previous work that relies on local features, the proposed method aggregates local features into the so-called Super-features to improve their discriminability and expressiveness. To do so, the method proposes an iterative attention module (Local Feature Integration Transformer, LIT), that outputs an ordered set of such features. 
By exploiting the fact that features are ordered, the paper proposes a contrastive loss on Super-features that match across images. The paper presents a thorough empirical evaluation on several publicly available datasets including relevant baselines.
Overall the paper is well written and the empirical results are strong (including detailed ablations that motivate the design of the method). All reviewers and the AC appreciate the idea of applying the contrastive training at local feature level while only requiring image-level labels.
Reviewer hp4Y points out that the proposed LIT is not particularly novel, but previous work are properly cited. Also this is not a major issue given that the motivation is very clear, it is well executed and the empirical results are strong.
Reviewer uoYN had initial concerns regarding inconsistencies in the mathematical formulation of the method, which were resolved in a detail (and constructive) discussion with the authors.
All reviewers recommend accepting the paper, three of which consider the contribution to be strong. The AC agrees with this assessment and recommends accepting the paper. | test | [
"S3Z9A_NNhD-",
"3qlo2uDO67",
"isqo18MBb-",
"JSx_SXHgXaH",
"B4KLs4ZliKh",
"H2JysAP7DBr",
"R164osWgugm",
"CPe5trtmhVv",
"hPRCXNnLeX",
"uBZ3v8cx4_D",
"SE9yibbWrA9",
"ik5mqPTiiez",
"S7FYm_Uapk",
"_PCM5gzC2qF",
"oswQrYColew",
"SfARGadtCqC",
"EQifl_pDom"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper introduces the concept of super features into image retrieval. The whole framework is named FIRe. The idea is to aggregate a set of features on the feature maps at different scales according to a set of query vectors representing different latent concepts. Hence, the proposed model somehow implements a ... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8,
8
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"iclr_2022_wogsFPHwftY",
"uBZ3v8cx4_D",
"B4KLs4ZliKh",
"R164osWgugm",
"hPRCXNnLeX",
"_PCM5gzC2qF",
"CPe5trtmhVv",
"ik5mqPTiiez",
"_PCM5gzC2qF",
"S7FYm_Uapk",
"EQifl_pDom",
"SfARGadtCqC",
"S3Z9A_NNhD-",
"oswQrYColew",
"iclr_2022_wogsFPHwftY",
"iclr_2022_wogsFPHwftY",
"iclr_2022_wogsFP... |
iclr_2022_Vs5NK44aP9P | Encoding Weights of Irregular Sparsity for Fixed-to-Fixed Model Compression | Even though fine-grained pruning techniques achieve a high compression ratio, conventional sparsity representations (such as CSR) associated with irregular sparsity degrade parallelism significantly. Practical pruning methods, thus, usually lower pruning rates (by structured pruning) to improve parallelism. In this paper, we study fixed-to-fixed (lossless) encoding architecture/algorithm to support fine-grained pruning methods such that sparse neural networks can be stored in a highly regular structure. We first estimate the maximum compression ratio of encoding-based compression using entropy. Then, as an effort to push the compression ratio to the theoretical maximum (by entropy), we propose a sequential fixed-to-fixed encoding scheme. We demonstrate that our proposed compression scheme achieves almost the maximum compression ratio for the Transformer and ResNet-50 pruned by various fine-grained pruning methods. | Accept (Poster) | ### Summary
The key idea behind this approach is a new technique to map irregular sparsity to a regular, compressed pattern. The results can, in principle, therefore overcome several standard limitations with irregular data storage formats. The results improve over existing (though related) techniques.
### Discussion
#### Strengths
- An interesting and timely topic to study
- Results show non-compute improvements
#### Weakness
The primary weakness noted among the reviewers was the lack of study on actual decoding performance. As I note below, this is a serious oversight that given the already existing theoretical work in the area warrants study as the community should begin to turn towards mapping that theory to practice.
### Recommendation
I recommend Accept (poster). This is a strong piece of theoretical work. However, I would like to note that while I believe this work meets the current evaluation standards set in the area, it is time for follow on work to take the additional step to validate the practicality of the approach through a performance evaluation (either in simulation or FPGA/ASIC work). | train | [
"aIS0lmPSX8S",
"MFqfN95p7Cg",
"LOL0lyA_QS-",
"7AsieLVyTHG",
"2sld8d6pJFs",
"F8HVL9PpG38",
"rV58sRYzEcx",
"AklpUNIPw",
"TIgx4jap-TV",
"ubYobDb5JeE",
"ptdMV4HiGSn",
"Bc0baK7ZxBv",
"JnFf_usx4KE",
"_kUa86l3hx",
"A1Yyzxq_0-5",
"rjVTTXORQlb",
"QAWouE5jcz",
"yB7KeHwrqk5"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This work proposes a sequential fixed-to-fixed encoding scheme for sparse neural network weight encoding/decoding. The core component of the algorithm is shifted registers that expand the decoding window and extra code for recording unmatched bits. This work proposes a sequential fixed-to-fixed encoding scheme fo... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
5
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"iclr_2022_Vs5NK44aP9P",
"LOL0lyA_QS-",
"7AsieLVyTHG",
"JnFf_usx4KE",
"F8HVL9PpG38",
"rV58sRYzEcx",
"AklpUNIPw",
"TIgx4jap-TV",
"ubYobDb5JeE",
"iclr_2022_Vs5NK44aP9P",
"aIS0lmPSX8S",
"QAWouE5jcz",
"yB7KeHwrqk5",
"rjVTTXORQlb",
"iclr_2022_Vs5NK44aP9P",
"iclr_2022_Vs5NK44aP9P",
"iclr_2... |
iclr_2022_hGXij5rfiHw | Discovering Invariant Rationales for Graph Neural Networks | Intrinsic interpretability of graph neural networks (GNNs) is to find a small subset of the input graph's features --- rationale --- which guides the model prediction. Unfortunately, the leading rationalization models often rely on data biases, especially shortcut features, to compose rationales and make predictions without probing the critical and causal patterns. Moreover, such data biases easily change outside the training distribution. As a result, these models suffer from a huge drop in interpretability and predictive performance on out-of-distribution data. In this work, we propose a new strategy of discovering invariant rationale (DIR) to construct intrinsically interpretable GNNs. It conducts interventions on the training distribution to create multiple interventional distributions. Then it approaches the causal rationales that are invariant across different distributions while filtering out the spurious patterns that are unstable. Experiments on both synthetic and real-world datasets validate the superiority of our DIR in terms of interpretability and generalization ability on graph classification over the leading baselines. Code and datasets are available at https://github.com/Wuyxin/DIR-GNN. | Accept (Poster) | The work proposes a method to learn graph representations based on subgraphs that are invariant to spurious subgraphs. The reviewers found the paper easy to read and the theory interesting, well explained and justified. The reviewers seem happy with the existing and new experiments that came during the rebuttal phase. I too found the paper interesting and mostly well-written.
Besides the corrections done during the rebuttal, in further discussion with the authors, I raised a concern that the work must make additional assumptions about the support of the induced subgraph distributions that were not clearly stated in the paper: The work makes the assumption that there is enough training data such that all spurious induced subgraph patterns $S$ that are smaller than the truly correlated induced subgraph $C$ can be identified as spurious. The authors promised to make this into a clearly demarcated assumption since it a key requirement for the method to work. | train | [
"u-5MPyeb8UV",
"4Kn37TwxLCE",
"8znCbRieb3",
"cGPi8KK3Ngo",
"rFboZ-SX8Gj",
"t-bBJMCKHO3",
"COdOeKKCgwI",
"T9D1-cwGT4A",
"H2gPdMMP8-4",
"uodrDBrwMpB",
"jPG9UHg525F",
"L7p8ux5mu26",
"z601u6_2KAa",
"D78HXtO71HV",
"lY-c1uzPKCu"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" We again appreciate all reviewers’ approval and replies. We further update our paper by adding an **Open Discussions** section in Appendix G, which presents some promising future directions to further enhance our DIR framework. We sincerely hope this can make our work more sound and offer more inspiration for the... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2022_hGXij5rfiHw",
"t-bBJMCKHO3",
"iclr_2022_hGXij5rfiHw",
"4Kn37TwxLCE",
"H2gPdMMP8-4",
"jPG9UHg525F",
"uodrDBrwMpB",
"iclr_2022_hGXij5rfiHw",
"L7p8ux5mu26",
"8znCbRieb3",
"COdOeKKCgwI",
"T9D1-cwGT4A",
"lY-c1uzPKCu",
"iclr_2022_hGXij5rfiHw",
"iclr_2022_hGXij5rfiHw"
] |
iclr_2022_fQTlgI2qZqE | Fast Generic Interaction Detection for Model Interpretability and Compression | The ability of discovering feature interactions in a black-box model is vital to explainable deep learning. We propose a principled, global interaction detection method by casting our target as a multi-arm bandits problem and solving it swiftly with the UCB algorithm. This adaptive method is free of ad-hoc assumptions and among the cutting-edge methods with outstanding detection accuracy and stability. Based on the detection outcome, a lightweight and interpretable deep learning model (called ParaACE) is further built using the alternating conditional expectation (ACE) method. Our proposed ParaACE improves the prediction performance by 26 % and reduces the model size by 100+ times as compared to its Teacher model over various datasets. Furthermore, we show the great potential of our method for scientific discovery through interpreting various real datasets in the economics and smart medicine sectors. The code is available at https://github.com/zhangtj1996/ParaACE. | Accept (Poster) | This paper tackles the problem of feature interactions identification in black-box models, which is an important problem towards achieving explainable AI/ML. The authors formulate the problem under the multi-armed bandit setting and propose a solution based on the UCB algorithm. This simplification of the problem leads to a computationally feasible solution, for which the authors provide several theoretical analyses. The importance of the learned interactions is showcased in a new deep learning model leveraging these interactions, leading to a reduction in model size (thereby competing against pruning methods) as well as an improvement in accuracy (thereby competing against generalization methods). Although the proposed approach essentially builds on the specific UCB algorithm, it could likely be extended/modified to other (potentially more efficient) bandit strategies. 
A drawback of this work resides in the experiments being entirely synthetic. In order to close the gap with practice, experiments on real datasets of higher dimensionality should be conducted. | train | [
"ZdgriG_hVBG",
"0DGiR9nBH4",
"lhPCpkOQFL1",
"DcP2GSLta_b",
"T-J7u1eahJk",
"6BcZukgqG1Z",
"127anzySvg",
"9n_sNsjeO2",
"PhXKWBlUCXm",
"4d2jmSZMwd",
"vf0W-0SJw_I",
"BqaP3d9EbGC"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes to detect pairwise interactions using a more efficient evaluation of Hessian values via sampling based on the multi-armed bandit approach. The desire to consider the Hessian (and second derivatives in general) as the interaction strength is natural and has been discussed in the literature. \nThe autho... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"iclr_2022_fQTlgI2qZqE",
"127anzySvg",
"T-J7u1eahJk",
"ZdgriG_hVBG",
"PhXKWBlUCXm",
"BqaP3d9EbGC",
"ZdgriG_hVBG",
"vf0W-0SJw_I",
"4d2jmSZMwd",
"iclr_2022_fQTlgI2qZqE",
"iclr_2022_fQTlgI2qZqE",
"iclr_2022_fQTlgI2qZqE"
] |
iclr_2022_DIjCrlsu6Z | Controlling Directions Orthogonal to a Classifier | We propose to identify directions invariant to a given classifier so that these directions can be controlled in tasks such as style transfer. While orthogonal decomposition is directly identifiable when the given classifier is linear, we formally define a notion of orthogonality in the non-linear case. We also provide a surprisingly simple method for constructing the orthogonal classifier (a classifier utilizing directions other than those of the given classifier). Empirically, we present three use cases where controlling orthogonal variation is important: style transfer, domain adaptation, and fairness. The orthogonal classifier enables desired style transfer when domains vary in multiple aspects, improves domain adaptation with label shifts and mitigates the unfairness as a predictor. The code is available at https://github.com/Newbeeer/orthogonal_classifier | Accept (Spotlight) | This paper introduces the concept of classifier orthogonalization. This is a generalization of orthogonality of linear classifiers (linear classifiers with orthogonal weights) to the non-linear setting. It introduces the notion of a full and principal classifier, where the full classifier is one that minimizes the empirical risk, and the principal classifier is one that uses only partial information. The orthogonalization procedure assumes that the input domain, X can be divided into two sets of latent random variables Z1 and Z2 via a bijective mapping. The random variables Z1 are the principal random variables, and Z2 contains all other information. Z1 and Z2 are assumed to be conditionally independent given the target label. The paper outlines two approaches to construct orthogonal classifiers that operate only on Z2. The approach is highlighted in three applications: controlled style transfer, domain adaptation, and fair classification.
The reviewers all found the proposed method to be principled and compelling. Beyond clarification questions and some discussion on related work, the reviewers raised a few issues that were subsequently addressed: 1) Additional baselines for domain adaptation and fairness. 2) Controlled style transfer being a new task with no established baselines, and 3) The feasibility of training a proper “full classifier” that minimizes the empirical risk, and its necessity in the approach. The authors addressed these concerns and updated the paper, to the satisfaction of the reviewers. All of them unanimously recommend acceptance. | train | [
"aqjM-PguMS-",
"h_0FoiBeLDM",
"iqQMDHv-bDS",
"bLx_p0icEj3",
"truzeRIs8aa",
"SZq5ErMgOqw",
"WsHc4twlnwY",
"Aqp8tAbKFNA",
"wyGgVsqGb--",
"hrcbAKgp01C",
"7U7gWjtDSK",
"EGWrzJc5km_",
"CiNgKo92u7h"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response and clarification on experimental settings. I believe the paper is good for publication.",
" Thank you to the authors for the detailed response, I'm satisfied with the response.",
"- The paper introduces the notion of \"orthogonal classifiers\": classifiers that rely on orthogonal ... | [
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"7U7gWjtDSK",
"hrcbAKgp01C",
"iclr_2022_DIjCrlsu6Z",
"truzeRIs8aa",
"SZq5ErMgOqw",
"WsHc4twlnwY",
"wyGgVsqGb--",
"iclr_2022_DIjCrlsu6Z",
"iqQMDHv-bDS",
"CiNgKo92u7h",
"EGWrzJc5km_",
"iclr_2022_DIjCrlsu6Z",
"iclr_2022_DIjCrlsu6Z"
] |
iclr_2022_sXNVFBc-0aP | Public Data-Assisted Mirror Descent for Private Model Training | In this paper, we revisit the problem of effectively using public data to improve the privacy/utility trade-offs for differentially private (DP) model training. Here, public data refers to auxiliary data sets that have no privacy concerns. We consider public training data sets that are from the *same distribution* as the private training data set.
For convex losses, we show that a variant of Mirror Descent provides population risk guarantees which are independent of the dimension of the model ($p$). Specifically, we apply Mirror Descent with the loss generated by the public data as the *mirror map*, and using DP gradients of the loss generated by the private (sensitive) data. To obtain dimension independence, we require $G_Q^2 \leq p$ public data samples, where $G_Q$ is the Gaussian width of the smallest convex set $Q$ such that the public loss functions are 1-strongly convex with respect to $\|\cdot\|_Q$. Our method is also applicable to non-convex losses, as it does not rely on convexity assumptions to ensure DP guarantees. We further show that our algorithm has a natural "noise stability" property: If in a bounded region around the current iterate, the public loss satisfies $\alpha_v$-strong convexity in a direction $v$, then using noisy gradients instead of the exact gradients shifts our next iterate in the direction $v$ by an amount proportional to $1/\alpha_v$ (in contrast with DP stochastic gradient descent (DP-SGD)), where the shift is isotropic). Analogous results in prior works had to explicitly learn the geometry using the public data in the form of preconditioner matrices.
We demonstrate the empirical efficacy of our algorithm by showing privacy/utility trade-offs on linear regression, and deep learning benchmark datasets (CIFAR-10, EMNIST, and WikiText-2). We show that our algorithm not only significantly improves over traditional DP-SGD, which does not have access to public data, but also improves over DP-SGD on models that have been pretrained with the public data to begin with. | Reject | This paper proposes a new algorithm for private ERM, when given access to public data, with a dimension-independent risk guarantee if (A) the public and private datasets are of the same distribution, (B) public dataset size exceeds the dimensionality (or, rather, the squared Gaussian width of an appropriate set), and (C) the public and private loss functions share a minimizer (and the gradients at the shared minimizer must satisfy some variance bounds). The algorithm uses the public data as the Bregman mirror map within private mirror descent (where Gaussian noise is added to the gradients), thus implicitly affecting the geometry, as opposed to explicitly learning the geometry as done in earlier works.
One reviewer was very positive, but two hovered around the borderline and expressed some reservations about the theory and experiments. Regarding the experiments, they did not compare to the ICML'21 paper by Asi et al --- however the authors of that paper have (surprisingly) still not released their code, so I think this is forgivable. Since the paper was on the borderline, I read it myself, with a focus on the theoretical aspects. I find myself agreeing with the second reviewer that the assumptions are strong, and their justification is weak and unrealistic.
Regardless of whether the paper, is accepted or not, I strongly recommend the authors to add condition (C) to their abstract (just the part about the shared minimizer) --- currently the abstract mentions two of the above but not the critical third one. I think (A) is already a strong assumption --- their justification that some users opt-in to reveal their data does not justify this, because the opt-in will not be random (if the opt-in depends on covariates like gender/age/..., the datasets will not be identically distributed). On top of that, (C) is also a strong assumption --- indeed usually the loss functions would be different (for eg, the private one would be clipped, and clipping will rarely preserve the population minimizer, as well as regularized) --- their justification that for a linear model with symmetric noise, clipping does not change the minimizer may be true (though not proved), but we would never expect the linear model to be true in practice even if we employ it as a working model. Last, assumption (B) restricts its use in many common high-dimensional data problems. Overall, I am pressed into a corner to find situations in which all three assumptions would be true.
Nevertheless, supposing that these assumptions hold, the algorithm is indeed clean, and the empirics appear reasonable. Overall, the paper remains on the borderline. Whether accepted or rejected, I expect the authors to do a much better job of carefully justifying their assumptions, with realistic and not far-fetched examples (as suggested by the second reviewer). | train | [
"UIwS-wmVh4J",
"YhS2Ado4mlt",
"W6zxc8HqFaM",
"9FjvgTmZ0k3",
"Wzs4UanAIM",
"boe-Y4hMcTA",
"hbgemPfWlLi",
"kvvs-766Zuh",
"FcYECTKz4WT",
"Uu6ixjrT3Hk",
"buEz9MpzsSP",
"v21UYZixJga",
"CR0kveSBfsS"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"public"
] | [
"In this paper, the authors study differentially private empirical risk minimization (DP-ERM). Specifically, they study the case where the constraint set $\\mathcal{C}$ has additional geometric structure, i.e., its Gaussian width could much lower than the underlying dimension $p$, such as the $\\ell_1$-norm ball. T... | [
5,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
-1,
-1
] | [
5,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1
] | [
"iclr_2022_sXNVFBc-0aP",
"9FjvgTmZ0k3",
"iclr_2022_sXNVFBc-0aP",
"Wzs4UanAIM",
"boe-Y4hMcTA",
"FcYECTKz4WT",
"W6zxc8HqFaM",
"UIwS-wmVh4J",
"hbgemPfWlLi",
"buEz9MpzsSP",
"iclr_2022_sXNVFBc-0aP",
"CR0kveSBfsS",
"buEz9MpzsSP"
] |
iclr_2022_h-z_zqT2yJU | Reducing the Teacher-Student Gap via Adaptive Temperatures | Knowledge distillation aims to obtain a small and effective deep model (student) by learning the output from a larger model (teacher). Previous studies found a severe degradation problem: student performance would degrade unexpectedly when distilled from oversized teachers. It is well known that larger models tend to have sharper outputs. Based on this observation, we found that the sharpness gap between the teacher and student output may cause this degradation problem. To solve this problem, we first propose a metric to quantify the sharpness of the model output. Based on the second-order Taylor expansion of this metric, we propose Adaptive Temperature Knowledge Distillation (ATKD), which automatically changes the temperature of the teacher and the student, to reduce the sharpness gap. We conducted extensive experiments on CIFAR100 and ImageNet and achieved significant improvements. Specifically, ATKD trained the best ResNet18 model on ImageNet known to us (73.0% accuracy). | Reject | The authors study the degradation problem observed in KD for large teacher networks and propose to address it by quantifying and adapting to a *sharpness gap* between the student and the teacher. The reviewers generally appreciated the proposed approach in handling larger teachers and found it effective within the scope of the numerical results provided in the paper. That said, the reviewers raised several critical issues concerning the writing and the presentation of several crucial parts of the paper, in particular those related to the sharpness measure and the proposed training method ATKD. Thus, given this, and the exchanges between the reviewers and the authors, in its present form, the paper cannot be recommended for acceptance. The authors are encouraged to incorporate the valuable feedback provided by the knowledgeable reviewers. | test | [
"-FLNfm3Pd8q",
"k81OtzSeda1",
"NGDj4z7sw3L",
"6ljgMDN6mP7",
"pW5LAtpB69k",
"EWDbinMwY_0",
"Tl-azzXcLZ",
"3pil3hwK-Cb",
"W-hr0Wxod_Q",
"puAQDDB09C",
"zyx0P7vIvWl",
"gfqe3j1KnOO",
"SR51kPR15uP",
"4HSl9sbdl0r",
"WexBsnSlySA"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" One of our main contributions in this work is that we found a smaller sharpness gap could lead to better student performance while keeping other factors the same. Table 8 does not show significant results because the student performance is also affected by other factors (i.e., the noise contained in the teacher l... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"NGDj4z7sw3L",
"iclr_2022_h-z_zqT2yJU",
"3pil3hwK-Cb",
"Tl-azzXcLZ",
"W-hr0Wxod_Q",
"zyx0P7vIvWl",
"zyx0P7vIvWl",
"WexBsnSlySA",
"k81OtzSeda1",
"4HSl9sbdl0r",
"SR51kPR15uP",
"iclr_2022_h-z_zqT2yJU",
"iclr_2022_h-z_zqT2yJU",
"iclr_2022_h-z_zqT2yJU",
"iclr_2022_h-z_zqT2yJU"
] |
iclr_2022_u2JeVfXIQa | Adaptive Cross-Layer Attention for Image Restoration | Non-local attention module has been proven to be crucial for image restoration. Conventional non-local attention processes features of each layer separately, so it risks missing correlation between features among different layers. To address this problem, we propose Cross-Layer Attention (CLA) module in this paper. Instead of finding correlated key pixels within the same layer, each query pixel is allowed to attend to key pixels at previous layers of the network. In order to mitigate the expensive computational cost of such hierarchical attention design, only a small fixed number of keys can be selected for each query from a previous layer. We further propose a variant of CLA termed Adaptive Cross-Layer Attention (ACLA). In ACLA, the number of keys to be aggregated for each query is dynamically selected. A neural architecture search method is used to find the insert positions of ACLA modules to render a compact neural network with compelling performance. Extensive experiments on image restoration tasks including single image super-resolution, image denoising, image demosaicing, and image compression artifacts reduction validate the effectiveness and efficiency of ACLA. | Reject | The paper introduces a cross-layer attention mechanism for image restoration. To reduce the computational complexity, the framework uses deformable convolutions and an adaptive selection for reducing the number of keys, as well as a neural architecture search. The paper received three borderline reject recommendations and a clear accept. After reading the reviews, responses, and the paper in detail, the area chair agrees with Reviewer 6N93 that the paper has some merit. Unfortunately, he/she also agrees with the fact that the proposed framework is quite complicated with many components for a marginal improvement (something that also Reviewer 6N93 has mentioned in the discussion between reviewers). 
Overall, this points towards rejection, which is the final recommendation of the area chair.
Another point that would be helpful, in case this paper is resubmitted elsewhere, is to release the code for the method, given its complexity. | train | [
"Y3ytzQl_nkX",
"VC5x6sb35y",
"c9JpC_faDdX",
"Y9QCGzbntgP",
"Ip23rggsTzf",
"D8bGIbLjr4J",
"FV0euyt2YLz",
"v5B1jIx1pHf",
"iZ_KJqWcGY6",
"h2bPEmrAb8n",
"A9JWCOQRavY"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This work presents cross-layer attention (CLA) modules to find informative keys across different CNN layers for each query feature. Furthermore, an adaptive cross-layer attention (ACLA) is also formulated to dynamically select keys from different CNN layers by using a NAS method. After that, the authors embedded t... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
5,
5
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
5
] | [
"iclr_2022_u2JeVfXIQa",
"iclr_2022_u2JeVfXIQa",
"D8bGIbLjr4J",
"Ip23rggsTzf",
"A9JWCOQRavY",
"FV0euyt2YLz",
"h2bPEmrAb8n",
"iZ_KJqWcGY6",
"iclr_2022_u2JeVfXIQa",
"iclr_2022_u2JeVfXIQa",
"iclr_2022_u2JeVfXIQa"
] |
iclr_2022_XIZaWGCPl0b | Tesseract: Gradient Flip Score to Secure Federated Learning against Model Poisoning Attacks | Federated learning—multi-party, distributed learning in a decentralized environment—is vulnerable to model poisoning attacks, even more so than centralized learning approaches. This is because malicious clients can collude and send in carefully tailored model updates to make the global model inaccurate. This motivated the development of Byzantine-resilient federated learning algorithms, such as Krum, Trimmed mean, and FoolsGold. However, a recently developed targeted model poisoning attack showed that all prior defenses can be bypassed. The attack uses the intuition that simply by changing the sign of the gradient updates that the optimizer is computing, for a set of malicious clients, a model can be pushed away from the optima to increase the test error rate. In this work, we develop tesseract—a defense against this directed deviation attack, a state-of-the-art model poisoning attack. TESSERACT is based on a simple intuition that in a federated learning setting, certain patterns of gradient flips are indicative of an attack. This intuition is remarkably stable across different learning algorithms, models, and datasets. TESSERACT assigns reputation scores to the participating clients based on their behavior during the training phase and then takes a weighted contribution of the clients. We show that TESSERACT provides robustness against even an adaptive white-box version of the attack. | Reject | The paper presents a defense against the gradient sign flip attacks on federated learning. The proposed method is novel, technically sound and well evaluated. The crucial issue of the paper is, however, that this defense is specific to gradient-flip attacks. 
The authors show the robustness of their method against white-box attacks adhering to this threat model and claim that "an adaptive white-box attacker with access to all internals of TESSERACT, including dynamically determined threshold parameters, cannot bypass its defense". The latter statement does not seem to be well justified, and following the extensive discussion of the paper, the reviewers were still not convinced that the proposed method is secure by design. The AC therefore feels that the specific arguments of the paper should be revised - or the claim of robustness further substantiated - in order for the paper to be accepted.
Furthermore, as a comment related to ethical consideration, the AC remarks that the paper's acronym, Tesseract, is used by an open source OCR software (https://tesseract-ocr.github.io/) as well as in a recent paper: Pendlebury et al., TESSERACT: Eliminating Experimental Bias in Malware Classification across Space and Time, USENIX Security 2019.
All of the above-mentioned reservations essentially add up to a "major revision" recommendation which, given the decision logic of ICLR, translates into the rejection option. | train | [
"45VdOLyA2ir",
"Wd4baO28trm",
"wpnhJUNOfYX",
"eDE3HuZPc30",
"4EJc_sc1Pho",
"b4KXK132GjR",
"_169rfJfId9",
"UPXsVC35z0C",
"ItP0EwPvc0N",
"MVkx8YjdkWm",
"L61N8WfWO04",
"JWqbkzIa_0V",
"_PiyDIQN5dVv",
"28yLG3mEJGj",
"rMPotCOhCz",
"o_BGusGH4l",
"Gs4XEYdksqH",
"hjGEzOgmOy6",
"f3LyNI4LD8... | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
... | [
" Yes, TESSERACT gets at the root cause of any untargeted model poisoning attack, as any such attack has to perform gradient flips.\n\nUntargeted attacks with the malicious objective of rendering high test error rates in the global model need to constantly push the model away from the optima. This is done so as to ... | [
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
5
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"Wd4baO28trm",
"rMPotCOhCz",
"iclr_2022_XIZaWGCPl0b",
"4EJc_sc1Pho",
"b4KXK132GjR",
"UPXsVC35z0C",
"28yLG3mEJGj",
"MVkx8YjdkWm",
"MVkx8YjdkWm",
"L61N8WfWO04",
"iclr_2022_XIZaWGCPl0b",
"Gs4XEYdksqH",
"YC4MyCfW71W",
"YC4MyCfW71W",
"Gs4XEYdksqH",
"yhXJtXWOeJ",
"o_BGusGH4l",
"AUj0aG20V... |
iclr_2022_PtuQ8bk9xF5 | Learning to Act with Affordance-Aware Multimodal Neural SLAM | Recent years have witnessed an emerging paradigm shift toward embodied artificial intelligence, in which an agent must learn to solve challenging tasks by interacting with its environment. There are several challenges in solving embodied multimodal tasks, including long-horizon planning, vision-and-language grounding, and efficient exploration. We focus on a critical bottleneck, namely the performance of planning and navigation. To tackle this challenge, we propose a Neural SLAM approach that, for the first time, utilizes several modalities for exploration, predicts an affordance-aware semantic map, and plans over it at the same time. This significantly improves exploration efficiency, leads to robust long-horizon planning, and enables effective vision-and-language grounding. With the proposed Affordance-aware Multimodal Neural SLAM (AMSLAM) approach, we obtain more than 40% improvement over prior published work on the ALFRED benchmark and set a new state-of-the-art generalization performance at a success rate of 23.48% on the test unseen scenes. | Reject | This paper presents a SLAM-based approach for the ALFRED benchmark. The presented method, Affordance-aware Multimodal Neural SLAM, has two key advantages over past works: it uses a multimodal exploration strategy and it predicts an affordance-aware semantic map. It also obtains a very large performance improvement over the ALFRED benchmark. The reviewers for this paper were quite impressed by the large improvements obtained by this technique. However, there were two major concerns across the reviews: (1) Are the design choices made in this paper heavily engineered towards ALFRED? (2) Does the work make too many assumptions about the setting (unrealistic assumptions that may not really hold in more realistic environments or the real world)? 
The authors have provided a detailed response and answered many questions posed to them, but the reviewers continue to have concerns about the generalizability of the proposed method. Another point of concern pointed out by a reviewer is whether it is reasonable in a realistic setting to perform exploration with a knowledge of the downstream task. This point has not really been answered satisfactorily by the authors. My takeaway is that the method presented by the authors clearly works on ALFRED. But it contains several design choices that are largely ALFRED specific and in some cases unrealistic. This provides fewer benefits to readers looking for more general insights that can be valuable across a suite of tasks. As a result of this, and in spite of the large gains, I recommend rejecting this paper. | train | [
"X8WWIbBp4v",
"Tq6L2NPDwhu",
"YsCDXFGF7I2",
"pru-tRofu8v",
"hOdTuCUhWam",
"lSH4aUhAi5e",
"zrPluCr9Xx5",
"f3JWofVcnTG",
"099LIraOPpL",
"usPN-RzIGB5"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" I first want to thank the authors for the replies. I have also read the other reviews and the paper revision.\n\nThe authors have addressed some of my concerns:\n - the comparison fairness regarding the backbones;\n - adding qualitative results and analysis;\n - clarifying that the exploration steps are inc... | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"zrPluCr9Xx5",
"iclr_2022_PtuQ8bk9xF5",
"f3JWofVcnTG",
"iclr_2022_PtuQ8bk9xF5",
"099LIraOPpL",
"hOdTuCUhWam",
"usPN-RzIGB5",
"Tq6L2NPDwhu",
"iclr_2022_PtuQ8bk9xF5",
"iclr_2022_PtuQ8bk9xF5"
] |
iclr_2022_mF122BuAnnW | Localized Randomized Smoothing for Collective Robustness Certification | Models for image segmentation, node classification and many other tasks map a single input to multiple labels. By perturbing this single shared input (e.g. the image) an adversary can manipulate several predictions (e.g. misclassify several pixels). A recent collective robustness certificate provides strong guarantees on the number of predictions that are simultaneously robust. This method is, however, limited to strictly local models, where each prediction is associated with a small receptive field. We propose a more general collective certificate for the larger class of softly local models, where each output is dependent on the entire input but assigns different levels of importance to different input regions (e.g. based on their proximity in the image). The certificate is based on our novel localized randomized smoothing approach, where the random perturbation strength for different input regions is proportional to their importance for the outputs. The resulting locally smoothed model yields strong collective guarantees while maintaining high prediction quality on both image segmentation and node classification tasks. | Reject | The authors develop a framework for improving robustness certificates obtained by randomly smoothed classifiers in settings with multiple outputs (segmentation or node classification), by combining local robustness certificates obtained for individual classifiers. They validate their results empirically and demonstrate gains from their approach.
The reviewers were mostly in agreement that the authors make a novel and interesting contribution. However, there were a lot of technical concerns raised by reviewers that, while addressed during the discussion phase, would require a substantial revision of the paper to address adequately. Overall, I feel the paper is borderline but recommend rejection and encourage the authors to incorporate feedback from the reviewers and submit to a future venue. | train | [
"V4PL_-IYB8",
"7lWRFLbCKPy",
"busFaB6vyj0",
"b9zLswjsGYT",
"Ir4G0c77EiZ",
"H6kYvt9-fuP",
"0EbvJx7E70a",
"9Gt10ljy2gn",
"Wqc7ZHU4q_u",
"cbi5PwkCRkP",
"uSHDVGnXif",
"Qy0ApZK5Xx",
"i4SedfFyDVC",
"X4ZzAEDJIGr",
"XXxAFh4_SN",
"Pnbw_8QEtF",
"OOUQCmBp_gt",
"2UF4Ku_pu2L",
"yytpiqRIHtJ",
... | [
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author... | [
" We are happy to show an additional preliminary experimental result on the graph dataset Citeseer [1]. Since the reviewers Fv4q, y7cf and z9Ri asked for more datasets, we have started to conduct experiments on other graph datasets and also plan to apply our method to other semantic segmentation datasets and models... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
8,
3,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
5,
4
] | [
"41FGRK_xqW",
"umtqWSuoae",
"ZZj5cnLMe7l",
"iclr_2022_mF122BuAnnW",
"lMy28o_NWxR",
"0EbvJx7E70a",
"9Gt10ljy2gn",
"Qy0ApZK5Xx",
"i4SedfFyDVC",
"b9zLswjsGYT",
"i4SedfFyDVC",
"i4SedfFyDVC",
"XXxAFh4_SN",
"i4SedfFyDVC",
"DJ75ikBCfr",
"tYufZgA7Vel",
"tYufZgA7Vel",
"umtqWSuoae",
"tYufZ... |
iclr_2022_VQhFC3Ki5C | DEEP GRAPH TREE NETWORKS | We propose Graph Tree Networks (GTree), a self-interpretive deep graph neural network architecture which originates from the tree representation of graphs. In the tree representation, each node forms its own tree where the node itself is the root node and all its neighbors up to hop-k are the subnodes. Under the tree representation, the message propagates upward from the leaf nodes to the root node naturally and straightforwardly to update the root node's hidden features. This message passing (or neighborhood aggregation) scheme is essentially different from that in the vanilla GCN, GAT and many of their derivatives, and is experimentally demonstrated to be a superior message passing scheme. Models adopting this scheme have the capability of going deep. Two scalable graph learning models are proposed within this GTree network architecture - Graph Tree Convolution Network (GTCN) and Graph Tree Attention Network (GTAN), with demonstrated state-of-the-art performance on several benchmark datasets. The deep capability is also demonstrated for both models. | Reject | While I understand and have empathy with the authors' viewpoint of their work and novelty, this unfortunately has not reached reviewers' hearts in the way that they intended. There has been no strong support for acceptance, as the questions about the amount of novelty piled up. Some interactions between authors and reviewers happened. It is clear that the authors made an effort to show the differences with respect to other tree-related models and algorithms, as well as to highlight the strengths of the approach, which was claimed to be too similar and simplistic. I believe the interactions helped with improving the first viewpoint of reviewers; however, this improvement has not been enough, as reviewers did not significantly change their stances. This is a short process and indeed it is not easy to change first impressions. 
If anything to add is that I hope that the impressions can be used to give a new presentation to the work that will enhance the work and its view by others. | train | [
"ibHooOWfupX",
"JBU2H6J0bMD",
"b6Nf5r7ce9",
"goyk4wbphQP",
"DCQk4YjdcG-",
"jdB2eKADk13",
"C3rVrrf_WN2",
"zApU_6sXiWk",
"pjlc6Q95lrj",
"qYDtc2i9IQ4",
"2-ghocvj3AX",
"tKPYtUL5Lbl",
"X0aJNb4AAR0",
"cBXa6UL-Km-"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" 3. In terms of contribution, we propose a general message passing rule in our GTreeNet which may open new research opportunities by exploring combinations of different aggregation functions and transformations. Our GTCN and GTAN models achieve state-of-the-art performance on several popular benchmark datasets. Th... | [
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
1,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"JBU2H6J0bMD",
"DCQk4YjdcG-",
"C3rVrrf_WN2",
"iclr_2022_VQhFC3Ki5C",
"2-ghocvj3AX",
"iclr_2022_VQhFC3Ki5C",
"qYDtc2i9IQ4",
"X0aJNb4AAR0",
"cBXa6UL-Km-",
"jdB2eKADk13",
"tKPYtUL5Lbl",
"iclr_2022_VQhFC3Ki5C",
"iclr_2022_VQhFC3Ki5C",
"iclr_2022_VQhFC3Ki5C"
] |
iclr_2022___ObYt4753c | A Simple Approach to Adversarial Robustness in Few-shot Image Classification | Few-shot image classification, where the goal is to generalize to tasks with limited labeled data, has seen great progress over the years. However, the classifiers are vulnerable to adversarial examples, posing a question regarding their generalization capabilities. Recent works have tried to combine meta-learning approaches with adversarial training to improve the robustness of few-shot classifiers. We show that a simple transfer-learning based approach can be used to train adversarially robust few-shot classifiers. We also present a method for the novel classification task based on calibrating the centroid of the few-shot category towards the base classes. We show that standard adversarial training on base categories, along with a centroid-based classifier in the novel categories, outperforms or is on par with state-of-the-art advanced methods on standard benchmarks such as Mini-ImageNet, CIFAR-FS and CUB datasets. Our method is simple and easy to scale, and with little effort can lead to robust few-shot classifiers. | Reject | This paper finally received divergent and borderline reviews, with one positive (6) and two negative (5) ratings. Based on the reviews, authors' responses and updated manuscript, we have decided to reject this work at this time, even though this submission has a lot of potential, such as simplicity and efficiency.
Positively, all the reviews agree that the proposed approach is simple but effective to improve the robustness of few-shot classifiers. However, there is some room for improvement to be a stronger submission: (i) the technical novelty may need to be better presented, and (ii) the improved performance may need to be better justified (e.g., the effect of the pretrained stage). | train | [
"GgauyN-KtZo",
"_vWmh3E-K4l",
"sJAg4r6Mjd5",
"PH7aLAjQpI",
"K7L00ohBRJ",
"hrHbG8sqVM",
"US4Xi91g2-O",
"oooveyMbHQj",
"bhEq-QE6IHW"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewers once again for their detailed feedback. Please let us know in case there are any questions/comments. Since this is the last day of the discussion period, we are hoping to receive a response from reviewers soon. ",
"This paper aims to address the problem of adversarial attack for low shot ... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
2
] | [
"iclr_2022___ObYt4753c",
"iclr_2022___ObYt4753c",
"iclr_2022___ObYt4753c",
"iclr_2022___ObYt4753c",
"oooveyMbHQj",
"bhEq-QE6IHW",
"_vWmh3E-K4l",
"iclr_2022___ObYt4753c",
"iclr_2022___ObYt4753c"
] |
iclr_2022_youe3QQepVB | Generative Modeling for Multitask Visual Learning | Generative modeling has recently shown great promise in computer vision, but it has mostly focused on synthesizing visually realistic images. In this paper, motivated by multi-task learning of shareable feature representations, we consider a novel problem of learning a shared generative model that is useful across various visual perception tasks. Correspondingly, we propose a general multi-task oriented generative modeling (MGM) framework, by coupling a discriminative multi-task network with a generative network. While it is challenging to synthesize both RGB images and pixel-level annotations in multi-task scenarios, our framework enables us to use synthesized images paired with only weak annotations (i.e., image-level scene labels) to facilitate multiple visual tasks. Experimental evaluation on challenging multi-task benchmarks, including NYUv2 and Taskonomy, demonstrates that our MGM framework improves the performance of all the tasks by large margins, especially in the low-data regimes, and our model consistently outperforms state-of-the-art multi-task approaches. | Reject | The paper received borderline reviews. While the reviewers acknowledged good motivation, good number of experiments and good numeral results that demonstrated the proposed method outperforms the existing state of the art, there are shared concerns: the experimental setup is not really a "low data" regime, generative models jointly trained with the multi-task model only led to marginal improvements, and the prediction quality is quite low for all methods. In addition, it's unclear why the images generated by MGM have a lot of artifacts, and how the artifacts affect the performance. Overall, the reviewers were not convinced after the rebuttal. | train | [
"aC5RffNW4v",
"clVTN8s8Y68",
"YCSTwvpjik5",
"Sol6qg8ykL",
"H8wkNUCpAfj",
"mT52Nugzgsq",
"dvFsnI5KoPh",
"PY8rTK5K03O",
"HkDpFh9RbD5",
"9-0Z-SzqPbW",
"ueU_sjuOOki",
"NvUxHjxtLp"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" > I found fig 6 still have border artifacts, and with rgb images as background overlay. I don't think the authors addressed this issue properly\n \nWe greatly appreciate the reviewer’s suggestion on an alternative way of visualization. We seriously considered the reviewer’s suggestion and improved our visualizati... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"clVTN8s8Y68",
"H8wkNUCpAfj",
"NvUxHjxtLp",
"YCSTwvpjik5",
"Sol6qg8ykL",
"9-0Z-SzqPbW",
"iclr_2022_youe3QQepVB",
"HkDpFh9RbD5",
"ueU_sjuOOki",
"iclr_2022_youe3QQepVB",
"iclr_2022_youe3QQepVB",
"iclr_2022_youe3QQepVB"
] |
iclr_2022_3mgYqlH60Uj | Learning Symmetric Locomotion using Cumulative Fatigue for Reinforcement Learning | Modern deep reinforcement learning (DRL) methods allow simulated characters to learn complex skills such as locomotion from scratch. However, without further exploitation of domain-specific knowledge, such as motion capture data, finite state machines or morphological specifications, physics-based locomotion generation with DRL often results in unrealistic motions. One explanation for this is that present RL models do not estimate biomechanical effort; instead, they minimize instantaneous squared joint actuation torques as a proxy for the actual subjective cost of actions. To mitigate this discrepancy in a computationally efficient manner, we propose a method for mapping actuation torques to subjective effort without simulating muscles and their energy expenditure. Our approach is based on the Three Compartment Controller model, in which the relationships of variables such as maximum voluntary joint torques, recovery, and cumulative fatigue are present. We extend this method for sustained symmetric locomotion tasks for deep reinforcement learning using a Normalized Cumulative Fatigue (NCF) model.
In summary, in this paper we present the first RL model to use biomechanical cumulative effort for full-body movement generation without the use of any finite state machines, morphological specification or motion capture data. Our results show that the learned policies are more symmetric, periodic and robust compared to methods found in previous literature. | Reject | This paper investigated using a bio-inspired cumulative fatigue model to improve bipedal locomotion via deep RL. The proposed method marginally improved bipedal locomotion behavior. The size of the experiments should be improved, together with generalization to other symmetric walkers. The AC agrees with the reviewers that the empirical performance is not significant enough. The paper may fit the scope of a bipedal locomotion journal/conference better than ICLR. | train | [
"Wl08ZCtyToy",
"-bEweNArnFN",
"eiap5KTpND",
"rPQsv67lcl6",
"z6gxuKYcwjs",
"txCUrxiijB",
"izD6C0pxgQi",
"97NEFownGtm",
"H0CSKkKqNW",
"o9vw4WbNVM9",
"ZVi1RuWvTDC",
"GjJVjbN0_qb",
"H8XKPzsQLRr",
"sf9Pk6At_iS",
"9Lx7fdEm_RW",
"LaZhXD5YuSB",
"8WNXyXMtBvj"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes to use a normalized cumulative fatigue (NCF)-based reward to learn symmetric locomotion with Deep RL. The motivation is that most prior work on locomotion synthesis does not estimate cumulative biomechanical effort and only minimizes instantaneous joint torques. The paper derives the NCF reward ... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
3
] | [
"iclr_2022_3mgYqlH60Uj",
"GjJVjbN0_qb",
"rPQsv67lcl6",
"o9vw4WbNVM9",
"8WNXyXMtBvj",
"Wl08ZCtyToy",
"LaZhXD5YuSB",
"9Lx7fdEm_RW",
"sf9Pk6At_iS",
"LaZhXD5YuSB",
"8WNXyXMtBvj",
"Wl08ZCtyToy",
"9Lx7fdEm_RW",
"iclr_2022_3mgYqlH60Uj",
"iclr_2022_3mgYqlH60Uj",
"iclr_2022_3mgYqlH60Uj",
"icl... |
iclr_2022_5ALGcXpmFyC | Training Data Size Induced Double Descent For Denoising Neural Networks and the Role of Training Noise Level | When training a denoising neural network, we show that more data isn't more beneficial. In fact, the generalization error versus the number of training data points is a double descent curve.
Training a network to denoise noisy inputs is the most widely used technique for pre-training deep neural networks. Hence one important question is the effect of scaling the number of training data points. We formalize the question of how many data points should be used by looking at the generalization error for denoising noisy test data. Prior work on computing the generalization error focuses on adding noise to target outputs. However, adding noise to the input is more in line with current pre-training practices. In the linear (in the inputs) regime, we provide an asymptotically exact formula for the generalization error for rank 1 data and an approximation for the generalization error for rank r data. We show, using our formulas, that the generalization error versus the number of data points follows a double descent curve. From this, we derive a formula for the amount of noise that needs to be added to the training data to minimize the denoising error and see that this follows a double descent curve as well. | Reject | This paper proposes a theory for double descent phenomena in denoising deep neural networks. There are two major concerns: (1) The assumption that the data lie in a low-dimensional subspace is quite strong, and needs to be weakened or better justified. (2) The theory only works for r=1, where the rank is one. For general rank, how to apply the proposed analysis is hand-wavy and not convincing. The paper can be significantly strengthened if these two issues could be addressed. | train | [
"eTPDX3pB6hL",
"jlxphmDdYkY",
"HCrltmYBLhz",
"tsBealYIKVA",
"vhhZII_dPVV",
"8NOSdtV_BoA",
"ptRUJ7J0Viu",
"WVkPQfkJ6kv",
"jtZTIIDNRF",
"rD5Gv9P-7Zi",
"N6obxJ87Whe",
"qOmwhrZPCH"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the follow up and for discussing our work with us. \n\n*\"First, regarding the data assumption.. (and underlies all of ML).\"*\n\nIn relation to this point, our formula allows for the singular values of be changed as well. However, you are right, for most data, we would need some model on how the si... | [
-1,
-1,
3,
-1,
6,
-1,
-1,
-1,
-1,
-1,
5,
6
] | [
-1,
-1,
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
2,
3
] | [
"jlxphmDdYkY",
"jtZTIIDNRF",
"iclr_2022_5ALGcXpmFyC",
"ptRUJ7J0Viu",
"iclr_2022_5ALGcXpmFyC",
"WVkPQfkJ6kv",
"qOmwhrZPCH",
"vhhZII_dPVV",
"HCrltmYBLhz",
"N6obxJ87Whe",
"iclr_2022_5ALGcXpmFyC",
"iclr_2022_5ALGcXpmFyC"
] |
iclr_2022_cMBKc-0OTY5 | Kalman Filter Is All You Need: Optimization Works When Noise Estimation Fails | Determining the noise parameters of a Kalman Filter (KF) has been studied for decades. A huge body of research focuses on the task of noise estimation under various conditions, since precise noise estimation is considered equivalent to minimization of the filtering errors. However, we show that even a small violation of the KF assumptions can significantly modify the effective noise, breaking the equivalence between the tasks and making noise estimation an inferior strategy. We show that such violations are common, and are often not trivial to handle or even notice. Consequently, we argue that a robust solution is needed - rather than choosing a dedicated model per problem.
To that end, we apply gradient-based optimization to the filtering errors directly, with relation to an efficient parameterization of the symmetric and positive-definite parameters of the KF. In a variety of state-estimation and tracking problems, we show that the optimization improves both the accuracy of the KF and its robustness to design decisions.
In addition, we demonstrate how an optimized neural network model can seem to reduce the errors significantly compared to a KF - and how this reduction vanishes once the KF is optimized similarly. This indicates how complicated models can be wrongly identified as superior to the KF, while in fact they were merely more optimized. | Reject | This paper studies the problem of estimating the trajectory of a linear dynamical system when the covariances for the process and observation noise are unknown. The standard solution is to estimate these covariances from data, and this paper instead suggests an optimization procedure. They show promising experimental results. However there are two shortcomings: In terms of theoretical guarantees, they can only show convergence to a local optimum. Moreover they assume they have access to the ground-truth hidden states. Although this is an assumption that has appeared in earlier works, it seems to limit the applicability. | train | [
"bGqedhjuwvv",
"jivgG3j2cXe",
"Z6IMuMTAcU5",
"LY7VgyZYb0K",
"c6ylW3X3Mza",
"NmMrcs5myzG",
"0pObvr-HV7s",
"nbXyy7ueNZp",
"1KC7GyH6ocx",
"ll0b3tZfOP3",
"Ha02MUsb1OM",
"rCg2w3aSiKW"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" 1. We do exactly that. And this is indeed the sensor noise in the physical sense - yet not the optimal parameter for state prediction!\n\nWhy? As explained at the end of Section 4.2, in inference mode (testing) we must replace the unknown H(x) (e.g. by the approximation H(z)). This inserts noise to the *processin... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
4,
3
] | [
"jivgG3j2cXe",
"Z6IMuMTAcU5",
"LY7VgyZYb0K",
"nbXyy7ueNZp",
"rCg2w3aSiKW",
"Ha02MUsb1OM",
"ll0b3tZfOP3",
"1KC7GyH6ocx",
"iclr_2022_cMBKc-0OTY5",
"iclr_2022_cMBKc-0OTY5",
"iclr_2022_cMBKc-0OTY5",
"iclr_2022_cMBKc-0OTY5"
] |
iclr_2022_eDjxhFbaWX | HODA: Protecting DNNs Against Model Extraction Attacks via Hardness of Samples | Model extraction attacks exploit the target model's prediction API to create a surrogate model in order to steal or reconnoiter the functionality of the target model in the black-box setting. Several recent studies have shown that a data-limited adversary who has no or limited access to the samples from the target model's training data distribution can use synthetic or semantically similar samples to conduct model extraction attacks. As the training process of DNN-based classifiers is done in several epochs, we can consider this process as a sequence of subclassifiers so that each subclassifier is created at the end of an epoch. We use the sequence of subclassifiers to calculate the hardness degree of samples. In this paper, we investigate the hardness degree of samples and demonstrate that the hardness degree histogram of a data-limited adversary's sample sequences is distinguishable from the hardness degree histogram of benign users' sample sequences, which consist of normal samples. Normal samples come from the target classifier's training data distribution. We propose the Hardness-Oriented Detection Approach (HODA) to detect the sample sequences of model extraction attacks. The results demonstrate that HODA can detect the sample sequences of model extraction attacks with a high success rate by watching only 100 samples of them. | Reject | The paper presents a new method for the detection of model extraction attacks. It is based on the intuition that typical model extraction attacks involve samples submitted by users that are harder to classify than "benign" samples submitted by users. By introducing the notion of hardness, a metric is developed for identifying malicious users submitting their samples for the purpose of model extraction. While the proposed method is original, it incurs a substantial overhead. 
Experimental evaluation of the proposed method also has several deficiencies, in particular, in the assessment of its overhead as well as in modeling of benign users. | train | [
"6mJ35xBd1l_",
"pjd99k8dA8",
"PwA-c9ep1a4",
"n1zRhSMv1D",
"TLFCPgEaFk",
"ljbDuO736i8",
"i8vORP9nVxb",
"xBfsMsD4cpf",
"_hki1usw7ml",
"fOYoihWbch",
"H8AnJpLh5K0",
"kp4FMKXiElF",
"lcYYpInoUf6",
"xrRZcuj-c2X",
"Jdks3SnJ5yf",
"HZlYBK4fFd",
"hCBcZhK73I_",
"fqJPH2ul4k",
"KrTRlUzkC7H",
... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer UQxC,\n\nWe truly appreciate your reply, and we are glad that our responses were helpful. Considering your very positive reply, we would really appreciate it if you could please consider raising your score.\n\nThank you.",
" Dear reviewer qQMY,\n\nConsidering your positive reply, we would really a... | [
-1,
-1,
-1,
1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
6
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"PwA-c9ep1a4",
"xBfsMsD4cpf",
"xrRZcuj-c2X",
"iclr_2022_eDjxhFbaWX",
"xBfsMsD4cpf",
"KrTRlUzkC7H",
"n1zRhSMv1D",
"kp4FMKXiElF",
"iclr_2022_eDjxhFbaWX",
"fqJPH2ul4k",
"Jdks3SnJ5yf",
"fOYoihWbch",
"KrTRlUzkC7H",
"H8AnJpLh5K0",
"n1zRhSMv1D",
"lcYYpInoUf6",
"KU5J3R0FssI",
"iclr_2022_eD... |
iclr_2022_CTOJRqLMsl | On the Convergence of Nonconvex Continual Learning with Adaptive Learning Rate | One of the objectives of continual learning is to prevent catastrophic forgetting in learning multiple tasks sequentially.
The memory based continual learning stores a small subset of the data for previous tasks and applies various methods such as quadratic programming and sample selection.
Some memory-based approaches are formulated as a constrained optimization problem and rephrase constraints on the objective for memory as the inequalities on gradients.
However, there have been few theoretical results on the convergence of continual learning.
In this paper, we propose a theoretical convergence analysis of memory-based continual learning with stochastic gradient descent.
The proposed method called nonconvex continual learning (NCCL) adapts the learning rates of both previous and current tasks with the gradients.
The proposed method can achieve the same convergence rate as the SGD method for a single task when the catastrophic forgetting term which we define in the paper is suppressed at each iteration.
It is also shown that memory-based approaches inherently overfit to memory, which degrades the performance on previously learned tasks. Experiments show that the proposed algorithm improves the performance of continual learning over existing methods for several image classification tasks. | Reject | This paper theoretically studies the convergence of memory-based continual learning with stochastic gradient descent, and suggested several methods based on adaptive learning rates.
The reviewers appreciated the novelty of the direction, and some of them thought the experimental results are promising.
However, most reviewers (3/4) were negative. I think the main reason was the paper presentation and clarity, which they found lacking (and I agree). One reviewer thought the experimental evaluation should be improved, but there might have been some misunderstanding there. Lastly, even the positive reviewer thought the results were somewhat incremental and non-surprising.
I hope the authors improve their paper and re-submit. | test | [
"uClNQckKRJe",
"kmVwW1JgI0e",
"MigYXJriENU",
"W1EiTpVnT3s",
"A8qyPbg_SPp",
"CJTmEAFAgiR",
"asHNrAhYgHe",
"pWlXpOKIW9q",
"_AM0yRX0eEz",
"ydCad8vzSBs",
"Tm8dUJlNvUp",
"euyiXVotnd3",
"KMDHDcrnAfT"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you again for your comments.\n\nWe addressed the importance of forgetting metric, which is one of the valuable metric to evaluate the performance of continual learning in the above official comments for all reviewers and chairs.\nWe know that our theory based schme, NCCL suffers from learning new tasks slig... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
-1,
-1,
-1,
5,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
-1,
-1,
-1,
3,
3
] | [
"ydCad8vzSBs",
"A8qyPbg_SPp",
"W1EiTpVnT3s",
"iclr_2022_CTOJRqLMsl",
"CJTmEAFAgiR",
"euyiXVotnd3",
"iclr_2022_CTOJRqLMsl",
"iclr_2022_CTOJRqLMsl",
"asHNrAhYgHe",
"pWlXpOKIW9q",
"KMDHDcrnAfT",
"iclr_2022_CTOJRqLMsl",
"iclr_2022_CTOJRqLMsl"
] |
iclr_2022_wmQCFqV9r8L | SpaceMAP: Visualizing Any Data in 2-dimension by Space Expansion | Dimensionality reduction (DR) and visualization of high-dimensional data is of theoretical and practical value in machine learning and related fields. In theory, there exists an intriguing, non-intuitive discrepancy between the geometry of high-dimensional space and low-dimensional space. Based on this discrepancy, we propose a novel DR and visualization method called Space-based Manifold Approximation and Projection (SpaceMAP). Our method establishes a quantitative space transformation to address the ``crowding problem'' in DR; with the proposed equivalent extended distance (EED) and function distortion (FD) theory, we are able to match the capacity of high-dimensional and low-dimensional space, in a principled manner. To handle complex high-dimensional data with different manifold properties, SpaceMAP makes distinctions between the near field, middle field, and far field of data distribution in a data-specific, hierarchical manner. We evaluated SpaceMAP on a range of artificial and real datasets with different manifold properties, and demonstrated its excellent performance in comparison with classical and state-of-the-art DR methods. In addition, the concept of space distortion provides a generic framework for understanding nonlinear DR methods such as t-distributed Stochastic Neighbor Embedding (tSNE) and Uniform Manifold Approximation and Projection (UMAP). | Reject | Reviewers overall found that the paper contains novel and intriguing ideas worth further investigation. 
There is, however, a consensus that the paper is not ready to be published yet, for several reasons detailed in the reviews pertaining to 1) the fact that several statements should be better supported theoretically or empirically, 2) the technical derivation of the method where several choices made by the authors are surprising and not justified, and 3) the experimental results that do not clearly support the claims of the manuscript. While the authors have improved the manuscript during the discussion phase, there is still too much work to be done in order to address issues remaining. We hope the reviews will be helpful for authors to consider a revision of the paper for a future submission. | train | [
"ISx9Tw5bKj4",
"lWfXdnnpjLv",
"rjt2RRCYBp1",
"sEKIZkftaQk",
"b1f-b-eDmN",
"kxzEOAhkix",
"dBnor40-W_",
"KvTV4Sw9dMM",
"TTwW7ay5me",
"k7CZ-K98JGh",
"i4tAi0D_2Q",
"rI1TKcSz2sU",
"sw9qv1TTlo",
"Qc9M8frBxl"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for addressing the comments. I think the changes have improved the paper, but I still think the overall verdict from my side is the same. The paper clearly has an important message, but I think it is still too unclear and vague what exactly separates this from the other methodologies such as T-SNE or UM... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
3,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
4,
4
] | [
"kxzEOAhkix",
"TTwW7ay5me",
"b1f-b-eDmN",
"dBnor40-W_",
"Qc9M8frBxl",
"k7CZ-K98JGh",
"sw9qv1TTlo",
"rI1TKcSz2sU",
"i4tAi0D_2Q",
"iclr_2022_wmQCFqV9r8L",
"iclr_2022_wmQCFqV9r8L",
"iclr_2022_wmQCFqV9r8L",
"iclr_2022_wmQCFqV9r8L",
"iclr_2022_wmQCFqV9r8L"
] |
iclr_2022_r4PibJdCyn | TotalRecall: A Bidirectional Candidates Generation Framework for Large Scale Recommender \& Advertising Systems | Recommender (RS) and Advertising/Marketing Systems (AS) play the key roles in E-commerce companies like Amazon and Alibaba. RS needs to generate thousands of item candidates for each user ($u2i$), while AS needs to identify thousands or even millions of high-potential users for given items so that the merchant can advertise these items efficiently with limited budget ($i2u$). This paper proposes an elegant bidirectional candidates generation framework that can serve both purposes all together. Besides, our framework is also superior in these aspects: $i).$ Our framework can easily incorporate many DNN-architectures of RS ($u2i$), and increase the HitRate and Recall by a large margin. $ii).$ We achieve much better results in $i2u$ candidates generation compared to strong baselines. $iii).$ We empirically show that our framework can diversify the generated candidates, and ensure fast convergence to better results. | Reject | Reviewers are in agreement that the paper is below the acceptance threshold. Main concerns focus around novelty, experiments, and justification of the paper's main claims. | train | [
"7MhefJ0uOsi",
"ZKTwJWVIKqt",
"Xgl4DVlZoac",
"q0Ep_oPIUMn",
"0u7o7Uhxdrq",
"ubbKRsnHLOm",
"JazFruW8Bvk",
"aunEbS_rThO",
"ExC2_Do-Itx",
"IDutTYayd5g",
"iWQyyrSctt_",
"pT_8--PF8AW",
"SJZdvqpqI_",
"oybBV8sYd86"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes to address candidate generations of users and items for recommender and advertising systems simultaneously by a single proposed model. The proposed model provides two ideas for existing two-tower recommendation models: it introduces a normalization of the score function and a bidirectional vers... | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"iclr_2022_r4PibJdCyn",
"7MhefJ0uOsi",
"7MhefJ0uOsi",
"7MhefJ0uOsi",
"7MhefJ0uOsi",
"iclr_2022_r4PibJdCyn",
"pT_8--PF8AW",
"oybBV8sYd86",
"7MhefJ0uOsi",
"SJZdvqpqI_",
"iclr_2022_r4PibJdCyn",
"iclr_2022_r4PibJdCyn",
"iclr_2022_r4PibJdCyn",
"iclr_2022_r4PibJdCyn"
] |
iclr_2022_9FfAEgUYGON | Mismatched No More: Joint Model-Policy Optimization for Model-Based RL | Many model-based reinforcement learning (RL) methods follow a similar template: fit a model to previously observed data, and then use data from that model for RL or planning. However, models that achieve better training performance (e.g., lower MSE) are not necessarily better for control: an RL agent may seek out the small fraction of states where an accurate model makes mistakes, or it might act in ways that do not expose the errors of an inaccurate model. As noted in prior work, there is an objective mismatch: models are useful if they yield good policies, but they are trained to maximize their accuracy, rather than the performance of the policies that result from them. In this work we propose a single objective for jointly training the model and the policy, such that updates to either component increases a lower bound on expected return. This joint optimization mends the objective mismatch in prior work. Our objective is a global lower bound on expected return, and this bound becomes tight under certain assumptions. The resulting algorithm (MnM) is conceptually similar to a GAN: a classifier distinguishes between real and fake transitions, the model is updated to produce transitions that look realistic, and the policy is updated to avoid states where the model predictions are unrealistic. | Reject | The reviewers agree that the proposed method of joint model-policy optimization using a lower bound is novel and interesting and worthwhile pursuing. But all reviewers find a variety of issues in the paper, such that ratings are just above borderline or below. Given all the mixed feedback, it appears that the paper is still a bit premature for publication and could greatly benefit from improvements in a future submission. | train | [
"mucAvu76udS",
"d2kKweL7PEQ",
"V7dt-Fz7Khr",
"pGtZtGj409V",
"1HiaW5g9XnL",
"XgsY5q7uin-",
"39387VoFT8Z",
"XUjAeuCVc6L",
"bCIS61XiIZX",
"Ikcm2nVc4aE",
"ubmLpy9kXb",
"wRvJlH78k3r",
"wwCC3ItMgPD",
"b0HEW-nE6nD",
"zb5RAf8SFKs",
"myFPS0LQ7BO",
"GjaNBtZVjXj",
"AYzNlGj_7zQ",
"Ar24yIo1Di... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_... | [
" Dear Reviewer,\n\nThank you for raising a number of questions and concerns in the initial review. Revising the paper to address these concerns has made the paper more precise and rigorous. Have the revisions and responses above addressed all the reviewer's concerns? We would be happy to address any additional que... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
3,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
3
] | [
"myFPS0LQ7BO",
"39387VoFT8Z",
"pGtZtGj409V",
"XUjAeuCVc6L",
"XgsY5q7uin-",
"bCIS61XiIZX",
"ubmLpy9kXb",
"wRvJlH78k3r",
"b0HEW-nE6nD",
"myFPS0LQ7BO",
"Ar24yIo1Di7",
"SVNmfI1ADor",
"AA_eC9RLdjW",
"zb5RAf8SFKs",
"wwCC3ItMgPD",
"GjaNBtZVjXj",
"AYzNlGj_7zQ",
"iclr_2022_9FfAEgUYGON",
"... |
iclr_2022_7VH_ZMpwZXa | No Shifted Augmentations (NSA): strong baselines for self-supervised Anomaly Detection | Unsupervised Anomaly detection (AD) requires building a notion of normalcy, distinguishing in-distribution (ID) and out-of-distribution (OOD) data, using only available ID samples. Recently, large gains were made on this task for the domain of natural images using self-supervised contrastive feature learning as a first step followed by kNN or traditional one-class classifiers for feature scoring.
Learned representations that are non-uniformly distributed on the unit hypersphere have been shown to be beneficial for this task. We go a step further and investigate how the \emph {geometrical compactness} of the ID feature distribution makes isolating and detecting outliers easier, especially in the realistic situation when ID training data is polluted (i.e. ID data contains some OOD data that is used for learning the feature extractor parameters).
We propose novel architectural modifications to the self-supervised feature learning step, that enable such compact ID distributions to be learned. We show that the proposed modifications can be effectively applied to most existing self-supervised learning objectives with large gains in performance. Furthermore, this improved OOD performance is obtained without resorting to tricks such as using strongly augmented ID images (e.g. by 90 degree rotations) as proxies for the unseen OOD data, which imposes overly prescriptive assumptions about ID data and its invariances.
We perform extensive studies on benchmark datasets for one-class OOD detection and show state-of-the-art performance in the presence of pollution in the ID data, and comparable performance otherwise. We also propose and extensively evaluate a novel feature scoring technique based on the angular Mahalanobis distance, and propose a simple and novel technique for feature ensembling during evaluation that enables a big boost in performance at nearly zero run-time cost compared to the standard use of model ensembling or test time augmentations. Code for all models and experiments will be made open-source. | Reject | The paper investigates how the geometrical compactness of in-distribution examples affects OOD detection performance and proposes architectural modifications to enable compact in-distribution embeddings. All the reviewers agreed that the paper has several interesting contributions. I agree with the authors that simplicity is a strength, not a weakness.
My main concern is that the paper's contributions feel a bit scattered. For instance, the paper does a detailed evaluation of normalization and compactness, but makes a few other minor contributions (as detailed by
the authors at https://openreview.net/forum?id=7VH_ZMpwZXa&noteId=m-1y5byLbwS). However, the latter contributions feel a bit narrow to specific methods and are not as comprehensively tested as the claims around normalization.
Overall, the reviewers and I think that the current version falls below the acceptance threshold. I encourage the authors to revise the draft and resubmit to a different venue. | train | [
"bltLG-_ALt_",
"QCN4EiAN2jz",
"9PCDgkoVg6r",
"m-1y5byLbwS",
"qbqT7fn7B9o",
"JOl6xidrDpK",
"Us3v2I1GbNZ",
"IfxZwoCKBDQ",
"Sh-xXce0Opu",
"SMiSWkMm5g9",
"hEQBmgGtfYK",
"hXyjgVzK46",
"-J48vx8f0SZ",
"aW0ypY9aLJT"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" >Thanks authors for their response to resolve several of my concerns. I believe they are good (necessary) additions to the paper \n\nThanks for acknowledging that we have resolved many of your concerns, and for acknowledging the quality of our clarifications.\n\n>While the proposed method improve upon its own bas... | [
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
3
] | [
"QCN4EiAN2jz",
"Sh-xXce0Opu",
"IfxZwoCKBDQ",
"9PCDgkoVg6r",
"iclr_2022_7VH_ZMpwZXa",
"iclr_2022_7VH_ZMpwZXa",
"aW0ypY9aLJT",
"qbqT7fn7B9o",
"-J48vx8f0SZ",
"hXyjgVzK46",
"iclr_2022_7VH_ZMpwZXa",
"iclr_2022_7VH_ZMpwZXa",
"iclr_2022_7VH_ZMpwZXa",
"iclr_2022_7VH_ZMpwZXa"
] |
iclr_2022_89W18gW0-6o | Provably Improved Context-Based Offline Meta-RL with Attention and Contrastive Learning | Meta-learning for offline reinforcement learning (OMRL) is an understudied problem with tremendous potential impact by enabling RL algorithms in many real-world applications. A popular solution to the problem is to infer task identity as augmented state using a context-based encoder, for which efficient learning of robust task representations remains an open challenge. In this work, we provably improve upon one of the SOTA OMRL algorithms, FOCAL, by incorporating intra-task attention mechanism and inter-task contrastive learning objectives, to robustify task representation learning against sparse reward and distribution shift. Theoretical analysis and experiments are presented to demonstrate the superior performance and robustness of our end-to-end and model-free framework compared to prior algorithms across multiple meta-RL benchmarks. | Reject | The paper addresses the problem of offline meta reinforcement learning. The authors build on the FOCAL algorithm, adding intra-task attention and inter-task contrastive representation learning objectives. The resulting FOCAL++ algorithm outperforms several strong baselines, including FOCAL, and a theoretical analysis attempting to show that FOCAL++ provably improves on FOCAL is included.
Reviewers agreed that the novelty of the proposed approach is limited since attention and contrastive representation learning have been used in the closely related (online) meta-RL setting. At the same time reviewers agreed that the results in the paper and the rebuttal show that FOCAL++ improves on a strong set of baselines.
The main shared concern was regarding the significance and validity of the theoretical analysis. After considering the rebuttal reviewers voting for both acceptance and rejection were in agreement that there are issues with the theoretical analysis/justification. While we agree with the authors that the algorithmic and experimental part of the paper is strong, we have to base our decision on the state of the whole paper. In the end we decided not to accept the paper because 1) the paper put a significant focus on a theoretical argument the reviewers found problematic and 2) the authors did not modify the paper to sufficiently address these concerns during the available window. | train | [
"HP6UDb_zOMc",
"vm5F137dJ93",
"-f24AlRv8Gy",
"yDf2ra_pYJd",
"uoFIc8vQHsJ",
"TwACfjF7n1F",
"1hAmaa79IX",
"69HjvpSj51v",
"JXnx8ghSLDN",
"8XZR-yRO-94",
"FqQgobT1VKx",
"ml9dtss-3f9",
"3VOYDVJQRNc",
"7afxWxHaUmu",
"EyNv1gZMHKE",
"fNdP7XEAvFp",
"Mneavn20PQo",
"etuGkLe-xA-",
"xXuN9QBObq... | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewers' timely feedback after seeing our rebuttal revision. **The reviewers agree that the added experiments comparing FOCAL++ vs. other COMRL baselines have strengthened the paper, and significantly widen the scope to be of interest to the ICLR community.** However, reviewers such as yZzz and oBW... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
2
] | [
"iclr_2022_89W18gW0-6o",
"uoFIc8vQHsJ",
"TwACfjF7n1F",
"8XZR-yRO-94",
"7afxWxHaUmu",
"3VOYDVJQRNc",
"69HjvpSj51v",
"iclr_2022_89W18gW0-6o",
"8XZR-yRO-94",
"FqQgobT1VKx",
"xXuN9QBObqK",
"iclr_2022_89W18gW0-6o",
"XU5QORwsurq",
"EyNv1gZMHKE",
"Mneavn20PQo",
"etuGkLe-xA-",
"iclr_2022_89W... |
iclr_2022_ugxdsne_TlO | GCF: Generalized Causal Forest for Heterogeneous Treatment Effect Estimation Using Nonparametric Methods | Heterogeneous treatment effect (HTE) estimation with continuous treatment is essential in multiple disciplines, such as the online marketplace and pharmaceutical industry. The existing machine learning (ML) methods, like forest-based modeling, either work only for discrete treatments or make partially linear or parametric assumptions that may suffer from model misspecification. To alleviate these problems, we extend causal forest (CF) with non-parametric dose-response functions (DRFs) that can be estimated locally using kernel-based Double/Debiased ML estimators. Moreover, we propose a distance-based splitting criterion in the functional space of Partial DRFs to capture the heterogeneity for continuous treatments. We call the proposed algorithm generalized causal forest (GCF) as it generalizes the use case of CF to a much broader setup. We show the effectiveness of GCF compared to SOTA on synthetic data and proprietary real-world data sets. | Reject | The paper introduces some interesting ideas on how to use causal random forests for conditional average treatment effects (CATE), with respect to some baseline treatment level ("0"), when the treatment variable is continuous. Figure 1 summarises the scope of the paper neatly. Scalability issues are also considered.
I think this *is* a paper "nearly there" in terms of an impactful contribution. The main issues are some presentation kinks and extra steps in the theory. I think the very low scores from the reviewers are not quite representative of the overall quality (I would be more generous). However, I'm afraid I'm also inclined towards a reject. The paper neglects some other developments on ML for CATE with continuous treatment e.g. Bica et al.'s "Estimating the Effects of Continuous-valued Interventions using Generative Adversarial Networks" (NeurIPS 2020) and the references within. A focus on the theory would help to differentiate it, but I'm not that confident that the results are currently mature enough to claim them.
Although I suggest a rejection, let me make clear I strongly encourage the authors to further pursue their ideas. You are doing good work, and the next iteration might nail it. As you found out in the discussion, emphasise the continuous aspect of it. I'd also emphasise the fact that you have a clear setup of the problem in terms of the contrast wrt to a baseline treatment effect instead of some generic contrast function. People in ML tend to be oblivious to such a setup, but I'm not convinced you are properly exploiting it. | test | [
"yMHD3QcA1Oj",
"eBYOB696gwm",
"8S9qIMo5roi",
"seW663KCcgd",
"qTVNEXT2a3R",
"XVVynH2lxt",
"dpnmpB9WEJ4",
"S8Q74_ZTniI",
"QD-QKkvotgK",
"TLnGbWWuc29",
"X3TdR2prjht",
"ixQx4BmA2T",
"91W47_xtt-u"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer X6oe,\n\nThanks again for your comments and suggestions for our paper. We are more than happy to answer any further concerns you may have. Please do not hesitate to comment and let us know your feedback.\n\nThank you for your time!\n\nPaper author",
" Dear Reviewer akBL,\n\nThanks again for your c... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
1,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
3
] | [
"ixQx4BmA2T",
"X3TdR2prjht",
"seW663KCcgd",
"dpnmpB9WEJ4",
"ixQx4BmA2T",
"ixQx4BmA2T",
"S8Q74_ZTniI",
"91W47_xtt-u",
"91W47_xtt-u",
"X3TdR2prjht",
"iclr_2022_ugxdsne_TlO",
"iclr_2022_ugxdsne_TlO",
"iclr_2022_ugxdsne_TlO"
] |
iclr_2022_p4H9QlbJvx | Rethinking Again the Value of Network Pruning -- A Dynamical Isometry Perspective | Several recent works questioned the value of inheriting weight in structured neural network pruning because they empirically found training from scratch can match or even outperform finetuning a pruned model. In this paper, we present evidence that this argument is actually \emph{inaccurate} because of using improperly small finetuning learning rates. With larger learning rates, our results consistently suggest pruning outperforms training from scratch on multiple networks (ResNets, VGG11) and datasets (MNIST, CIFAR10, ImageNet) over most pruning ratios. To deeply understand why finetuning learning rate holds such a critical role, we examine the theoretical reason behind it through the lens of \emph{dynamical isometry}, a nice property of networks that can make the gradient signals preserve norm during propagation. Our results suggest that weight removal in pruning breaks dynamical isometry, \emph{which fundamentally answers for the performance gap between a large finetuning LR and~a small one}. Therefore, it is necessary to recover the dynamical isometry before finetuning. In this regard, we also present a regularization-based technique to do so, which is rather simple-to-implement yet effective in dynamical isometry recovery on modern residual convolutional neural networks. | Reject | The paper's primary contributions are:
* Contrary to previous claims, the authors empirically show that inheriting the weights after pruning can be beneficial when using *larger* fine-tuning learning rates than previously done.
* As an explanation, the authors provide suggestive results showing that pruning breaks dynamical isometry, which they claim explains why larger learning rates are needed.
* They propose a regularization-based technique to recover dynamical isometry on modern residual CNNs.
Generally, reviewers were positive about the ideas in the paper; however, even after the rebuttal 3/4 reviewers did not find the arguments clear or strongly supported yet. One issue that came up several times is a request for more investigation of StrongReg+pruning. At this time, I have to recommend rejection, but I encourage the authors to follow up on the reviewers' suggestions and submit to a future venue. | train | [
"bBSOzSh_t71",
"YMX00Wvatt",
"4tkIvTULlLU",
"TpiIFVh0Iq1",
"bha0c__ww0O",
"168KnlUGP3",
"f9HC26wk2Fa",
"TmjV7MVWRiB",
"9KeQRYJjcFb",
"g6DwZ65tmf",
"JNBVU3Tlcfh",
"qLaHLkc7gAt",
"VY05N90k-9U",
"F4vSYSfb6N6",
"fihKSr7ik51",
"0fGRCPJkMPj",
"VcP-kbGnFal",
"bJqmK6d-YLs",
"c7mCiIDjRJv"... | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",... | [
" We greatly thank R4 for your kind words and for improving the score. \n\nRegarding \"*I still do not feel the paper makes rigorous arguments and experiments to back its core argument regarding the effectiveness of pruning*\", we feel we may need to make a little further remarks. Note, this should *not* be taken a... | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
3,
5
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
3
] | [
"4tkIvTULlLU",
"iclr_2022_p4H9QlbJvx",
"f9HC26wk2Fa",
"bha0c__ww0O",
"dw_b-uhNpNt",
"qLaHLkc7gAt",
"TmjV7MVWRiB",
"Ikn4F6SvA3",
"YMX00Wvatt",
"-kEDu9CDHa2",
"iclr_2022_p4H9QlbJvx",
"IXfEyI0vgDI",
"F4vSYSfb6N6",
"iclr_2022_p4H9QlbJvx",
"6sA-fgy87ZM",
"YMX00Wvatt",
"-kEDu9CDHa2",
"YM... |
iclr_2022_TNBTpPO0QX | Monotone deep Boltzmann machines | Deep Boltzmann machines refer to deep multi-layered probabilistic models, governed by a pairwise energy function that describes the likelihood of all variables in the network. Due to the difficulty of inference in such systems, they have given way largely to \emph{restricted} deep Boltzmann machines (which do not permit intra-layer or skip connections). In this paper, we propose a class of models that allows for \emph{exact, efficient} mean-field inference and learning in \emph{general} deep Boltzmann machines. To do so, we use the tools of the recently proposed monotone Deep Equilibrium (DEQ) Model, an implicit-depth deep network that always guarantees the existence and uniqueness of its fixed points. We show that, for a class of general deep Boltzmann machines, the mean-field fixed point can be considered as the equivalent fixed point of a monotone DEQ, which gives us a recipe for deriving an efficient mean-field inference procedure with global convergence guarantees. In addition, we show that our procedure outperforms existing mean-field approximation methods while avoiding any issue of local optima. We apply this approach to simple deep convolutional Boltzmann architectures and demonstrate that it allows for tasks such as the joint completion and classification of images, all within a single deep probabilistic setting. | Reject | This is an interesting contribution to the Boltzmann machine (BM) literature that makes a nice connection to DEQ models. On a positive note, reviewers found that it was well-written, clear, and interesting. Unfortunately, there were significant concerns with the manuscript that were not fully addressed in the revision: inappropriate or incomplete baselines, insufficient credit given to previous works, and the fact that this model is limited as compared to its BM relatives.
I would recommend that the authors take into account the reviewers' feedback in a revision of the work. | train | [
"QdP4zXjTMpF",
"AJj28G7pzGi",
"n7TTglcE8DR",
"ByepGOgLiRL",
"y-5w_o_UjBP",
"gq468ZnUqla",
"vR7S2DDT1a",
"YmSd663R_fo",
"qFJAbgOYwi",
"FQk4LtXSCQx",
"XxTvS-h3S9",
"eR1vCZEh9pQ",
"aqLP6Jn0JSA",
"_lSJhPraQyn",
"s8oleR0t3w",
"FQ8y6QfOWW7",
"3KsvJUU3nV"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your rebuttal and trying the Baseline 1 that I suggested. The results of that experiment are in line with my expectations: The monotonicity constraint, while theoretically appealing, does not result in a practical advantage. I think it would be useful to find a use case where this monotonicity results ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
3,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
4
] | [
"eR1vCZEh9pQ",
"XxTvS-h3S9",
"ByepGOgLiRL",
"y-5w_o_UjBP",
"qFJAbgOYwi",
"YmSd663R_fo",
"iclr_2022_TNBTpPO0QX",
"FQk4LtXSCQx",
"aqLP6Jn0JSA",
"3KsvJUU3nV",
"FQ8y6QfOWW7",
"s8oleR0t3w",
"_lSJhPraQyn",
"iclr_2022_TNBTpPO0QX",
"iclr_2022_TNBTpPO0QX",
"iclr_2022_TNBTpPO0QX",
"iclr_2022_T... |
iclr_2022_W5PbuwQFzZx | Locality-Based Mini Batching for Graph Neural Networks | Training graph neural networks on large graphs is challenging since there is no clear way to extract mini batches from connected data. To solve this, previous methods have primarily relied on sampling. While this often leads to good convergence, it introduces significant overhead and requires expensive random data accesses. In this work we propose locality-based mini batching (LBMB), which circumvents sampling by using fixed mini batches based on node locality. LBMB first partitions the training/validation nodes into batches, and then selects the most important auxiliary nodes for each batch using local clustering. Thanks to precomputed batches and consecutive memory accesses, LBMB accelerates training by up to 20x per epoch compared to previous methods, and thus provides significantly better convergence per runtime. Moreover, it accelerates inference by up to 100x, at little to no cost in accuracy. | Reject | The paper suggests a non-random strategy for selecting minibatches of nodes for training graph neural networks. The main argument is that consecutive memory accesses are faster than random accesses, and thus they claim a 20x speedup per epoch by precomputing batches at a small cost to accuracy.
There are a number of discussion points. One reviewer finds the results hard to believe because previous work has shown that runtime sampling can be fully pipelined. The authors agree but say their speedups are still better, which isn’t a fully satisfying response, and it calls into question the quality of the baseline implementation. Another concern is about the effect of deterministic minibatches. The authors argue that the empirical results speak for themselves, while the reviewer worries about robustness. There also are some concerns about methodology around hyperparameters and special-casing of preprocessing for one dataset, though those appear mostly resolved.
On the whole, this is a borderline paper that lands just on the side of rejection. I’d encourage the authors to more thoroughly address the questions about quality of the baseline implementation and the reviewer’s concern about robustness of deterministic minibatches, and then resubmit to the next conference. | train | [
"jz8Ed6Tu9Eo",
"8jDuj2c-NE",
"t9olc61AMTO",
"Lc1hjwY8fy3",
"QWxU7ZEgfrh",
"Krnv3bDj_0y",
"tfp_zFCQb3-",
"fvm7yI-y3Yi",
"nOcv7N5oi5",
"QrlzmndAGfX",
"5pFzqJ0mP0r",
"zmnk7ghiAEw",
"QJUijGht80X",
"k91lKvwMM60",
"WspReEiqH0",
"MNE-jXnNNGy",
"1A3XlLXi9a"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" We have made significant improvements to the paper based on your helpful feedback and addressed all of your concerns in our response. The discussion phase is ending very soon. We would be very thankful if you would revisit your evaluation.",
" Thank you for reconsidering your score based on the update and discu... | [
-1,
-1,
5,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
-1,
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"1A3XlLXi9a",
"QWxU7ZEgfrh",
"iclr_2022_W5PbuwQFzZx",
"iclr_2022_W5PbuwQFzZx",
"zmnk7ghiAEw",
"tfp_zFCQb3-",
"nOcv7N5oi5",
"QrlzmndAGfX",
"QrlzmndAGfX",
"QJUijGht80X",
"iclr_2022_W5PbuwQFzZx",
"Lc1hjwY8fy3",
"t9olc61AMTO",
"t9olc61AMTO",
"Lc1hjwY8fy3",
"1A3XlLXi9a",
"iclr_2022_W5Pbuw... |
iclr_2022_v3LXWP63qOZ | Learning Minimal Representations with Model Invariance | Sparsity has been identified as an important characteristic in learning neural networks that generalize well, forming the key idea in constructing minimal representations. Minimal representations are ones that only encode information required to predict well on a task and nothing more. In this paper we present a powerful approach to learning minimal representations. Our method, called ModInv or model invariance, argues for learning using multiple predictors and a single representation, creating a bottleneck architecture. Predictors' learning landscapes are diversified by training independently and with different learning rates. The common representation acts as an implicit invariance objective to avoid the different spurious correlations captured by individual predictors. This in turn leads to better generalization performance. ModInv is tested on both the Reinforcement Learning and the Self-supervised Learning settings, showcasing strong performance boosts in both. It is extremely simple to implement, does not lead to any delay in wall clock time while training, and can be applied across different problem settings. | Reject | Although all reviewers had many positive comments on the paper, and the authors engaged nicely in the discussion period, at the moment there is a consensus among the reviewers that the central claims of the paper (related to minimal representations / information bottleneck) are not adequately supported by the current experiments. In particular, there were concerns that performance gains could be due to diversity of predictors, rather than minimal representations, which would need to be addressed. It's suggested that the authors take all of these comments and discussion into account when preparing a revised version of the paper. | train | [
"Gqs_6bqqniY",
"1VLExx0tCUX",
"e9w0EcxOxxb",
"-49jTpmAQN",
"LLBJhR-agl3",
"yQpOYnV95z5",
"hYr0u9RJ7Ja",
"4hPKuhxJVK_",
"TZFFT2-bRui",
"LYQvQilyp6h",
"QGKvTvgR_Yy"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Apologies for the delay and thanks to the authors for responding to my concerns. I read the other reviews and the corresponding responses to the same. Highlighting below which concerns (mine or otherwise) I think were sufficiently addressed.\n\nThanks for providing the updated ImageNet results with the Barlow Twi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"yQpOYnV95z5",
"e9w0EcxOxxb",
"QGKvTvgR_Yy",
"LYQvQilyp6h",
"TZFFT2-bRui",
"4hPKuhxJVK_",
"iclr_2022_v3LXWP63qOZ",
"iclr_2022_v3LXWP63qOZ",
"iclr_2022_v3LXWP63qOZ",
"iclr_2022_v3LXWP63qOZ",
"iclr_2022_v3LXWP63qOZ"
] |
iclr_2022_ue4CArRAsct | Structure by Architecture: Disentangled Representations without Regularization | We study the problem of self-supervised structured representation learning using autoencoders for downstream tasks such as generative modeling. Unlike most methods which rely on matching an arbitrary, relatively unstructured, prior distribution for sampling, we propose a sampling technique that relies solely on the independence of latent variables, thereby avoiding the trade-off between reconstruction quality and generative performance inherent to VAEs. We design a novel autoencoder architecture capable of learning a structured representation without the need for aggressive regularization. Our structural decoders learn a hierarchy of latent variables, akin to structural causal models, thereby ordering the information without any additional regularization. We demonstrate how these models learn a representation that improves results in a variety of downstream tasks including generation, disentanglement, and extrapolation using several challenging and natural image datasets. | Reject | The paper proposes a method for structured representation learning using autoencoders. The method has two primary ingredients: (i) encourage independence in latent blocks by feeding different blocks of the latent representation to different depths of the decoder by injecting noise in an Ada-IN-inspired block, (ii) a so-called hybrid sampling, that samples each block from a fixed learned set of k latent vectors, similar to the codebook used in VQ-VAE (Oord et al 2017). The method is claimed to result in higher fidelity reconstruction and generation while also learning representations that are more disentangled compared to VAE and $\beta$-VAE.
Some limitations that came up in the reviews and later in the discussion among the reviewers are (i) a lack of comparison with more advanced disentangled VAEs, which would be helpful in establishing the paper's claim of better reconstruction but comparable disentanglement performance relative to regularization-based methods, and (ii) a high-level similarity to VLAE and other methods that also use hierarchical latent variables, which limits the claims of novelty. The current draft also emphasizes disentanglement, which the reviewers found lacking in justification and rigor. The paper is currently not suitable for publication at ICLR, but taking into account the reviewers' comments on the presentation aspects will help improve the paper. | train | [
"537xsb-wJy8",
"XYt-aR96Xa_",
"hY9czFFy5j",
"8ICrFwE185K",
"zNXnG5MNk6W",
"lBJ_uIYfv3S",
"xrn5Apz1qMx",
"FnL9DG_DZh6",
"LUKWewJ0SbE",
"C58Faj26nu",
"P5vbAQl7L-h",
"4PVELI0DpeX",
"C0tG8q5BR-E",
"dFAtjMsPf6G",
"oWyF8OR5PCO",
"cVoVlfk5UBF",
"UcydiH3mCGA",
"RTlVfU5Uvr",
"QkGagu5cqc6"... | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
... | [
" Thank you for your astute analysis and feedback. The progression of the story of our paper you describe, \"drawing inspiration from causality -> proposed new / alternative architecture for autoencoders -> better reconstruction and nice disentanglement than regularization-based techniques\", is particularly apt.\n... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
3,
4
] | [
"XYt-aR96Xa_",
"C0tG8q5BR-E",
"8ICrFwE185K",
"zNXnG5MNk6W",
"dFAtjMsPf6G",
"FnL9DG_DZh6",
"FnL9DG_DZh6",
"LUKWewJ0SbE",
"C58Faj26nu",
"oWyF8OR5PCO",
"cVoVlfk5UBF",
"RTlVfU5Uvr",
"UcydiH3mCGA",
"7BmiJNTwk-T",
"QkGagu5cqc6",
"iclr_2022_ue4CArRAsct",
"iclr_2022_ue4CArRAsct",
"iclr_202... |
iclr_2022_ks_uMcTPyW4 | Reinforcement Learning with Efficient Active Feature Acquisition | Solving real-life sequential decision making problems under partial observability involves an exploration-exploitation problem. To be successful, an agent needs to efficiently gather valuable information about the state of the world for making rewarding decisions. However, in real life, acquiring valuable information is often highly costly; e.g., in the medical domain, information acquisition might correspond to performing a medical test on a patient. Thus it poses a significant challenge for the agent to learn an optimal task policy while efficiently reducing the cost of information acquisition. In this paper, we introduce a model-based framework to solve such an exploration-exploitation problem during its execution. Key to its success is a sequential variational auto-encoder which can learn high-quality representations over the partially observed/missing features, where such representation learning serves as a prime factor in driving efficient policy training under the cost-sensitive setting. We demonstrate that our proposed method can significantly outperform conventional approaches in a control domain as well as using a medical simulator. | Reject | The delayed rebuttal made it very hard for reviewers to react.
The paper proposes a new sub-type of POMDPs dubbed AFA-POMDP. The proposed approach first learns a sequential VAE, then an RL approach learns control and feature acquisition policies jointly. The approach is evaluated on two tasks and shows very promising results compared to baselines. Overall the setting and the approach are very interesting.
The replies and revised paper managed to address some of the concerns of the reviewers. However, there remain a few open questions and doubts (see updated reviews), in particular as some of the arguments of the authors remain in the hypothetical, and the reviewers are still not entirely convinced by the choice of the experimental tasks. | test | [
"GIhsuyqNjGA",
"NvjtzsOvBlp",
"MRQWltD2jCK"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a reinforcement learning + representation learning approach for simultaneously learning a control policy and feature acquisition policy in environments where feature observation is costly. The authors formulate an approach for learning time series latent variable models that incorporate informa... | [
6,
5,
6
] | [
3,
3,
4
] | [
"iclr_2022_ks_uMcTPyW4",
"iclr_2022_ks_uMcTPyW4",
"iclr_2022_ks_uMcTPyW4"
] |
iclr_2022_0ze7XgWcYNV | Learning When and What to Ask: a Hierarchical Reinforcement Learning Framework | Reliable AI agents should be mindful of the limits of their knowledge and consult humans when sensing that they do not have sufficient knowledge to make sound decisions. We formulate a hierarchical reinforcement learning framework for learning to decide when to request additional information from humans and what type of information would be helpful to request. Our framework extends partially-observed Markov decision processes (POMDPs) by allowing an agent to interact with an assistant to leverage their knowledge in accomplishing tasks. Results on a simulated human-assisted navigation problem demonstrate the effectiveness of our framework: aided with an interaction policy learned by our method, a navigation policy achieves up to a 7× improvement in task success rate compared to performing tasks only by itself. The interaction policy is also efficient: on average, only a quarter of all actions taken during a task execution are requests for information. We analyze benefits and challenges of learning with a hierarchical policy structure and suggest directions for future work. | Reject | This paper is on the theme of active reinforcement learning with a human/assistant in the loop. Under partial observability, an agent acts as per an interaction policy that gathers state/goal information from the assistant, while an operational policy assumed pre-learnt in this paper executes low-level actions. The reviews acknowledge the relevance of this topic and that the paper is well structured and coherently presented overall. However, there are unanimous concerns around experimental evaluation being unconvincing, lack of strong baselines and lack of thorough coverage of related work precluding an accurate assessment of claimed contributions. 
As such, the paper is not in a form that can be accepted at ICLR -- the authors are encouraged to revise their submission as per review feedback. | train | [
"zw767eznJZS",
"ywodw790ukN",
"tS_xo6ejpW5",
"SqdSFMg84j_",
"N1KpddTQv4M",
"99nTLA4Wfg",
"sdfIg2_SORo",
"a8SyINWh8ox",
"HFwZmpSwfJ",
"35r6n1vFIJw",
"I1kRdLoAbOf"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for revisions! After following all the discussions, I will keep my score.",
"The paper presents a reinforcement learning framework where the agent is able to \"ask for help\" to receive from a human additional information to more easily solve the task. The paper is kinda Robotic Navigation-oriented by th... | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"sdfIg2_SORo",
"iclr_2022_0ze7XgWcYNV",
"iclr_2022_0ze7XgWcYNV",
"N1KpddTQv4M",
"I1kRdLoAbOf",
"35r6n1vFIJw",
"99nTLA4Wfg",
"ywodw790ukN",
"iclr_2022_0ze7XgWcYNV",
"iclr_2022_0ze7XgWcYNV",
"iclr_2022_0ze7XgWcYNV"
] |
iclr_2022_T_8wHvOkEi9 | Self-Organized Polynomial-time Coordination Graphs | The coordination graph is a promising approach to modeling agent collaboration in multi-agent reinforcement learning. It factorizes a large multi-agent system into a suite of overlapping groups that represent the underlying coordination dependencies. One critical challenge in this paradigm is the complexity of computing maximum-value actions for a graph-based value factorization. This refers to the decentralized constraint optimization problem (DCOP), which, together with its constant-ratio approximation, is NP-hard. To bypass this fundamental hardness, this paper proposes a novel method, named Self-Organized Polynomial-time Coordination Graphs (SOP-CG), which uses structured graph classes to guarantee the optimality of the induced DCOPs with sufficient function expressiveness. We extend the graph topology to be state-dependent, formulate the graph selection as an imaginary agent, and finally derive an end-to-end learning paradigm from the unified Bellman optimality equation. In experiments, we show that our approach learns interpretable graph topologies, induces effective coordination, and improves performance across a variety of cooperative multi-agent tasks. | Reject | Description of paper content:
The paper studies the problem of achieving coordination among a group of agents in a cooperative, multi-agent task. Coordination graphs reduce the computational complexity of this problem by reducing the joint value function to a sum of local value functions depending on only subsets of agents. In particular, the Q-function of the entire system is “expanded” up to second-order in agent interactions: Q = \sum_{i \in [n]} q_i + \sum_{(i,j) \in G} q_{ij}, where the q_i is function of the i-th agent’s history and current action, and q_{ij} is a function of two agents’ histories and current actions. As G does not include higher-order (third and above) terms, the algorithm does not have exponential dependence on the number of agents. If G includes only a subset of pairs of agents, then the computational complexity is reduced to less than quadratic. Since the coordination problem is cooperative, the authors propose a meta-agent (“coordinator”) that selects the graph G in a dynamic (state-by-state) fashion in order to maximize return. The optimization problems of the meta-agent and the sub-agents are performed by deep Q-learning.
Summary of paper discussion:
The critical comment made by one reviewer was: “Going back on that trend now only to pursue the polynomial-time nature of the running algorithm would in my opinion require far more diverse evaluation examples, backed by a stronger motivation highlighting real-world threats of all the other MARL algorithms taking longer than polynomial time. As is, SOP-CG does not contend amazingly against other MARL algorithms that chose the "NP-hard? Curse of dimensionality? Fine. We'll approximate, approximate, approximate." path rather than the "Polynomial time is our topmost priority; function expressiveness can wait." path. That leads me back to the question of why pursue polynomial time at the cost of losing both the function expressiveness and the peak performance….”
Comments from Area Chair:
Looking at the experiments, the number of agents in the empirical problems is not large. For example, there are 15 agents in "Sensor." Any focus on computational complexity at this scale is hard to justify, especially with algorithms that are approximate. It seems favorable at this small scale to use function approximators that can take in all the agents' histories and actions. This obvious baseline is not included in comparisons. It is hard to justify inclusion of this paper in the conference. | train | [
"u-6lv7owuju",
"YR5OkAOMfCm",
"49GqetVTbTR",
"wB-zQhvzARv",
"JAajMAlVDHf",
"jgwiw7BOM7",
"Mt5zNzeGkb4",
"HIO3zMJUQzq",
"piTzAm5lbw",
"-9j4LlIK-KK",
"snusZQyAuoZ",
"LTVTWMDEFp",
"qfZH63yEJkY",
"95O6OnyD2W6"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Area Chair,\n\nWe thank all the reviewers for their time and efforts in reviewing our work. \n\nIn response to reviewers’ comments, we have provided extensive additional experimental results, illustrations on a new didactic example, time-complexity analysis and comparison, and detailed clarification. Overall... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"iclr_2022_T_8wHvOkEi9",
"iclr_2022_T_8wHvOkEi9",
"95O6OnyD2W6",
"95O6OnyD2W6",
"qfZH63yEJkY",
"qfZH63yEJkY",
"LTVTWMDEFp",
"LTVTWMDEFp",
"LTVTWMDEFp",
"LTVTWMDEFp",
"LTVTWMDEFp",
"iclr_2022_T_8wHvOkEi9",
"iclr_2022_T_8wHvOkEi9",
"iclr_2022_T_8wHvOkEi9"
] |
iclr_2022_3GHHpYrYils | On Anytime Learning at Macroscale | Classical machine learning frameworks assume access to a possibly large dataset in order to train a predictive model. In many practical applications, however, data does not arrive all at once, but in batches over time. This creates a natural trade-off between the accuracy of a model and the time to obtain such a model. A greedy predictor could produce non-trivial predictions by immediately training on batches as soon as these become available, but it may also make sub-optimal use of future data. On the other hand, a tardy predictor could wait for a long time to aggregate several batches into a larger dataset, but ultimately deliver a much better performance. In this work, we consider such a streaming learning setting, which we dub {\em anytime learning at macroscale} (ALMA). It is an instance of anytime learning applied not at the level of a single chunk of data, but at the level of the entire sequence of large batches. We first formalize this learning setting, we then introduce metrics to assess how well learners perform on the given task for a given memory and compute budget, and finally we test about thirty baseline approaches on three standard benchmarks repurposed for anytime learning at macroscale. Our findings indicate that no model strikes the best trade-off across the board. While replay-based methods attain the lowest error rate, they also incur a 5 to 10 times increase in compute. Approaches that grow capacity over time do offer better scaling in terms of training flops, but they also underperform simpler ensembling methods in terms of error rate. Overall, ALMA offers both a good abstraction of the typical learning setting faced every day by practitioners, and a set of unsolved modeling problems for those interested in efficient learning of dynamic models.
| Reject | The work presented in this submission focuses on a new approach, called Anytime Learning at Macroscale (ALMA), for learning a model that can perform well at any point in time. The algorithm processes data through a series of training batches, each of these processing steps being followed by a model evaluation. The total loss is the average (or sum) of the losses computed at each step.
Reviewers agreed that the paper is not ready for acceptance at ICLR 2022, as the presentation of the work lacks clarity, especially with respect to the similarities with online learning and the learning of streams of data, and the fundamental difference between small or moderate batch sizes and very large batches. | train | [
"b2pLRJwQqxF",
"ILXixoFhtem",
"BtqVFRQedm",
"DZkANHCyKz",
"dCUwu1HvDnh",
"J-jD4cZIZE",
"f0Nw_CVxD2",
"Lo8Sw42VN9q",
"rkwhN17xoew",
"G-9klawUS-4",
"G2B3MoXj0Xc",
"Y8gM87zIKJa",
"XUSzwk_m8Yo"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the response, it did fill some gaps in my understanding of the paper. What is written about the differences with online learning and about using online learning methods is indeed interesting and important; in fact, it should also be discussed in the paper in depth as one of the main topics. (This goes ... | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
5
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"Lo8Sw42VN9q",
"iclr_2022_3GHHpYrYils",
"f0Nw_CVxD2",
"dCUwu1HvDnh",
"rkwhN17xoew",
"XUSzwk_m8Yo",
"ILXixoFhtem",
"G2B3MoXj0Xc",
"Y8gM87zIKJa",
"iclr_2022_3GHHpYrYils",
"iclr_2022_3GHHpYrYils",
"iclr_2022_3GHHpYrYils",
"iclr_2022_3GHHpYrYils"
] |
iclr_2022_vKMVrqvXbXu | Effects of Data Geometry in Early Deep Learning | Deep neural networks can approximate functions on different types of data, from images to graphs, with varied underlying structure.This underlying structure can be viewed as the geometry of the data manifold. By extending recent advances in the theoretical understanding of neural networks, we study how a randomly initialized neural network with piecewise linear activation splits the data manifold into regions where the neural network behaves as a linear function. We derive bounds on the number of linear regions and the distance to boundaries of these linear regions on the data manifold. This leads to insights into the expressivity of randomly initialized deep neural networks on non-Euclidean data sets. We empirically corroborate our theoretical results using a toy supervised learning problem. Our experiments demonstrate that number of linear regions varies across manifolds and how our results hold upon changing neural network architectures. We further demonstrate how the complexity of linear regions changes on the low dimensional manifold of images as training progresses, using the MetFaces dataset. | Reject | The paper studies the effect of manifold geometry on the complexity of the function implemented by a random ReLU network, as measured through its decomposition into linear / affine regions. In particular, it provides bounds on a surrogate for the number of such regions and the distance of a fixed point to the boundary of its region. These bounds follow from an extension of an argument of Hanin and Rolnick for Euclidean space. The bounds hold at random initialization, and are complemented with experiments in which they remain valid through training.
Initial reviews of the paper were mixed. All reviewers recognized the extension to structured / non-Euclidean data as an important direction, and the results as extending the argument of Hanin and Rolnick to this setting. At the same time, there were questions about the novelty, clarity, and implications of the paper. One issue concerns the implications of the results and the amount of insight they offer into the data complexity - network complexity relationship. In particular, the paper would be stronger with a more explicit accounting for the constant C_{M,\kappa} and intuitive explanations of how manifold properties such as curvature and reach affect the number of linear regions. There were also concerns regarding the statement and proof of Theorem 3, the initial version of which only held for small \epsilon. The reviews also raised other smaller issues regarding the paper's clarity and implications. After considering the authors' feedback and revisions, reviewers retained their mixed evaluation of the paper. This appears to be a promising direction, but a paper that could benefit from further refinement. | train | [
"tSr0Y2mPt1G",
"fvMgUBO83xD",
"5Ldq6jGh2u3",
"M1sdj-9FSYx",
"uYMF_GLGVdL",
"in4SKNdY1cK",
"6mWspjqu3N7",
"OZ9HBp6KFcY"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We are adding further clarifications after our most recent update to the draft.\n\n\"Moreover, it is possible to choose an optimal epsilon in Theorem 3.\" -> We have modified the statement of theorem 3 to do just that. The theorem looks similar to that by Hanin and Rolnick in format but incorporates data geomet... | [
-1,
-1,
-1,
-1,
5,
8,
6,
5
] | [
-1,
-1,
-1,
-1,
4,
4,
2,
4
] | [
"5Ldq6jGh2u3",
"6mWspjqu3N7",
"OZ9HBp6KFcY",
"uYMF_GLGVdL",
"iclr_2022_vKMVrqvXbXu",
"iclr_2022_vKMVrqvXbXu",
"iclr_2022_vKMVrqvXbXu",
"iclr_2022_vKMVrqvXbXu"
] |
iclr_2022_5ziLr3pWz77 | Neural network architectures for disentangling the multimodal structure of data ensembles | We introduce neural network architectures that model the mechanism that generates data and address the difficult problem of disentangling the multimodal structure of data ensembles. We provide (i) an autoencoder-decoder architecture that implements the $M$-mode SVD and (ii) a generalized autoencoder that employs a kernel activation and implements the doubly nonlinear Kernel-MPCA. The neural network projection architecture decomposes unlabeled data given an estimated forward model and a set of observations that constrain the solution set. | Reject | Four experts reviewed this paper and rated it below the acceptance threshold. The reviewers raised many concerns regarding the paper, mainly the lack of empirical studies and clarity. Some reviewers also suggested that the authors better position the paper in the literature by discussing more related work. The rebuttal did not address all concerns. Considering the reviewers' concerns, we regret that the paper cannot be recommended for acceptance at this time. The authors are encouraged to consider the reviewers' comments when revising the paper for submission elsewhere. | train | [
"0wu42WF85fF",
"lTbtJSawLAg",
"qe0o9yR1-Qk",
"zgIie0kIY57"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors present some analysis and describes a new methodology on how to do forward/inverse causal inferences with tensor analysis. There are no empricial results nor theoratic analysis to back the authors' claims. This paper is hard to read. Even though I read multiple times, I'm not sure if I fully understa... | [
3,
3,
5,
3
] | [
2,
3,
3,
3
] | [
"iclr_2022_5ziLr3pWz77",
"iclr_2022_5ziLr3pWz77",
"iclr_2022_5ziLr3pWz77",
"iclr_2022_5ziLr3pWz77"
] |
iclr_2022_wQ7RCayXUSl | Why so pessimistic? Estimating uncertainties for offline RL through ensembles, and why their independence matters. | In order to achieve strong performance in offline reinforcement learning (RL), it is necessary to act conservatively with respect to confident lower-bounds on anticipated values of actions. Thus, a valuable approach would be to obtain high quality uncertainty estimates on action values. In the current supervised learning literature, state-of-the-art approaches to uncertainty estimation and calibration rely on ensembling methods. In this work, we aim to transfer the success of ensembles from supervised learning to the setting of batch RL. We propose MSG, a model-free, dynamic-programming-based offline RL method that trains an ensemble of independent Q-functions, and updates a policy to act conservatively with respect to the uncertainties derived from the ensemble. Theoretically, by referring to the literature on infinite-width neural networks, we demonstrate the crucial dependence of the quality of uncertainty on the manner in which ensembling is performed, a phenomenon that arises due to the dynamic programming nature of RL and is overlooked by existing offline RL methods. Our theoretical predictions are corroborated by pedagogical examples on toy MDPs, as well as empirical comparisons in benchmark continuous control domains. In the more challenging domains of the D4RL offline RL benchmark, MSG significantly surpasses highly well-tuned state-of-the-art methods in batch RL. Motivated by the success of MSG, we investigate whether efficient approximations to ensembles can be as effective. We demonstrate that while efficient variants outperform the current state-of-the-art, they do not match MSG with deep ensembles. We hope our work engenders increased focus on deep network uncertainty estimation techniques directed for reinforcement learning.
| Reject | The paper attacks an interesting problem: accurately estimating uncertainties in action-value estimates in offline RL. It proposes a method based on ensembles of Q functions, where we alternately train an ensemble to estimate Q(s,a) for the current policy, and then adjust our policy based on the mean and uncertainty in this ensemble. By choosing mean + \beta * [standard deviation] as the basis for our policy updates, we can be either conservative (\beta < 0) or optimistic (\beta > 0). The paper analyzes the ensemble training using the Gaussian process (NTK) view of deep nets.
The largest weakness of the paper is a lack of rigor in its analysis. While its main topic is uncertainty in Q estimates, the paper does not specify a valid probabilistic model on which such uncertainty estimates could be based. The theorems analyze only a part of the algorithm (policy evaluation), and don't take into account the interplay between this evaluation and any policy updates. The theorems also do not show that the computed output distribution is relevant to the actual uncertainty of the algorithm; e.g., they do not describe a prior for which the ensemble approximates the correct posterior (nor any other similar notion). Despite these omissions, the theorems are nonetheless presented as providing a reason to trust the output of the algorithm.
On the other hand, there definitely is valuable material in the paper; the experiments are interesting (and would be even more interesting if we could compare to some notion of a correct answer for at least the small ones), and the intuition and analysis could be enlightening if presented more clearly and formally, with a better description of the connection between theory and practice. Unfortunately, the paper as written doesn't enable the reader to accurately understand and assess the contributions. | train | [
"k7FGPflmub6",
"fb3yoGAgvVc",
"P_jOYf1fQRq",
"uTuwn0WQal7",
"Qfx11GpwZxX",
"gN6hlIhZUQ0",
"lUsrYz9SMm8",
"yA9TAWH17Ka",
"jvwxl4kj7KQ",
"IJaESCEfbk6",
"LQqzi9f5fb",
"T1pzSjUgBmj",
"L5QZnfE-Qi4",
"a3wVnc6mekz",
"d8f1ZB_8y-H",
"1d6vnOUfRD",
"PRDSQuMYfZo",
"nlpOoKrTE-3",
"hShcXYqIySh... | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_r... | [
"This paper provided a model-free algorithm for offline-RL. It provided Theorem 5.1 to justify its uncertainty quantifiers. It also conducted experiments to validate its algorithm. Strengths. The algorithm in this paper is model-free, which means it can be applied to realistic settings.\nIt used \n\nWeakness. The t... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
5,
3,
5
] | [
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
5,
2,
4
] | [
"iclr_2022_wQ7RCayXUSl",
"iclr_2022_wQ7RCayXUSl",
"nlpOoKrTE-3",
"8qx2gsnF0Ao",
"gN6hlIhZUQ0",
"LQqzi9f5fb",
"d8f1ZB_8y-H",
"T1pzSjUgBmj",
"k7FGPflmub6",
"hShcXYqIySh",
"lUsrYz9SMm8",
"L5QZnfE-Qi4",
"1d6vnOUfRD",
"PRDSQuMYfZo",
"a3wVnc6mekz",
"iclr_2022_wQ7RCayXUSl",
"iclr_2022_wQ7RC... |
iclr_2022__4D8IVs7yO8 | Dense-to-Sparse Gate for Mixture-of-Experts | Mixture-of-experts (MoE) is becoming popular due to its success in improving the model quality, especially in Transformers. By routing tokens with a sparse gate to a few experts that each only contains part of the full model, MoE keeps the model size unchanged and significantly reduces per-token computation, which effectively scales neural networks. However, we found that the current approach of jointly training experts and the sparse gate introduces a negative impact on model accuracy, diminishing the efficiency of expensive large-scale model training. In this work, we propose the $\texttt{Dense-To-Sparse}$ gate (DTS-Gate) for MoE training. Specifically, instead of using a permanent sparse gate, DTS-Gate begins as a dense gate that routes tokens to all experts, then gradually and adaptively becomes sparser while routing to fewer experts. MoE with DTS-Gate naturally decouples the training of experts and the sparse gate by training all experts at first and then learning the sparse gate. Our code is available at https://anonymous.4open.science/r/MoE-3D0D/README.md/README.moe.md. | Reject | This paper proposes a simple approach to improve the robustness of training a sparsely gated mixture-of-experts model, which at a high level simply consists in training initially as a dense gated model, to better warm start a final phase of sparse training. Results are presented to highlight the potential benefits of this approach.
The authors have provided a detailed response and updated results in response to the reviews. Each reviewer has also responded at least once to the author response. Despite that engagement, all reviewers are leaning towards rejection (though there is one reviewer with a rating of 6, they nevertheless state that "I'm confident this will make a great resubmission at a future venue", indicating they actually support rejection).
The reviewers point out that the proposed method is not really novel, pointing to an existing recent paper. Even without that prior work, I would also argue that the proposed approach is conceptually straightforward and has benefits that were fairly predictable and not particularly surprising. Given the generally lukewarm reception from the reviewers, I think there is a legitimate concern to be had here about this work's potential for impact.
Though the review process has definitely improved the paper's manuscript since its submission, I unfortunately could not find a reason to dissent from the reviewers' consensus that this submission is not ready to be published. I therefore recommend it be rejected at this time. | val | [
"cRtOIrXPKt",
"7Sdeb664-c",
"z9uOpr273bF",
"QVE2rfVMZEG",
"TtFgvKuo-er",
"IexGNk6_AdW",
"pkVGI9z4CK",
"q78EZD-tyXJ",
"YsVg7iC4Kt6",
"qp01JFEhtzt",
"c_PrS2bKGF",
"zQEwwHf3RIu",
"jspmLDgFkEr",
"-trbQq7EWIx",
"GSTaeFG-3gA",
"NhlDKsMoY5",
"qyczq-vJS7I",
"CbBRAX6Jc2y"
] | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_rev... | [
" Dear Reviewer wm4x: \n\nThanks for your kind suggestions and bringing our attention to the concurrent work of DSelectK (https://arxiv.org/pdf/2106.03760.pdf). \n\nRegarding the similarity between our work DTS and DSelectK, we believe they are different essentially as they are trying to address **two totally di... | [
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
5,
6,
5
] | [
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"z9uOpr273bF",
"q78EZD-tyXJ",
"QVE2rfVMZEG",
"qyczq-vJS7I",
"iclr_2022__4D8IVs7yO8",
"jspmLDgFkEr",
"zQEwwHf3RIu",
"-trbQq7EWIx",
"qp01JFEhtzt",
"iclr_2022__4D8IVs7yO8",
"YsVg7iC4Kt6",
"CbBRAX6Jc2y",
"TtFgvKuo-er",
"NhlDKsMoY5",
"qp01JFEhtzt",
"iclr_2022__4D8IVs7yO8",
"iclr_2022__4D8... |
iclr_2022_S6eHczgYpnu | Fast and Sample-Efficient Domain Adaptation for Autoencoder-Based End-to-End Communication | The problem of domain adaptation conventionally considers the setting where a source domain has plenty of labeled data, and a target domain (with a different data distribution) has plenty of unlabeled data but none or very limited labeled data. In this paper, we address the setting where the target domain has only limited labeled data from a distribution that is expected to change frequently. We first propose a fast and light-weight method for adapting a Gaussian mixture density network (MDN) using only a small set of target domain samples. This method is well-suited for the setting where the distribution of target data changes rapidly (e.g., a wireless channel), making it challenging to collect a large number of samples and retrain. We then apply the proposed MDN adaptation method to the problem of end-to-end learning of a communication autoencoder, which jointly learns the encoder, decoder, and channel networks to minimize the decoding error rate. However, the error rate of an autoencoder trained on a particular (source) channel distribution can degrade as the channel distribution changes frequently, not allowing enough time for data collection and retraining of the autoencoder to the target channel distribution. We propose a method for adapting the autoencoder without modifying the encoder and decoder neural networks, adapting only the MDN model of the channel. The method utilizes feature transformations at the decoder to compensate for changes in the channel distribution, effectively presenting to the decoder samples close to the source distribution. Experimental evaluation on simulated datasets and real mmWave wireless channels demonstrates that the proposed methods can adapt the MDN model using a very limited number of samples, and improve or maintain the error rate of the autoencoder under changing channel conditions. 
| Reject | The paper proposes a domain adaptation method that is specific to autoencoders and the wireless domain. The proposed method shows solid gains over the baseline and is simple. There were multiple complaints on the reviewers' side regarding the clarity of the abstract, the application of the work, and the related work, which the authors responded to during the rebuttal period. However, the modifications were not enough to address all concerns, mainly the assumptions required for the method to work, such as the source and target having the same number of components and diagonal covariance. It would help the paper to discuss the cases where the model fails. In addition, the paper would benefit from a stronger baseline.
"GSBdSahFHj",
"8zjnCLn5rr3",
"pW0kfuVXjJA",
"QasLeUNRSqt",
"JoEGUqrrxxp",
"WgockiNzLA3",
"v07wnX5R408",
"65e0ppow9bL",
"g6z_VWQoynl",
"9fCxFZTqNpI",
"eM1iqkrWbct",
"34hLlhpW9_t",
"D89ZYR4X-Ur",
"LxJWdo6ME7Q",
"ixACQoF8Hm9",
"ThRVxo8VOf",
"WDyK2BYVYBn"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The author's response has addressed most of my previous concerns with this paper. I already gave a positive recommendation, so there is no need to change.",
" We thank all the reviewers for their time and valuable feedback on our paper. We have revised the paper based on the feedback and attempted to incorporat... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
6,
3,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
3,
4
] | [
"v07wnX5R408",
"iclr_2022_S6eHczgYpnu",
"QasLeUNRSqt",
"JoEGUqrrxxp",
"WDyK2BYVYBn",
"D89ZYR4X-Ur",
"65e0ppow9bL",
"ixACQoF8Hm9",
"9fCxFZTqNpI",
"ThRVxo8VOf",
"34hLlhpW9_t",
"LxJWdo6ME7Q",
"iclr_2022_S6eHczgYpnu",
"iclr_2022_S6eHczgYpnu",
"iclr_2022_S6eHczgYpnu",
"iclr_2022_S6eHczgYpnu... |
iclr_2022_Nfl-iXa-y7R | Pixelated Butterfly: Simple and Efficient Sparse training for Neural Network Models | Overparameterized neural networks generalize well but are expensive to train. Ideally one would like to reduce their computational cost while retaining their generalization benefits. Sparse model training is a simple and promising approach to achieve this, but there remain challenges as existing methods struggle with accuracy loss, slow training runtime, or difficulty in sparsifying all model components. The core problem is that searching for a sparsity mask over a discrete set of sparse matrices is difficult and expensive. To address this, our main insight is to optimize over a continuous superset of sparse matrices with a fixed structure known as products of butterfly matrices. As butterfly matrices are not hardware efficient, we propose simple variants of butterfly (block and flat) to take advantage of modern hardware. Our method (Pixelated Butterfly) uses a simple fixed sparsity pattern based on flat block butterfly and low-rank matrices to sparsify most network layers (e.g., attention, MLP). We empirically validate that Pixelated Butterfly is $3\times$ faster than Butterfly and speeds up training to achieve favorable accuracy--efficiency tradeoffs. On the ImageNet classification and WikiText-103 language modeling tasks, our sparse models train up to 2.3$\times$ faster than the dense MLP-Mixer, Vision Transformer, and GPT-2 small with no drop in accuracy. | Accept (Spotlight) | This is an intriguing work that introduces a novel sparse training technique. The core insight is a novel reparametrization or sparsity pattern based on the so-called butterfly matrices that enables fast training and good generalization. The theory is solid and useful. Most importantly, the method is novel and is likely to become impactful. Understanding better what contributes to the excellent performance is an interesting question for future work. 
In agreement with all the reviewers, it is my pleasure to accept the work. | test | [
"Bvk5HVTzC7F",
"Ub-3Vry6a3p",
"2lEpuT8uOo6",
"JeQ2rEe8xlA",
"Z3tE30QyCSB",
"wzZQf4TxFxg",
"AauPz-S1gWG",
"7D1N_fEBk0T",
"_QuDfVplef3",
"Cdn1ke3ZvWk",
"wSvsKqSpSYA"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your insightful feedback! We have carefully thought through all your great questions and added corresponding experiments and detailed discussions to answer them in the updated paper. We provide details below:\n\n**Q: Both for image classification and language modeling experiments, it would be great ... | [
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
6,
8,
8
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"_QuDfVplef3",
"JeQ2rEe8xlA",
"iclr_2022_Nfl-iXa-y7R",
"Z3tE30QyCSB",
"2lEpuT8uOo6",
"iclr_2022_Nfl-iXa-y7R",
"wSvsKqSpSYA",
"Cdn1ke3ZvWk",
"iclr_2022_Nfl-iXa-y7R",
"iclr_2022_Nfl-iXa-y7R",
"iclr_2022_Nfl-iXa-y7R"
] |
iclr_2022_JHXjK94yH-y | Explore and Control with Adversarial Surprise | Unsupervised reinforcement learning (RL) studies how to leverage environment statistics to learn useful behaviors without the cost of reward engineering. However, a central challenge in unsupervised RL is to extract behaviors that meaningfully affect the world and cover the range of possible outcomes, without getting distracted by inherently unpredictable, uncontrollable, and stochastic elements in the environment. To this end, we propose an unsupervised RL method designed for high-dimensional, stochastic environments based on an adversarial game between two policies (which we call Explore and Control) controlling a single body and competing over the amount of observation entropy the agent experiences. The Explore agent seeks out states that maximally surprise the Control agent, which in turn aims to minimize surprise, and thereby manipulate the environment to return to familiar and predictable states. The competition between these two policies drives them to seek out increasingly surprising parts of the environment while learning to gain mastery over them. We show formally that the resulting algorithm maximizes coverage of the underlying state in block MDPs with stochastic observations, providing theoretical backing to our hypothesis that this procedure avoids uncontrollable and stochastic distractions. Our experiments further demonstrate that Adversarial Surprise leads to the emergence of complex and meaningful skills, and outperforms state-of-the-art unsupervised reinforcement learning methods in terms of both exploration and zero-shot transfer to downstream tasks. | Reject | The idea of having two policies with opposing strategies, one aiming to maximize a notion of surprise whereas the other tries to minimize it, is an interesting one. However, even after the author rebuttal, all reviewers have lingering concerns about the evaluation protocol. 
In addition, there are remaining questions about the bonuses used; there are concerns that these only work for very specific domains. For these reasons, I'm recommending rejection. I encourage the authors to carefully read the concerns of the reviewers about evaluation and consider using a different evaluation protocol for a future version of this work. | val | [
"gkWHcn3wIA",
"l2q0c3_V9vs",
"I6QV4zqocoP",
"9WjBPBBrJJr",
"jvDzYw3yVTT",
"UDOFpMy5Qxu",
"L5cZF1GmpNH",
"fFkOwG7VUCK",
"M0TUHUA99YK",
"MEvnLD5KvOF",
"Jakg-xu81kU",
"u49Ua9bS1Hr",
"UrPJfl7cCO"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper introduces a method, called Adversarial Surprise (AS), for unsupervised reinforcement learning. AS employs a two-player, adversarial, sequential procedure in which an Explore player tries to maximize the approximate entropy of the observations, whereas a Control player tries to minimize this same entrop... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2022_JHXjK94yH-y",
"fFkOwG7VUCK",
"jvDzYw3yVTT",
"M0TUHUA99YK",
"UDOFpMy5Qxu",
"MEvnLD5KvOF",
"fFkOwG7VUCK",
"Jakg-xu81kU",
"gkWHcn3wIA",
"UrPJfl7cCO",
"u49Ua9bS1Hr",
"iclr_2022_JHXjK94yH-y",
"iclr_2022_JHXjK94yH-y"
] |
iclr_2022_TQ75Md-FqQp | Efficient and Modular Implicit Differentiation | Automatic differentiation (autodiff) has revolutionized machine learning. It allows expressing complex computations by composing elementary ones in creative ways and removes the tedious burden of computing their derivatives by hand. More recently, differentiation of optimization problem solutions has attracted a great deal of research, with applications as a layer in a neural network, and in bi-level optimization, including hyper-parameter optimization. However, the formulae for these derivatives often involve a tedious manual derivation and implementation. In this paper, we propose a unified, efficient and modular approach for implicit differentiation of optimization problems. In our approach, the user defines directly in Python a function $F$ capturing the optimality conditions of the problem to be differentiated. Once this is done, we leverage autodiff of $F$ to automatically differentiate the optimization problem. This way, our approach combines the benefits of implicit differentiation and autodiff. We show that seemingly simple principles allow us to recover all recently proposed implicit differentiation methods and create new ones easily. We describe in detail a JAX implementation of our framework and demonstrate the ease of differentiating through optimization problems thanks to it on four diverse tasks: hyperparameter optimization of multiclass SVMs, dataset distillation, task-driven dictionary learning and sensitivity analysis of molecular dynamics. | Reject | This paper was extensively discussed among the reviewers, the AC, and the SAC. A last-minute reviewer was also called in to clarify some of the issues raised, as one of the reviews never made it into the system.
The paper was overall perceived as well written and well presented, and the software contribution of implicit differentiation techniques was seen as a nice asset for the community, especially for its modularity.
The stability guarantee constitutes a nice (though straightforward) result providing theoretical grounding for the proposed approach.
Yet, the paper is often loose on mathematical justifications, in particular on minimal validity assumptions. Details on when the proposed framework could fail would be of interest, on both the theoretical and practical sides. A discussion of the minimal assumptions required for validity of the approach should be highlighted more in the text.
Furthermore, the paper lacks discussions and comparisons with concurrent works, for instance, how the framework would compare with existing estimates for implicit differentiation or for unrolling. This could be improved along with providing more analysis of the implementation efficiency.
On the practical side, a high-level description of the software details would also be very beneficial.
A core part of the discussion focused on what should be expected of this type of paper (i.e., "implementation issues, parallelization, software platforms, hardware" papers, as suggested by Q3Lr).
A point of concern in the discussion phase was the novelty of the proposed framework: even if the contribution is the introduced framework, it is not new per se (the literature on implicit differentiation now contains a considerable amount of results and implementation examples).
Several reviewers found it difficult to assess the relevance of the work, on both theoretical and computational aspects, beyond the development of a computational library.
Overall, the reviewers judged the novelty and the paper's contribution to be more on the software side. Hence, a core discussion could focus on aspects expected of code-oriented papers (i.e., implementation issues, parallelization, hardware, etc.).
Following the long discussion phase (more than 30 posts on OpenReview) and the aforementioned comments, the paper was rejected.
We encourage the authors to submit a revised version in a future conference or possibly to a software oriented journal, such as JMLR MLOSS or JOSS for instance. | train | [
"RhJZa60UULK",
"5rbw5hN3cs",
"FmsJwZUxv_v",
"WorRlDln2RG",
"IHhSlKYOMb",
"wQTyRy-gPH",
"OeZyEsY2Q-y",
"dJa3ERqd2Pt",
"hBPP6x5P5Hz",
"10EAREM_HUq",
"B0OGQp6SLd",
"IwKMNAl6nic",
"TWJeqkh0P3W"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the comments and for agreeing that our work significantly lowers the barrier of entry to implicit differentiation-based optimization.\n\nSince most of the reviewer’s objection against the paper seems to revolve around the use of the adjective “unified” in the abstract, we decided to remo... | [
-1,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
10,
3
] | [
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"WorRlDln2RG",
"IHhSlKYOMb",
"hBPP6x5P5Hz",
"OeZyEsY2Q-y",
"10EAREM_HUq",
"iclr_2022_TQ75Md-FqQp",
"dJa3ERqd2Pt",
"B0OGQp6SLd",
"IwKMNAl6nic",
"wQTyRy-gPH",
"TWJeqkh0P3W",
"iclr_2022_TQ75Md-FqQp",
"iclr_2022_TQ75Md-FqQp"
] |
iclr_2022_1bEaEzGwfhP | Learning to Model Editing Processes | Most existing sequence generation models produce outputs in one pass, usually left-to-right. However, this is in contrast with a more natural approach that humans use in generating content: iterative refinement and editing. Recent work has introduced edit-based models for various tasks (such as neural machine translation and text style transfer), but these generally model a single edit. In this work, we propose to model editing processes, modeling the whole process of iteratively generating sequences. We form a conceptual framework to describe the likelihood of multi-step edits, and describe neural models that can learn a generative model of sequences based on these multi-step edits. We introduce baseline results and metrics on this task, finding that modeling editing processes improves performance on a variety of axes on both our proposed task and related downstream tasks compared to previous single-step models of edits. | Reject | This paper introduces a novel task (i.e., modelling the iterative process of editing sequences) and proposes a Transformer-based architecture to address it tractably. The paper also selects a number of metrics that are argued to shed enough light onto the merits of the proposed architecture.
In our view, the current version is not ready for acceptance. Here are some of the reasons we'd highlight:
* It is not entirely clear to us that the task in consideration has enough substance to grant acceptance nor that it speaks to a large enough audience. Perhaps the challenges identified here are more general and the developments for this task can be extended to related generation problems? If so, this is something one could consider for a revised version of the paper.
* The motivation does not seem to align well with the datasets used to demonstrate the task. Perhaps the difficulty of finding a dataset that matches the motivation is an indication that the task and its challenges are a tad too specific.
* It is the impression of more or less everyone involved that the paper lacks comparisons and that the evaluation is not thorough enough; the rebuttal did not ease our concerns sufficiently.
"gzl0gWtrawB",
"gmZXIdAjE2R",
"ptFJXIjTe2",
"B1xQo7EffRp",
"-p3njA4e90j",
"vloTq8MYLHA",
"bbwimoj7UY0",
"EF-pXRnr5Lk",
"xjw9FFpgYpq"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Your comments do clarify the paper. I have a better understanding of the paper. There is still remaining question about the quality evaluation. It would be much supportive if you could provide human evaluation comparing the quality of these methods.",
"The paper presents a new problem of modelling the series ed... | [
-1,
5,
-1,
-1,
-1,
-1,
3,
5,
5
] | [
-1,
5,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"ptFJXIjTe2",
"iclr_2022_1bEaEzGwfhP",
"gmZXIdAjE2R",
"bbwimoj7UY0",
"xjw9FFpgYpq",
"EF-pXRnr5Lk",
"iclr_2022_1bEaEzGwfhP",
"iclr_2022_1bEaEzGwfhP",
"iclr_2022_1bEaEzGwfhP"
] |
iclr_2022_L1L2G43k14n | WHY FLATNESS DOES AND DOES NOT CORRELATE WITH GENERALIZATION FOR DEEP NEURAL NETWORKS | The intuition that local flatness of the loss landscape is correlated with better generalization for deep neural networks (DNNs) has been explored for decades, spawning many different flatness measures. Recently, this link with generalization has been called into question by a demonstration that many measures of flatness are vulnerable to parameter re-scaling which arbitrarily changes their value without changing neural network outputs. Here we show that, in addition, some popular variants of SGD, such as Adam and Entropy-SGD, can also break the flatness-generalization correlation. As an alternative to flatness measures, we use a function based picture and propose using the log of the Bayesian prior upon initialization, $\log P(f)$, as a predictor of the generalization when a DNN converges on function $f$ after training to zero error. The prior is directly proportional to the Bayesian posterior for functions that give zero error on a test set. For the case of image classification, we show that $\log P(f)$ is a significantly more robust predictor of generalization than flatness measures are. Whilst local flatness measures fail under parameter re-scaling, the prior/posterior, which is a global quantity, remains invariant under re-scaling. Moreover, the correlation with generalization as a function of data complexity remains good for different variants of SGD. | Reject | This paper proposes the use of the Bayesian prior upon initialization for predicting the generalization performance of a neural network, and empirically shows that it can outperform flatness-based measures. Understanding the underlying reasons that control generalization performance of neural networks is of great theoretical and practical importance, and reviewers find efforts in this direction valuable. However, they believe the submission in its current state is not ready for publication.
Specifically, ZCFx believes the setup considered in the paper does not resemble a realistic situation, which makes claims about the Bayesian prior being a more robust predictor than flatness unsubstantiated. ZCFx appreciates the authors' response and clarifications, but finds the concerns unresolved. ohft believes the paper is weak in certain aspects, such as comparing across different architectures (including number of parameters), and comparing with the SAM optimizer, whose goal is to find flat minima and which has been shown to greatly improve generalization performance. ohft acknowledged reading the authors' response, but the response did not change the score of the paper. r1hF has some reservations about the novelty of the work and the limited experiments, which remained unresolved. r1hF suggests that the authors revise the paper to emphasize the authors' contribution in light of the previous work.
Based on the reviewers' feedback, I suggest the authors resubmit after revising the draft to address the issues raised above.
"Ttbds-GFbHW",
"5DIv5GG0rs0",
"JU7G2uDUHYY",
"Q4kUXZMiCpi",
"joHMT_cOgJF"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies the correlation between flatness and generalization in deep neural networks and show that, consists with some previous studies, the correlation could sometimes be broken. As an alternative, it propose a new measure based on the Bayesian prior upon initialization, and empirically demonstrate this... | [
5,
6,
5,
3,
3
] | [
4,
3,
3,
4,
4
] | [
"iclr_2022_L1L2G43k14n",
"iclr_2022_L1L2G43k14n",
"iclr_2022_L1L2G43k14n",
"iclr_2022_L1L2G43k14n",
"iclr_2022_L1L2G43k14n"
] |
iclr_2022_eGd34W56KIT | SPARK: co-exploring model SPArsity and low-RanKness for compact neural networks | Sparsification and low-rank decomposition are two important techniques for deep neural network (DNN) compression. To date, these two popular yet distinct approaches are typically used in a separate way, while their efficient integration for better compression performance is little explored. In this paper we perform systematic co-exploration on the model sparsity and low-rankness towards compact neural networks. We first investigate and analyze several important design factors for the joint pruning and low-rank factorization, including operational sequence, low-rank format, and optimization objective. Based on the observations and outcomes from our analysis, we then propose SPARK, a unified DNN compression framework that can simultaneously capture model SPArsity and low-RanKness in an efficient way. Empirical experiments demonstrate very promising performance of our proposed solution. Notably, on the CIFAR-10 dataset, our approach can bring 1.25%, 1.02% and 0.16% accuracy increase over the baseline ResNet-20, ResNet-56 and DenseNet-40 models, respectively, and meanwhile the storage and computational costs are reduced by 70.4% and 71.1% (for ResNet-20), 37.5% and 39.3% (for ResNet-56) and 52.4% and 61.3% (for DenseNet-40), respectively. On the ImageNet dataset, our approach can enable 0.52% accuracy increase over the baseline model with 48.7% fewer parameters. | Reject | The paper proposes a neural network compression technique based on sparse and low-rank approximations. The paper received mixed reviews, with one accept, one reject, and two borderline accepts. Most reviewers have appreciated the effort conducted for the evaluation. Three reviewers are nevertheless worried about the limited novelty and two of them found the positioning in the literature unclear with many missing references. In particular, one reviewer makes a strong case against the acceptance of the paper.
The authors have made a significant effort to address the issues raised by the reviewers with a very long rebuttal. The area chair has read in detail the responses, the points raised by the reviewers, and the paper itself. He/she tends to agree with the issues raised by the reviewers about the positioning of the paper in the literature and the missing baselines. The rebuttal was very helpful and addresses some of the concerns. There are still some remaining issues:
- the discussion about related work is relegated to an appendix. Yet, it is critical for positioning the paper and a discussion within the main paper would be more appropriate.
- there is no assessment of the statistical significance of the results. Hyper-parameters are fixed to some ad-hoc values and it is unclear what the effect of different hyper-parameter choices is upon the method and other baselines.
- for reproducibility purposes, providing code with the submission would be very helpful, especially given the empirical nature of the contribution.
Overall, this is a borderline case, which, unfortunately, would require additional work before being ready for acceptance. | train | [
"E24wZOZyXAH",
"b9tlqusH3Jg",
"UgjObwCOPHO",
"WMas7RWAb95",
"RvTv25pJ9ci",
"zF5w281cN5v",
"qmCN2RRMCO",
"bxq2DgO3fpc",
"74neyN3slyK",
"mbi7h5csQXy",
"rmt2mDwOD7h",
"MvY3gc4ybC8",
"lQFRV5IwZsn",
"dH8BZNKO20a",
"lqzf9ZpFAee",
"uyXgS18j7Xl",
"cI_8yU8-Ssp",
"mahkXYAN8DU",
"qJNAZ18zyT... | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author"... | [
"This paper presents a survey of methods for enforcing low-rankness and sparsity in neural network weights and proposes SPARK, an alternating algorithm for creating low-rank and sparse weight tensors from a pre-trained network. Baseline method accuracies are retained or slightly improved while parameter count is re... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
8,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
3
] | [
"iclr_2022_eGd34W56KIT",
"qmCN2RRMCO",
"bxq2DgO3fpc",
"kxa16tR9DV",
"f2KU39e3BPE",
"aa6feUOAyh",
"cI_8yU8-Ssp",
"MvY3gc4ybC8",
"mbi7h5csQXy",
"631hDAQUuol",
"f2KU39e3BPE",
"2s73fonxsmC",
"f2KU39e3BPE",
"lQFRV5IwZsn",
"f2KU39e3BPE",
"f2KU39e3BPE",
"2s73fonxsmC",
"2s73fonxsmC",
"B3... |
iclr_2022_vnF5gDNvcKX | Variance Reduced Domain Randomization for Policy Gradient | By introducing randomness on environment parameters that fundamentally affect the dynamics, domain randomization (DR) imposes diversity to the policy trained by deep reinforcement learning, and thus improves its capability of generalization. The randomization of environments, however, introduces another source of variability for the estimate of policy gradients, in addition to the already high variance due to trajectory sampling. Therefore, with standard state-dependent baselines, the policy gradient methods may still suffer high variance, causing low sample efficiency during the training of DR. In this paper, we theoretically derive a bias-free and state/environment-dependent optimal baseline for DR, and analytically show its ability to achieve further variance reduction over the standard constant and state-dependent baselines for DR. We further propose a variance reduced domain randomization (VRDR) approach for policy gradient methods, to strike a tradeoff between the variance reduction and computational complexity in practice. By dividing the entire space of environments into some subspaces and estimating the state/subspace-dependent baseline, VRDR enjoys a theoretical guarantee of faster convergence than the state-dependent baseline. We conduct empirical evaluations on six robot control tasks with randomized dynamics. The results demonstrate that VRDR can consistently accelerate the convergence of policy training in all tasks, and achieve even higher rewards in some specific tasks. | Reject | While the reviewers appreciated the clarity of the work, there is a concern about the meaning of the proposed result and method. It is known that adding knowledge about an additional variable, in this case the environment, leads to a lower variance estimate. 
What is not known is the practical impact of using this new baseline, or what additional intuition its use provides (for instance, about the origin of the variance). However, the results shown are not that compelling, a point which was raised by the reviewers, making the work fall below the bar for publication.
"lC7b940AQXb",
"VJE-Ma66I3w",
"Zvl-GU6sXtm",
"mJtls49Futj",
"bZCRLHrbF0G",
"NFU34nIlFnt",
"nIjgz52AN5q",
"R7Srt2m9E2Z",
"D22FsPn4DV6",
"6ueMGHu-Pyf",
"dYgZLPAUsV",
"x6Wi8vSGMvo",
"ThinlS1gxP4",
"cwo1j6PU895",
"BgApRXbYkFe",
"tTlzipxlQdf",
"LBEfbPZEkT4",
"U-4s2aWiqid",
"4j78SJS5sJ... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
... | [
" **Comment 10**: \"And in fig. 10, in later stages, VRDR has similar if not higher variance. Could you elaborate what's your opinions on that?\"\n\n**Response**: Although VRDR did not show the best performance in Pendulum2D, we validated that the curve of variance is consistent with the curve of its average score ... | [
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
5
] | [
"Zvl-GU6sXtm",
"Zvl-GU6sXtm",
"6ueMGHu-Pyf",
"iclr_2022_vnF5gDNvcKX",
"R7Srt2m9E2Z",
"iclr_2022_vnF5gDNvcKX",
"mJtls49Futj",
"aCFETA6HTk",
"TownUggomtG",
"tlNfoFH0mfg",
"mJtls49Futj",
"cwo1j6PU895",
"cwo1j6PU895",
"U-4s2aWiqid",
"tTlzipxlQdf",
"LBEfbPZEkT4",
"4j78SJS5sJ8",
"dYgZLPA... |
iclr_2022_Ivku4TZgEly | Exploring unfairness in Integrated Gradients based attribution methods | Numerous methods have attempted to explain and interpret predictions made by machine learning models in terms of their inputs. Known as “attribution methods”, they notably include the Integrated Gradients method and its variants. These are based upon the theory of Shapley Values, a rigorous method of fair allocation according to mathematical axioms. Integrated Gradients has axioms derived from this heritage with the implication of a similar rigorous, intuitive notion of fairness. We explore the difference between Integrated Gradients and more direct expressions of Shapley Values in deep learning and find Integrated Gradients’ guarantees of fairness weaker; in certain conditions it can give wholly unrepresentative results. Integrated Gradients requires a choice of “baseline”, a hyperparameter that represents the ‘zero attribution’ case. Research has shown that baseline choice critically affects attribution quality, and increasingly effective baselines have been developed. Using purpose-designed scenarios we identify sources of inaccuracy both from specific baselines and inherent to the method itself, sensitive to input distribution and loss landscape. Failure modes are identified for baselines including Zero, Mean, Additive Gaussian Noise, and the state of the art Expected Gradients. We develop a new method, Integrated Certainty Gradients, that we show avoids the failures in these challenging scenarios. By augmenting the input space with “certainty” information, and training with random degradation of input features, the model learns to predict with varying amounts of incomplete information, supporting a zero-information case which becomes a natural baseline. We identify the axiomatic origin of unfairness in Integrated Gradients, which has been overlooked in past research. | Reject | This paper analyzes the fairness of Integrated Gradients-based attribution methods.
The authors exploit SHAP and BShap, two approaches based on the theory of Shapley Values, as the reference "fair" methods. Specifically, they present an "attribution transfer" phenomenon in which the Integrated Gradients are affected by sharply fluctuating areas along the integration path, thereby deviating from the "fair" attribution methods. To avoid the attribution transfer issue, they further propose the Integrated Certainty Gradients (ICG) method, where the integration path does not pass through the original fluctuating input space. Experiments are performed to demonstrate the advantages of ICG in avoiding attribution transfer. While the basic premise of the work is interesting, many conceptual details remain unclear and the experimental evaluation can also be improved (please see detailed reviewer comments below). Given this, we are unable to recommend acceptance at this time. We hope the authors find the reviews helpful.
"LiEY4ZZ1B_a",
"2dRE6T79R2p",
"e5M0G2bTKg6",
"Pwt0b6IrtZ_",
"B3C5eW8g-HO"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewers,\n\nWe have submitted a further revision.\n\nIn addition to the previous changes:\n\n- We trained a ResNet50 architecture with certainty on the Imagenette dataset to demonstrate ICG in a realistic setting. Reasonable attribution results are generated. Compared to Expected Gradients we find the resu... | [
-1,
-1,
5,
5,
5
] | [
-1,
-1,
3,
4,
2
] | [
"iclr_2022_Ivku4TZgEly",
"iclr_2022_Ivku4TZgEly",
"iclr_2022_Ivku4TZgEly",
"iclr_2022_Ivku4TZgEly",
"iclr_2022_Ivku4TZgEly"
] |
iclr_2022_Muwg-ncP_ec | Exact Stochastic Newton Method for Deep Learning: the feedforward networks case. | The inclusion of second-order information into Deep Learning optimization has drawn consistent interest as a way forward to improve upon gradient descent methods. Estimating the second-order update is often convoluted and computationally expensive, which drastically limits its usage scope and forces the use of various truncations and approximations.
This work demonstrates that it is possible to solve the Newton direction in the stochastic case exactly. We consider feedforward networks as a base model, build a second-order Lagrangian which we call Sifrian, and provide a closed-form formula for the exact stochastic Newton direction under some monotonicity and regularization conditions. We propose a convexity correction to escape saddle points, and we reconsider the intrinsic stochasticity of the online learning process to improve upon the formulas. We finally compare the performance of the developed solution with well-established training methods and show its viability as a training method for Deep Learning. | Reject | This paper presents a second-order optimization algorithm for neural nets which extends LeCun's classic Lagrangian framework. The paper derives a method for computing the exact Newton step for a single training example for a multilayer perceptron. It then describes approximations that can be used to extend the method to more examples.
The authors claim to have spotted factual errors in the reviews. However, I've looked into the issues, and I find myself agreeing with the reviewers on each of those points (or, if there are misunderstandings, they result from a lack of clarity in the paper rather than insufficient scientific computing background on the part of the reviewers).
The authors claim to have solved a longstanding problem by giving an efficient method for calculating the stochastic Newton step (for a single training example). However, it's not clear this is very useful; as a reviewer points out, estimating the curvature with a single example can't give a very accurate estimate. Once the method is extended to batches, more approximations are required. I also agree with the reviewers that the later parts of the methods section appear a bit rushed.
As the reviewers point out, in the experimental comparisons, the proposed method seems to underperform SGD with momentum even in terms of epochs, which is the setting where second-order methods usually shine. Other second-order optimizers (e.g. K-FAC) have been shown to outperform first-order methods in terms of both wall clock time and epochs, so epochwise improvement seems like the minimum bar for a second-order optimization paper. | train | [
"J_XyMTOqZCc",
"wIwK9lwbAl",
"ubbf-An-8hM",
"WCsKuqm6h_",
"tdsD4zZRmLb",
"mlI3bDJztfA",
"GUheCP5X44m",
"dCbIKT9wNQA"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This manuscript proposes to apply a Newton-type algorithm for training feed-forward neural network models.\nWhen the regularizer added to the optimization problem satisfies certain additional assumptions, the algorithm can be implemented efficiently with additional assumptions on the activation function, including... | [
3,
-1,
-1,
-1,
-1,
6,
3,
3
] | [
4,
-1,
-1,
-1,
-1,
2,
4,
3
] | [
"iclr_2022_Muwg-ncP_ec",
"J_XyMTOqZCc",
"dCbIKT9wNQA",
"GUheCP5X44m",
"mlI3bDJztfA",
"iclr_2022_Muwg-ncP_ec",
"iclr_2022_Muwg-ncP_ec",
"iclr_2022_Muwg-ncP_ec"
] |
iclr_2022_R2aCiGQ9Qc | Two Sides of the Same Coin: Heterophily and Oversmoothing in Graph Convolutional Neural Networks | In node classification tasks, heterophily and oversmoothing are two problems that can hurt the performance of graph convolutional neural networks (GCNs). The heterophily problem refers to the model's inability to handle heterophilous graphs where neighboring nodes belong to different classes; the oversmoothing problem refers to the model's degenerated performance with increasing number of layers. These two seemingly unrelated problems have been studied mostly independently, but there is recent empirical evidence that solving one problem may benefit the other.
In this work, beyond empirical observations, we aim to: (1) analyze the heterophily and oversmoothing problems from a unified theoretical perspective, (2) identify the common causes of the two problems based on our theories, and (3) propose simple yet effective strategies to address the common causes. In our theoretical analysis, we show that the common causes of the heterophily and oversmoothing problems---namely, the relative degree of a node (compared to its neighbors) and its heterophily level---trigger the node representations in consecutive layers to "move" closer to the original decision boundary, which increases the misclassification rate of node labels under certain constraints. We theoretically show that: (1) Nodes with high heterophily have a higher misclassification rate. (2) Even with low heterophily, degree disparity in a node's neighborhood can influence the movements of node representations and result in a "pseudo-heterophily" situation, which helps to explain oversmoothing. (3) Allowing not only positive, but also negative messages during message passing can help counteract the common causes of the two problems. Based on our theoretical insights, we propose simple modifications to the GCN architecture (i.e., learned degree corrections and signed messages), and we show that they alleviate the heterophily and oversmoothing problems with extensive experiments on nine real networks. Compared to other approaches, which tend to work well in either heterophily or oversmoothing, our modified GCN model performs well in both problems. | Reject | This paper focuses on investigating the relations between the heterophily and over-smoothness problems. However, the relationship is not clear.
The over-smoothness problem considers the features and the adjacency matrix, while heterophily incorporates the adjacency matrix and the labels. They have different views on the graph, and may not be treated as two sides of the same coin. Besides, the stacked aggregations lead to indistinguishable node representations and poor performance in the over-smoothing problem. The same phenomenon appears in the heterophily problem because the features in different classes are falsely mixed, leading to indistinguishable nodes [2]. They have the same phenomenon but different origins. It may not be necessary to combine these two problems.
Besides, MADGap[1] is proposed to evaluate the over-smoothness problem. It is unreliable to use the accuracy and the degree to measure this problem. Therefore, in section 3, the relations between node degrees and the homophily ratio cannot infer the relations between the heterophily and over-smoothness problem.
As a result, the authors should carefully re-organize their paper and results.
A suggestion is to reframe the submission as a new method for learning from heterophily, instead of trying to draw such a close relationship with over-smoothing.
- [1] Measuring and Relieving the Over-smoothing Problem for Graph Neural Networks from the Topological View. AAAI 2020
- [2] Beyond homophily in graph neural networks: Current limitations and effective designs. NeurIPS 2020 | test | [
"-jQZeXiQKT3",
"Z4qr4gQtYPe",
"bVEsfvw99gq",
"hYy544uKuak",
"x1n4DQP9Umh",
"QaUlqSSOS4y",
"8vnvIYihyeU",
"YHtaYu-iWt",
"t-gAJIIJ6Gx",
"s3UTv1e90u3",
"Hvr_gzDXJ5",
"P2hKS37d7xb",
"1nEO7LCQVv",
"DHsggCBPXAk",
"2L3Nr-JVjBX",
"36saRQtKJc",
"HY_420MgDyk",
"OPR8d8EAup",
"dkS4HhHm8T4",
... | [
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer:\n\nWe have made great efforts in updating the theories so that they are more rigorous in the writing. And we also relax our assumptions and modify the proofs accordingly to make the theories more general.\n\nWe would like to know how you think about the current theory or if there's anything you wou... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
3,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
3,
3,
3
] | [
"t-gAJIIJ6Gx",
"YHtaYu-iWt",
"hYy544uKuak",
"s3UTv1e90u3",
"QaUlqSSOS4y",
"Hvr_gzDXJ5",
"iclr_2022_R2aCiGQ9Qc",
"36saRQtKJc",
"OPR8d8EAup",
"dkS4HhHm8T4",
"ifErN7Z2rcl",
"iclr_2022_R2aCiGQ9Qc",
"iclr_2022_R2aCiGQ9Qc",
"iclr_2022_R2aCiGQ9Qc",
"OPR8d8EAup",
"iclr_2022_R2aCiGQ9Qc",
"icl... |
iclr_2022_GIBm-_kax6 | Expected Improvement-based Contextual Bandits | The expected improvement (EI) is a popular technique to handle the tradeoff between exploration and exploitation under uncertainty. However, compared to other techniques as Upper Confidence Bound (UCB) and Thompson Sampling (TS), the theoretical properties of EI have not been well studied even for non-contextual settings such as standard bandit and Bayesian optimization. In this paper, we introduce and study the EI technique as a new tool for the contextual bandit problem which is a generalization of the standard bandit. We propose two novel EI-based algorithms for this problem, one when the reward function is assumed to be linear and the other when no assumption is made about the reward function other than it being bounded. With a linear reward function, we demonstrate that our algorithm achieves a near-optimal regret. In particular, our regret bound reduces a factor of $\sqrt{\text{log}(T)}$ compared to the popular OFUL algorithm \citep{Abbasi11} which uses the UCB approach, and reduces a factor of $\sqrt{d\text{log}(T)}$ compared to another popular algorithm \citep{agrawal13} which uses the TS approach. Here $T$ is the horizon and $d$ is the feature vector dimension. Further, when no assumptions are made about the form of reward, we use deep neural networks to model the reward function. We prove that this algorithm also achieves a near-optimal regret. Finally, we provide an empirical evaluation of the algorithms on both synthetic functions and various benchmark datasets. Our experiments show that our algorithms work well and consistently outperform existing approaches. | Reject | Summary: The paper introduces and studies the expected improvement (EI) technique as a way to balance exploration and exploitation for the contextual bandit problem. The authors propose two EI-based algorithms for linear bandits and for neural bandits for a general class of reward functions. 
The paper presents regret bounds for both methods and shows experimental results to support the theoretical claims.
Discussion: The reviewers have identified technical issues in the regret bound of LinEL, which have now been corrected. Similarly, reviewers have had difficulty assessing the correctness of the paper due to typos and unclear exposition, and raised concerns regarding the number of corrections that were necessary to reach the current stage. There is no consensus among the reviewers, and some would feel more comfortable if the paper could go through another round of review after major revision.
Reviewer UDHj points that after corrections, "the regret has an additional $O(\sqrt{\ln T})$ dependency compared with the regret of LinUCB and Thompson Sampling." and this should be discussed in the updated version.
Reviewer co1L suggests comparing to the bounds in "The End of Optimism? An Asymptotic Analysis of Finite-Armed Linear Bandits". The authors responded that "To our knowledge, the analysis of the optimal asymptotic regret for contextual bandits as we consider (where the context may be controlled by an adversary) is still an open problem." In fact, this is the topic of several recent works including:
* "Asymptotically Optimal Information-Directed Sampling" COLT 2021
* "An asymptotically optimal primal-dual incremental algorithm for contextual linear bandits", NeurIPS 2021
The connections of the present work with these two references are strong and should be discussed in more depth. I believe it is a more important discussion than the comparison with the regret bound of LinTS which is yet another problem.
The reviewers have appreciated the originality of the ideas and for that reason we encourage the authors to revise their draft and submit to a future venue.
Recommendation: Reject. | train | [
"pmFiSGH0oz",
"c0ErgDUPEZM",
"ZJj40XiJuRk",
"gZp0Sj64f55",
"elil5jrynSk",
"K1pp8VuE7J-",
"jllDpqfSQGD",
"0EQ553RlbXD",
"XfvmotjKulD",
"8xLkZOh-rNr",
"I5RWI_Ve0rx",
"sF2IHzIVaGK"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper introduces and studies the expected improvement (EI) technique as a way to balance exploration and exploitation for the contextual bandit problem. The authors propose two EI-based algorithms for linear bandits and for neural bandits for a general class of reward functions. The paper presents regret bound... | [
6,
6,
5,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
4,
4,
4,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2022_GIBm-_kax6",
"iclr_2022_GIBm-_kax6",
"iclr_2022_GIBm-_kax6",
"0EQ553RlbXD",
"iclr_2022_GIBm-_kax6",
"c0ErgDUPEZM",
"elil5jrynSk",
"ZJj40XiJuRk",
"pmFiSGH0oz",
"sF2IHzIVaGK",
"iclr_2022_GIBm-_kax6",
"iclr_2022_GIBm-_kax6"
] |
iclr_2022_wfRZkDvxOqj | Multi-Task Neural Processes | Neural processes have recently emerged as a class of powerful neural latent variable models that combine the strengths of neural networks and stochastic processes. As they can encode contextual data in the network's function space, they offer a new way to model task relatedness in multi-task learning. To study its potential, we develop multi-task neural processes, a new variant of neural processes for multi-task learning. In particular, we propose to explore transferable knowledge from related tasks in the function space to provide inductive bias for improving each individual task. To do so, we derive the function priors in a hierarchical Bayesian inference framework, which enables each task to incorporate the shared knowledge provided by related tasks into its context of the prediction function. Our multi-task neural processes methodologically expand the scope of vanilla neural processes and provide a new way of exploring task relatedness in function spaces for multi-task learning. The proposed multi-task neural processes are capable of learning multiple tasks with limited labeled data and in the presence of domain shift. We perform extensive experimental evaluations on several benchmarks for the multi-task regression and classification tasks. The results demonstrate the effectiveness of multi-task neural processes in transferring useful knowledge among tasks for multi-task learning and superior performance in multi-task classification and brain image segmentation. | Reject | This paper extends neural processes (NPs) to the multi-task setting (MTNPs). The approach uses a hierarchical Bayesian construction, where the latent variables of an NP are conditioned on a set of global task-specific context variables. This allows the NP to share knowledge across related tasks.
There were a few issues raised in the reviews. Consistently, the reviewers noted that the writing could be improved. There were variables, like the context variables M, that lacked explanation. There was also confusion between the use of a Gaussian likelihood for classification vs regression. These were resolved with the author’s response and updated draft.
There were also requests for additional experiments and baselines: 1) a synthetic task, to which the authors included a 1D regression task. 2) More baselines against other multi-task methods, to which the authors included a comparison to Guo et al., 2020, MTAN, and multi-task Gaussian Processes.
Finally, there were questions around whether MTNPs are valid stochastic processes. This has been proven theoretically by the authors, albeit in the appendix.
Currently, this paper remains borderline. The main remaining criticisms are a) A desire for more experiments and analysis to highlight the particular strengths of the approach. b) That the approach is a straightforward extension of NPs, and may not be sufficiently novel. c) That the authors include more baselines from the recent multi-task literature (Yu et al. and Sun et al.). In the end it was determined that the paper does not quite meet the bar for acceptance. I think in future submissions, it would be worthwhile to further highlight MTNP’s performance in the low-data regime, where it particularly seems to do well, and to complete the full set of comparisons (e.g., Sun et al. and Yu et al.) that were requested by the reviewers. | train | [
"rSbKBouiopW",
"yotQ06EF92v",
"JR-4KegJu1D",
"qftnt-IJ2gt",
"jNji4kywbL",
"rJcSWoM9VxJ",
"cT9W531-Gq0",
"UOiwglX8Gwo",
"jj0IHap-tm2",
"c2kvgZf0LMA",
"0M4RCPWnSE1",
"g_WcXXhLDX7",
"MihxQW2hbF"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank you for your response, engagement and encouragement. \n\n**A1:**\nWe choose Yu et. al, ICML2005 as our baseline because they also adopt a hierarchical architecture to model task relatedness, which can provide a head-to-head comparison with our model. \nWe have looked into the paper (Alaa \\& van der Scha... | [
-1,
5,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6
] | [
-1,
4,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"JR-4KegJu1D",
"iclr_2022_wfRZkDvxOqj",
"UOiwglX8Gwo",
"iclr_2022_wfRZkDvxOqj",
"c2kvgZf0LMA",
"yotQ06EF92v",
"iclr_2022_wfRZkDvxOqj",
"0M4RCPWnSE1",
"g_WcXXhLDX7",
"qftnt-IJ2gt",
"MihxQW2hbF",
"iclr_2022_wfRZkDvxOqj",
"iclr_2022_wfRZkDvxOqj"
] |
iclr_2022_tG8QrhMwEqS | Adaptive Activation-based Structured Pruning | Pruning is a promising approach to compress complex deep learning models in order to deploy them on resource-constrained edge devices. However, many existing pruning solutions are based on unstructured pruning, which yield models that cannot efficiently run on commodity hardware, and require users to manually explore and tune the pruning process, which is time consuming and often leads to sub-optimal results. To address these limitations, this paper presents an adaptive, activation-based, structured pruning approach to automatically and efficiently generate small, accurate, and hardware-efficient models that meet user requirements. First, it proposes iterative structured pruning using activation-based attention feature maps to effectively identify and prune unimportant filters. Then, it proposes adaptive pruning policies for automatically meeting the pruning objectives of accuracy-critical, memory-constrained, and latency-sensitive tasks. A comprehensive evaluation shows that the proposed method can substantially outperform the state-of-the-art structured pruning works on CIFAR-10 and ImageNet datasets. For example, on ResNet-56 with CIFAR-10, without any accuracy drop, our method achieves the largest parameter reduction (79.11%), outperforming the related works by 22.81% to 66.07%, and the largest FLOPs reduction (70.13%), outperforming the related works by 14.13% to 26.53%. | Reject | The paper proposes a strategy for incrementally pruning deep learning models based on activation values. The approach can satisfy different kinds of requirements, trading off between accuracy and sparsity.
The approach seems promising and seems to have competitive performance. However, the method is described by reviewers as a combination of ideas that have been proposed in the literature, and the experimental evaluation relies too much on a dataset considered too small to be reliable in such experiments --- CIFAR10. We do not expect substantial experiments within the rebuttal period: such comparisons with relevant SOTA methods should have been present in the submission. Moreover, the strategy proposed for selecting a threshold seems to rely on some doubtful assumptions, and there are no benchmarks on actual runtime.
The writing has improved based on reviewer input, and the reviewers are satisfied with this aspect. I would still add that I would prefer some clarity in the method presentation: is there a quantity being optimized? Is there a value we can monitor to ensure our reimplementation is correct? etc. In addition, I would like to ask the authors in the next revision to be mindful of the difference between `\citet` and `\citep` in author-year citations -- see e.g. the first two in Section 3.1.
"cLdBWCliQcN",
"0edlJDXef-_",
"747N5kDrz7"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes iterative structured pruning methods using activation-based attention feature maps and an adaptive threshold selection strategy. Inspired by attention transfer, Activation-based attention feature maps are constructed as the important evaluation of filters in each layer. Adaptive threshold selec... | [
3,
5,
5
] | [
5,
3,
3
] | [
"iclr_2022_tG8QrhMwEqS",
"iclr_2022_tG8QrhMwEqS",
"iclr_2022_tG8QrhMwEqS"
] |
iclr_2022_wClmeg9u7G | Distributed Methods with Compressed Communication for Solving Variational Inequalities, with Theoretical Guarantees | Variational inequalities in general and saddle point problems in particular are increasingly relevant in machine learning applications, including adversarial learning, GANs, transport and robust optimization. With increasing data and problem sizes necessary to train high performing models across these and other applications, it is necessary to rely on parallel and distributed computing. However, in distributed training, communication among the compute nodes is a key bottleneck during training, and this problem is exacerbated for high dimensional and over-parameterized models. Due to these considerations, it is important to equip existing methods with strategies that would allow reducing the volume of transmitted information during training while obtaining a model of comparable quality. In this paper, we present the first theoretically grounded distributed methods for solving variational inequalities and saddle point problems using compressed communication: MASHA1 and MASHA2. Our theory and methods allow for the use of both unbiased (such as Rand$k$; MASHA1) and contractive (such as Top$k$; MASHA2) compressors. We empirically validate our conclusions using two experimental setups: a standard bilinear min-max problem, and large-scale distributed adversarial training of transformers. | Reject | In this paper, the authors consider two algorithms for solving (strongly) monotone variational inequalities with compressed communication guarantees, MASHA1 and MASHA2. MASHA1 is a variant of a recent algorithm proposed by Alacaoglu and Malitsky, while MASHA2 is a variant of MASHA1 that relies on contractive compressors (by contrast, MASHA1 only involves unbiased compressors). The authors then show that
- MASHA1 converges at a linear rate (in terms of distance to a solution squared), and at a $1/k$ rate when taking its ergodic average (in terms of the standard VI gap function).
- MASHA2 converges at a linear rate (in terms of distance to a solution squared).
Even though the paper's premise is interesting, the reviewers raised several concerns which were only partially addressed by the authors' rebuttal. One such concern is that the improvement over existing methods is a multiplicative factor of the order of $\mathcal{O}(\sqrt{1/q + 1/M})$ in terms of communication complexity (number of transmitted bits) for the RandK compressor, which was not deemed sufficiently substantive in a VI setting (relative to e.g., wall-clock time, which is not discussed).
After the discussion with the reviewers during the rebuttal phase, the paper was not championed and it was decided to make a borderline "reject" recommendation. At the same time, I would strongly urge the authors to resubmit a properly revised version of their paper at the next opportunity (describing in more detail the innovations from the template method of Alacaoglu and Malitsky, as well as including a more comprehensive cost-benefit discussion of the stated improvements for the RandK/TopK compressors). | train | [
"-nc99KgEoht",
"WaFHyRWs9Ae",
"9PHHw2g3kPk",
"Xa6bf072MY",
"ZhjTnvRCN2c",
"nUIiLxOIEGq",
"7Xc7oA6IWiB",
"s-nRt2DzksQ",
"n8CKurBDfuy",
"b4DbDnKGiky",
"xt9mCMZbff",
"bzEbSZpSIQg",
"y3tuTZ7NZ7M",
"UG0VN2oF4wf",
"i-_b4kSP2dN",
"5PqJr5rtaNn",
"_zz7ArQfApc",
"MKcuzWsrLUC",
"lhZzUJEpW9L... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author... | [
" Dear Reviewer,\n\n- We fail to see the point of this comment. How is this relevant to our work? Talking about QSGD, which is a now a famous optimization method, albeit with a suboptimal (since it's old and many works improve on it), and in the case of Theorem 3.5 incorrect, analysis **is not the most efficient w... | [
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
3
] | [
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
4
] | [
"9PHHw2g3kPk",
"Xa6bf072MY",
"ZhjTnvRCN2c",
"xt9mCMZbff",
"Xa6bf072MY",
"iclr_2022_wClmeg9u7G",
"WCg4r3AADM",
"iclr_2022_wClmeg9u7G",
"WCg4r3AADM",
"WCg4r3AADM",
"WCg4r3AADM",
"YeLgdYTLL0n",
"YeLgdYTLL0n",
"YeLgdYTLL0n",
"YeLgdYTLL0n",
"YeLgdYTLL0n",
"YeLgdYTLL0n",
"YeLgdYTLL0n",
... |
iclr_2022_gex-2G2bLdh | Hinge Policy Optimization: Rethinking Policy Improvement and Reinterpreting PPO | Policy optimization is a fundamental principle for designing reinforcement learning algorithms, and one example is the proximal policy optimization algorithm with a clipped surrogate objective (PPO-clip), which has been popularly used in deep reinforcement learning due to its simplicity and effectiveness. Despite its superior empirical performance, PPO-clip has not been justified via theoretical proof up to date. This paper proposes to rethink policy optimization and reinterpret the theory of PPO-clip based on hinge policy optimization (HPO), called to improve policy by hinge loss in this paper. Specifically, we first identify sufficient conditions of state-wise policy improvement and then rethink policy update as solving a large-margin classification problem with hinge loss. By leveraging various types of classifiers, the proposed design opens up a whole new family of policy-based algorithms, including the PPO-clip as a special case. Based on this construct, we prove that these algorithms asymptotically attain a globally optimal policy. To our knowledge, this is the first ever that can prove global convergence to an optimal policy for a variant of PPO-clip. We corroborate the performance of a variety of HPO algorithms through experiments and an ablation study. | Reject | All reviewers agreed that analysis of PPO is interesting.
During the discussion, however, there was an agreement that the current work is too thin in novelty and contribution: it provides only a convergence analysis under very strong assumptions, and builds heavily on techniques from prior works. Meanwhile, for conventional policy gradient, recent works have provided convergence rates.
As one reviewer pointed out, this work does not further our theoretical understanding of why PPO is better than vanilla policy gradient, as all the established results hold for policy gradient, even with fewer assumptions.
I encourage the authors to strengthen their paper by relaxing Assumption 4 (perhaps based on the robust classification idea raised in the discussion), and by further providing rate results. | train | [
"pYEvFgNwndH",
"6if0KuGa6qT",
"-rmhxwR4lzp",
"3PrvzBbw0n_",
"h0ZLP0AHqD",
"WPDNfBoqbc",
"uG5tj4EH9xE",
"makt0Lmddan",
"fyDrH6r5FAY",
"KhIzk3gO--J",
"Fh1UuYopnZ",
"qO6oyRa6KOS",
"f2ONKE20W4u",
"Wwh5-94hJIn",
"HGFy4GX4r58",
"LWI-_btOr86",
"npVpe0CwIBp",
"OCYqlafhAiK"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the follow-up question. Despite that we do not have a formal proof for this label-robust formulation yet, we expect this label-robust formulation to be more accessible than other existing formulations of robust classification since the original HPO loss in Eq(11) is a special case with $... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4,
2
] | [
"WPDNfBoqbc",
"h0ZLP0AHqD",
"h0ZLP0AHqD",
"HGFy4GX4r58",
"HGFy4GX4r58",
"uG5tj4EH9xE",
"makt0Lmddan",
"Fh1UuYopnZ",
"HGFy4GX4r58",
"HGFy4GX4r58",
"LWI-_btOr86",
"OCYqlafhAiK",
"npVpe0CwIBp",
"npVpe0CwIBp",
"iclr_2022_gex-2G2bLdh",
"iclr_2022_gex-2G2bLdh",
"iclr_2022_gex-2G2bLdh",
"... |
iclr_2022_zhynF6JnC4q | Adaptive Q-learning for Interaction-Limited Reinforcement Learning | Conventional reinforcement learning (RL) needs an environment to collect fresh data, which is impractical when online interaction is costly.
Offline RL provides an alternative solution by directly learning from the logged dataset. However, it usually yields unsatisfactory performance due to a pessimistic update scheme and/or the low quality of logged datasets.
Moreover, how to evaluate the policy under the offline setting is also a challenging problem.
In this paper, we propose a unified framework called Adaptive Q-learning for effectively taking advantage of offline and online learning.
Specifically, we explicitly consider the difference between the online and offline data and apply an adaptive update scheme accordingly, i.e., a pessimistic update strategy for the offline dataset and a greedy, non-pessimistic update scheme for the online dataset.
When combining both, we can apply very limited online exploration steps to achieve expert performance even when the offline dataset is poor, e.g., random dataset.
Such a framework provides a unified way to mix the offline and online RL and gain the best of both worlds.
To understand our framework better, we then provide an instantiation that follows our framework's setting.
Extensive experiments are done to verify the effectiveness of our proposed method. | Reject | In this paper, the authors studied reinforcement learning applications that have access to both online and offline data (with limited online interaction though). In order to handle the mixture of online and offline data efficiently, the authors proposed a new paradigm called adaptive Q-learning, which treats offline and online data differently (as reflected by whether pessimism is implemented or not). The effectiveness of the proposed paradigm has been tested empirically. The reviewers have raised concerns about the sufficiency and significance of the experiments conducted in the paper, and pointed out that the proposed algorithmic idea is a somewhat incremental change over existing ones. The changes the authors promised to make will make the paper stronger. | train | [
"JCEmdwY_Wjr",
"dBfH9BJUpoh",
"ja1dtKo2Ehb",
"y5dVY4N5ljm",
"P0zF5A5odq",
"qEC8J6gA-3B",
"XOAyRsroqRg",
"C3EI9qOi5YZ",
"XgZ1PvCE4YS",
"ksmWYq-b-T",
"0hbpFlSbCXs",
"Y8px7UNV0Ma",
"O1PzZV75LFC",
"56sR7wcCboW"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose a mixed offline-online RL approach for which they design an algorithm. They propose to maintain 2 separate replay buffers, one for online data and one for offline data, to allow them to sample either an online or offline batch of data when doing an update step, and tailor the loss function base... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"iclr_2022_zhynF6JnC4q",
"Y8px7UNV0Ma",
"y5dVY4N5ljm",
"C3EI9qOi5YZ",
"XOAyRsroqRg",
"56sR7wcCboW",
"C3EI9qOi5YZ",
"JCEmdwY_Wjr",
"iclr_2022_zhynF6JnC4q",
"qEC8J6gA-3B",
"O1PzZV75LFC",
"iclr_2022_zhynF6JnC4q",
"iclr_2022_zhynF6JnC4q",
"iclr_2022_zhynF6JnC4q"
] |
iclr_2022_QbFfqWAEmMr | LASSO: Latent Sub-spaces Orientation for Domain Generalization | To achieve a satisfactory generalization performance on prediction tasks in an unseen domain, existing domain generalization (DG) approaches often rely on the strict assumption of fixed domain-invariant features and common hypotheses learned from a set of training domains. While it is a natural and important premise to ground generalization capacity on the target domain, we argue that this assumption could be overly strict and sub-optimal. It is particularly evident when source domains share little information or the target domain leverages information from selective source domains in a compositional way instead of relying on a unique invariant hypothesis across all source domains. Unlike most existing approaches, instead of constructing a single hypothesis shared among domains, we propose a LAtent Sub-Space Orientation (LASSO) method that explores diverse latent sub-spaces and learns individual hypotheses on those sub-spaces. Moreover, in LASSO, since the latent sub-spaces are formed by the label-informative features captured in source domains, they allow us to project target examples onto appropriate sub-spaces, while preserving crucial label-informative features for the label prediction. Finally, we empirically evaluate our method on several well-known DG benchmarks, where it achieves state-of-the-art results. | Reject | This paper proposes a novel method for improving domain generalization based on the idea of learning different subspaces for each domain. Authors provide theoretical analysis related to their proposal and further evaluate their proposed method on a subset of the DomainBed benchmark.
**Strong Points:**
- The paper is well-written.
- The proposed method is novel.
- Authors provide theoretical analysis in support of their proposal.
- The theoretical results seem to be correct.
- Empirical evaluation shows that the proposed method improves over baselines on a subset of datasets included in the DomainBed benchmark.
**Weak Points:**
- The complexity of the theoretical results makes it very difficult for the reader to get any intuition about the underlying mechanisms at play.
- The theoretical analysis is disconnected from the proposed algorithm. It is hard to see how one could end up proposing such an algorithm following the theoretical results. I suggest that the authors consider reorganizing the paper with less emphasis on the theoretical part, perhaps simplifying the theoretical results and pushing the rest to the appendix.
- The empirical evaluation can be improved significantly. Domain generalization is a very well-established area at this point. WILDS is a carefully designed and well-known benchmark, and showing improvement on that benchmark would be very convincing, but unfortunately the authors do not discuss or even refer to it. They instead report their results on a subset of datasets used in the DomainBed benchmark. The DomainBed benchmark is less challenging than WILDS, but even following DomainBed closely and reporting the 3 evaluation metrics on all 7 datasets would have been satisfying. However, the authors only report the results on 3 datasets. Reporting the results on a diverse group of datasets is particularly important in the case of Domain Generalization because we know that many methods are able to show improvements on a few datasets but it is challenging to beat the baselines on a significant majority of datasets.
**Final Decision Rationale**:
This is a borderline paper. On one hand, the proposed method is interesting and novel. On the other hand, the theoretical contributions are very limited and the empirical evaluation is not strong enough for acceptance. Given that all weak points mentioned above can be addressed, I recommend rejection and I sincerely hope that authors would strengthen their paper by addressing them before resubmitting their work. | val | [
"cyu9tWrp2J4",
"95F5UrrWWHU",
"6vrkpTybUhn",
"QibheObhJ5y",
"gle3EDLWXv2",
"aI2MmnF_-0",
"e5StVL_y7LX",
"dw87roy3CRf",
"KXWOXYiInCb",
"DhJ3ZLU3oIG",
"XHiWrG3qime",
"WurxQCmBy44",
"j484Q9yUEuw",
"lqvMOAigH2k",
"plkMoJ8ZTb3",
"k8EOQgktH0",
"BspsVHycRI",
"ZcsNvSB5Bq8",
"ZMi_PFcaueK"... | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
... | [
" Many thanks for your reconsideration. We really appreciate it and will definitely update those discussions in the final version of this work. \n\nIf it is not too much trouble to ask, could you please also update the score in the original review to reflect your current positive decision?\n\nBest regards,\n\nAutho... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"6vrkpTybUhn",
"dw87roy3CRf",
"aI2MmnF_-0",
"iclr_2022_QbFfqWAEmMr",
"XHiWrG3qime",
"e5StVL_y7LX",
"lqvMOAigH2k",
"plkMoJ8ZTb3",
"iclr_2022_QbFfqWAEmMr",
"iclr_2022_QbFfqWAEmMr",
"k8EOQgktH0",
"j484Q9yUEuw",
"ekKSAjGQVn",
"6P97a8Awj8",
"8Uk0CpP6fMg",
"XUBLhkacNXQ",
"BgQ2k64N1rx",
"... |
iclr_2022_o6dG7nVYDS | Finding lost DG: Explaining domain generalization via model complexity | The domain generalization (DG) problem setting challenges a model trained on multiple known data distributions to generalise well on unseen data distributions. Due to its practical importance, a large number of methods have been proposed to address this challenge. However, most of this work is empirical, as the DG problem is hard to model formally, and recent evaluations have cast doubt on existing methods' practical efficacy -- in particular compared to a well-chosen empirical risk minimisation baseline.
We present a novel learning-theoretic generalisation bound for DG that bounds novel domain generalisation performance in terms of the model’s Rademacher complexity. Based on this, we conjecture that the causal factor behind existing methods’ efficacy or lack thereof is a variant of the standard empirical risk-predictor complexity tradeoff, and demonstrate that their performance variability can be explained in these terms. Algorithmically, this analysis suggests that domain generalisation should be achieved by simply performing regularised ERM with a leave-one-domain-out cross-validation objective. Empirical results on the DomainBed benchmark corroborate this. | Reject | This paper provides a learning theoretic account of domain generalization in which domains themselves are treated as data, generated from some domain generating distribution. All of the reviewers were positive about this approach and found it interesting. There were, however, a couple of critiques raised by reviewers that lead me to recommend that it is rejected:
- the theory provided in this paper does not remotely apply to the datasets that are used in the experiments. While I agree with one of the author responses that DG benchmarks with many domains exist, DomainBed has very few domains, and it is not clear that their theory is a remotely satisfactory account of the experimental results presented in the paper.
- Despite some back and forth on the wording and positioning of the paper, I think the writing still does not give enough credit to worst-case analyses of DG. | train | [
"xFHkGdXfEpX",
"AllYCzz2-aK",
"RqoT8j15Nay",
"SFVNIpvLToH",
"UkYYRHotDly",
"CSgut2UdFkf",
"sLtoH9qg2JZ",
"ha8V1kii3eq",
"Cu5wZjFRLdQ",
"VED-LTuiVO3",
"UdycJ4gZViK",
"WYdAzlfqfry",
"0TcScqfmluI",
"DS8kx2W97lU"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper considers the problem of domain generalization (DG), wherein predictors are trained on a related set of training domains and evaluated on an unseen test domain. The authors first present a learning-theoretic bound on the performance of an average-case formulation for DG, and then present a set of exper... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
5,
3
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"iclr_2022_o6dG7nVYDS",
"Cu5wZjFRLdQ",
"ha8V1kii3eq",
"UdycJ4gZViK",
"iclr_2022_o6dG7nVYDS",
"sLtoH9qg2JZ",
"VED-LTuiVO3",
"0TcScqfmluI",
"xFHkGdXfEpX",
"DS8kx2W97lU",
"WYdAzlfqfry",
"iclr_2022_o6dG7nVYDS",
"iclr_2022_o6dG7nVYDS",
"iclr_2022_o6dG7nVYDS"
] |
iclr_2022_saNgDizIODl | NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks | This paper proposes a fast and scalable method for uncertainty quantification of machine learning models' predictions. First, we show a principled way to measure the uncertainty of predictions for a classifier based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution. Importantly, the approach allows us to explicitly disentangle \textit{aleatoric} and \textit{epistemic} uncertainties. The resulting method works directly in the feature space. However, one can apply it to any neural network by considering an embedding of the data induced by the network. We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets, such as MNIST, SVHN, CIFAR-100 and several versions of ImageNet. | Reject | The paper proposes a simple approach to quantify uncertainty in "deterministic" neural networks, not unlike the works of SNGP, DDU, and DUE, where one performs only a single forward pass rather than using an ensemble or Monte Carlo sampling. In particular, they propose a kernel-based method on a network's logits to estimate uncertainty, obtaining data and model uncertainty estimates separately using a bound on Bayes risk.
While I agree with the relevance of the problem, there is a shared concern among reviewers about both technical novelty and experimental validation, particularly in comparison to prior work, where it can be difficult to identify the key distinguishing factor. I recommend the authors use the reviewers' feedback to enhance their preprint should they aim to submit to a later venue.
"6iki8sfHZAh",
"zj0yNWcQQur",
"SNOgZu5_Ou",
"eg5fpqckkUm",
"72y2tBWk__W",
"P78G2xeTG6z",
"fFikSvJik3a",
"80991Mv30kC",
"LAmm0XoWepB",
"LQE6GgrhxG_",
"EmRxFTCRQP_",
"yBLZvDjjAD",
"uS_nMplJdeS"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Many thanks for your comments. We additionally performed a series of experiments with SNGP and DUQ methods. \n\nTo benchmark the DUQ on ImageNet we had to make some modifications to the original paper code. The original paper proposes to train DUQ end-to-end with gradient penalty, but we failed to make it converg... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
4,
4
] | [
"P78G2xeTG6z",
"SNOgZu5_Ou",
"72y2tBWk__W",
"80991Mv30kC",
"LAmm0XoWepB",
"EmRxFTCRQP_",
"iclr_2022_saNgDizIODl",
"LQE6GgrhxG_",
"uS_nMplJdeS",
"fFikSvJik3a",
"yBLZvDjjAD",
"iclr_2022_saNgDizIODl",
"iclr_2022_saNgDizIODl"
] |
iclr_2022_GlN8MUkciwi | Learning Context-Adapted Video-Text Retrieval by Attending to User Comments | Learning strong representations for multi-modal retrieval is an important problem for many applications, such as recommendation and search. Current benchmarks and even datasets are often manually constructed and consist of mostly clean samples where all modalities are well-correlated with the content. Thus, current video-text retrieval literature largely focuses on video titles or audio transcripts, while ignoring user comments, since users often tend to discuss topics only vaguely related to the video.
In this paper we present a novel method that learns meaningful representations from videos, titles and comments, which are abundant on the internet. Due to the nature of user comments, we introduce an attention-based mechanism that allows the model to disregard text with irrelevant content.
In our experiments, we demonstrate that, by using comments, our method is able to learn better, more contextualised, representations, while also achieving competitive results on standard video-text retrieval benchmarks.
 | Reject | This paper focuses on how to improve video-text retrieval by using additional user comments, and uses an attention mechanism to filter out the irrelevant comments. The main contribution is a context adapter module that allows learning from the auxiliary modality through an attention mechanism. The reviewers appreciated the overall idea's intuition and well-written paper, but they also felt that the technical novelty is incremental, and that the treatment of user comments should be more intuitive via the dialogue thread structure. There were also concerns about the applicability of the context adapter module to more realistic scenarios with much longer videos, where the number of comments is very large, and where the number of distractor comments is larger than the non-distractor ones. | train | [
"o7oHQrpAJ_h",
"5BNT6E4d42",
"me08J42G9Ox",
"ygzPpddTI3R",
"XF2cReHFu-z",
"qql2GbHhnU8",
"G5bxmzOwxOG",
"EQY99L-2lHi",
"nFOXmqhx3-5",
"v-adime4iNU",
"ZOXU2VGvZeJ",
"ZamHy5RJAHW",
"OAbPc1sii0H",
"7Ecf4o9Bvb",
"PRjjVeVzgu0",
"EketF6f50PG"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We are glad we have cleared your concerns! If there is nothing left, we would appreciate if you could adjust your scores after taking the rebuttal into account!",
" > Edit after response: Thank you for your detailed response. I would like to agree again that the proposed method has some merit. However, my main ... | [
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"ygzPpddTI3R",
"me08J42G9Ox",
"iclr_2022_GlN8MUkciwi",
"v-adime4iNU",
"iclr_2022_GlN8MUkciwi",
"ZamHy5RJAHW",
"v-adime4iNU",
"ZOXU2VGvZeJ",
"OAbPc1sii0H",
"EketF6f50PG",
"PRjjVeVzgu0",
"me08J42G9Ox",
"7Ecf4o9Bvb",
"iclr_2022_GlN8MUkciwi",
"iclr_2022_GlN8MUkciwi",
"iclr_2022_GlN8MUkciwi... |
iclr_2022_rMbLORc8oS | SemiRetro: Semi-template framework boosts deep retrosynthesis prediction | Retrosynthesis brings scientific and societal benefits by inferring possible reaction routes toward novel molecules. Recently, template-based (TB) and template-free (TF) molecule graph learning methods have shown promising results to solve this problem. TB methods are more accurate using pre-encoded reaction templates, and TF methods are more scalable by decomposing retrosynthesis into subproblems, i.e., center identification and synthon completion. To combine both advantages of TB and TF, we suggest breaking a full-template into several semi-templates and embedding them into the two-step TF framework. Since many semi-templates are reduplicative, the template redundancy can be reduced while the essential chemical knowledge is still preserved to facilitate synthon completion. We call our method SemiRetro and introduce a directed relational graph attention (DRGAT) layer to extract expressive features for better center identification. Experimental results show that SemiRetro significantly outperforms both existing TB and TF methods. In scalability, SemiRetro covers 96.9\% of the data using 150 semi-templates, while the previous template-based GLN requires 11,647 templates to cover 93.3\% of the data. In top-1 accuracy, SemiRetro exceeds template-free G2G by 3.4\% (class known) and 6.4\% (class unknown). Besides, SemiRetro has better interpretability and training efficiency than existing methods. | Reject | This paper proposes a new SemiRetro algorithm by combining the two major approaches of retrosynthesis, the template-based method and the template-free method - breaking a full-template into several semi-templates and embedding them into the two-step template-free framework. They also obtained state-of-the-art performance in this task based on the recent GNN architecture.
Although all reviewers were satisfied with the idea and excellent performance of this paper, the possibility of information leakage in their experiments was raised during the discussion period, and the authors seem to agree with this to some extent.
In conclusion, it is difficult to say that an accurate and rigorous experimental verification of the proposed method has been carried out yet. I encourage the authors to resubmit the paper after correcting the errors in their experiments.
"eCIj3i4dXDx",
"mzXxGuQYRk",
"hNADFaCBwi",
"UNgFnZcwFX",
"4P1A9_pjC8n",
"j2lYiA8cN1v",
"rxIPfd6hkfH",
"rc4dMveb5R",
"a1Cid-n5wcM",
"sqLhECxbDFy",
"OvigrSF7Hq6",
"7FJLjmSW-Wy",
"QAVZy9Wdg8g",
"mQheZ7z8a8",
"GEBC9QGzKzl",
"7BERHx5nZDI",
"DO6VMovqfwa",
"IZGXBTm1rpC",
"iyAfFOc2SlH",
... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_revi... | [
" ## Reducing score due to concerns about data leakage\nUnfortunately, as a result of the rebuttal/discussion period I have decided to reduce my overall and correctness scores. This is because Reviewer XXdN and the authors identified a data leak issue in the current implementation (see the comment and resulting thr... | [
-1,
3,
5,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
4,
3,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
"mzXxGuQYRk",
"iclr_2022_rMbLORc8oS",
"iclr_2022_rMbLORc8oS",
"iclr_2022_rMbLORc8oS",
"j2lYiA8cN1v",
"rxIPfd6hkfH",
"rc4dMveb5R",
"a1Cid-n5wcM",
"sqLhECxbDFy",
"OvigrSF7Hq6",
"7FJLjmSW-Wy",
"QAVZy9Wdg8g",
"mQheZ7z8a8",
"GEBC9QGzKzl",
"7BERHx5nZDI",
"IZGXBTm1rpC",
"B1E-DBw2IN6",
"dr... |
iclr_2022_b30Yre8MzuN | NeuroSED: Learning Subgraph Similarity via Graph Neural Networks | Subgraph similarity search is a fundamental operator in graph analysis. In this framework, given a query graph and a graph database, the goal is to identify subgraphs of the database graphs that are structurally similar to the query. Subgraph edit distance (SED) is one of the most expressive measures of subgraph similarity. In this work, we study the problem of learning SED from a training set of graph pairs and their SED values. Towards that end, we design a novel siamese graph neural network called NeuroSED, which learns an embedding space with a rich structure reminiscent of SED. With the help of a specially crafted inductive bias, NeuroSED not only enables high accuracy but also ensures that the predicted SED, like true SED, satisfies the triangle inequality. The design is generic enough to also model graph edit distance (GED), while ensuring that the predicted GED space is metric, like the true GED space. Extensive experiments on real graph datasets, for both SED and GED, establish that NeuroSED achieves $\approx 2$ times lower RMSE than the state of the art and is $\approx 18$ times faster than the fastest baseline. Further, owing to its pair-independent embeddings and theoretical properties, NeuroSED allows orders-of-magnitude faster graph/subgraph retrieval. | Reject | The paper proposes a new method for subgraph similarity search by learning embeddings via a GNN-based approach to reflect the edit distance between subgraphs. Reviewers highlighted that the paper proposes an intuitive and promising approach to an interesting problem and provides a good balance between theoretical and empirical results. However, reviewers also raised concerns regarding the significance of technical contributions, limited analysis (e.g., performance on large-scale graphs, baselines, evaluation) and comparison to related work.
After author response and discussion, reviewers did not come to a full agreement with two reviewers indicating weak acceptance and two reviewers indicating (weak) reject. Taking rebuttal and discussion into account, I agree with the viewpoint that the paper is not yet ready for acceptance at ICLR as it would require an additional revision to fully address the raised concerns. However, I encourage the authors to revise and resubmit their manuscript based on the feedback from this reviewing round. | train | [
"ezpB2buvbe",
"XVHoagg_p7",
"_Y-zYOsp1a-",
"5MzO2iSLqaR",
"aHoun-rRu9H",
"5SNrTgQqiqB",
"eYHzRUXB38y",
"2ST_YRYx04X",
"iejV0kSrl6-",
"gUPm8hZUXdh",
"-a46xULgS1r",
"OAmOQN6sME",
"8q-CSU-jn6s",
"svD6eW0rm9I"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper addresses the problem of graph/subgraph similarity search in terms of the edit distance, termed GED/SED, respectively. Unfortunately, both GED and SED are NP hard, with exponential search space, making the cost prohibitive on even moderately large graphs/queries. Instead, the authors apply a neural netwo... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"iclr_2022_b30Yre8MzuN",
"_Y-zYOsp1a-",
"-a46xULgS1r",
"aHoun-rRu9H",
"5SNrTgQqiqB",
"ezpB2buvbe",
"2ST_YRYx04X",
"iejV0kSrl6-",
"8q-CSU-jn6s",
"svD6eW0rm9I",
"OAmOQN6sME",
"iclr_2022_b30Yre8MzuN",
"iclr_2022_b30Yre8MzuN",
"iclr_2022_b30Yre8MzuN"
] |
iclr_2022_q2DCMRTvdZ- | Picking up the pieces: separately evaluating supernet training and architecture selection | Differentiable Neural Architecture Search (NAS) has emerged as a simple and efficient method for the automated design of neural networks. Recent research has demonstrated improvements on various aspects of the original algorithm (DARTS), but comparative evaluation of these advances remains costly and difficult. We frame supernet NAS as a two-stage search, decoupling the training of the supernet from the extraction of a final design from the supernet. We propose a set of metrics which utilize benchmark data sets to evaluate each stage of the search process independently. We demonstrate two metrics measuring separately the quality of the supernet's shared weights and the quality of the learned sampling distribution, as well as corresponding statistics approximating the reliance of the second-stage search on these components of the supernet. These metrics both facilitate more robust evaluation of NAS algorithms and provide a practical method for designing complete NAS algorithms from separate supernet training and architecture selection techniques. | Reject | All reviewers recommended rejection, and I agree.
I encourage the authors to follow the reviewers' recommendation and resubmit. | train | [
"xpMR5ha4quH",
"uZDjJw40ne",
"EAgXXjHBgBd",
"yLkiQs6yoHP",
"XXPR1glNXNT",
"n6UvGsGyi83",
"xuLhEtzKl-5",
"dNFH-gyubKa",
"L9OGGTbsAdU",
"l4fX3vi3gD-",
"R6rqny9Sot"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors study the question of evaluating differentiable methods for neural architecture search (NAS). Differentiable techniques are popular in the NAS community, and many recent papers have given criticisms or improvements to various parts of the DARTS algorithm and related algorithms. The authors propose a ne... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
5,
4
] | [
"iclr_2022_q2DCMRTvdZ-",
"xuLhEtzKl-5",
"R6rqny9Sot",
"l4fX3vi3gD-",
"L9OGGTbsAdU",
"dNFH-gyubKa",
"xpMR5ha4quH",
"iclr_2022_q2DCMRTvdZ-",
"iclr_2022_q2DCMRTvdZ-",
"iclr_2022_q2DCMRTvdZ-",
"iclr_2022_q2DCMRTvdZ-"
] |
iclr_2022_O2s9k4h0x7L | A Deep Latent Space Model for Directed Graph Representation Learning | Graph representation learning is a fundamental problem for modeling relational data and benefits a number of downstream applications. Traditional Bayesian-based random graph models and recent deep learning based methods are complementary to each other in interpretability and scalability. To take the advantages of both models, some combined methods have been proposed. However, existing models are mainly designed for \textit{undirected graphs}, while a large portion of real-world graphs are directed. The focus of this paper is on \textit{directed graphs}. We propose a Deep Latent Space Model (DLSM) for directed graphs to incorporate the traditional latent space random graph model into deep learning frameworks via a hierarchical variational auto-encoder architecture. To adapt to directed graphs, our model generates multiple highly interpretable latent variables as node representations, and the interpretability of representing node influences is theoretically proved. Moreover, our model achieves good scalability for large graphs via the fast stochastic gradient variational Bayes inference algorithm. The experimental results on real-world graphs demonstrate that our proposed model achieves the state-of-the-art performances on link prediction and community detection tasks while generating interpretable node representations. | Reject | This to me looks like quality work not yet adequately developed, and thus is borderline work. The authors seem to have achieved a good result: equalling SotA SEAL (although, one reviewer did preliminary experiments and could not match this) with a sophisticated algorithm using a variety of Bayesian tricks, a more scalable algorithm, and one potentially adapted to further tasks. However, not all of these impressive feats are adequately demonstrated in this paper, though many had parts included in the rewrite. 
So I'd say the paper needs a rewrite and more focused experimental work to broaden the presentation of empirical performance, for instance to node classification.
I certainly appreciated the use of IBP and Dirichlet models within the system, so would love to see the work further developed.
The reviewers agreed on several aspects: (1) more experimental work, for instance on better and larger benchmark data, (2) better presentation and discussion of the theory, (3) better discussion of the motivation for the model (as per reviewer D8S8), often linked to the ablation study to support this, which you have partially done, and (4) additional connections to recent related work on link prediction in graph representation learning.
The authors have done a good job of addressing many of the reviewers' concerns, ultimately lifting the paper from Reject to Borderline Negative, but I think more work is needed. | train | [
"ZWNpV6MSq0M",
"0gRLIQWQA9H",
"w9PKp0lldEk",
"keS38rwuX4P",
"VoMP8geZm6e",
"Uz7eUad9C9z",
"rI1I1o2Rp1B",
"XQT5wohhcPQ",
"wJi3Lx1Y5k",
"nPx6d6vy3Td",
"SAk-1Jjh3x",
"DIX9WZMtsNn",
"T8QFb0WL-rC",
"y51ycOcFVwr",
"oAtbZn_KaZ5",
"H0xMfjXVsHr",
"XvnxHnSH-MP"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewers’ suggestions of evaluating on OGB and we consider updating the experimental results in the future. As the reviewer have said, scalability is indeed another challenging issue and remains an open research question of generative methods of graph representation learning.\n\nWe agree that there ... | [
-1,
-1,
-1,
3,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6
] | [
-1,
-1,
-1,
3,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"VoMP8geZm6e",
"VoMP8geZm6e",
"T8QFb0WL-rC",
"iclr_2022_O2s9k4h0x7L",
"Uz7eUad9C9z",
"wJi3Lx1Y5k",
"XQT5wohhcPQ",
"iclr_2022_O2s9k4h0x7L",
"SAk-1Jjh3x",
"XvnxHnSH-MP",
"keS38rwuX4P",
"H0xMfjXVsHr",
"H0xMfjXVsHr",
"XQT5wohhcPQ",
"keS38rwuX4P",
"iclr_2022_O2s9k4h0x7L",
"iclr_2022_O2s9k... |
iclr_2022_in1ynkrXyMH | Introspective Learning : A Two-Stage approach for Inference in Neural Networks | In this paper, we advocate for two stages in a neural network's decision making process. The first is the existing feed-forward inference framework where patterns in given data are sensed and associated with previously learned patterns. The second stage is a slower reflection stage where we ask the network to reflect on its feed-forward decision by considering and evaluating all available choices. Together, we term the two stages as introspective learning. We use gradients of trained neural networks as a measurement of this reflection. We perceptually visualize the explanations from both stages to provide a visual grounding to introspection. For the application of recognition, we show that an introspective network is $4\%$ more robust and $42\%$ less prone to calibration errors when generalizing to noisy data. We also illustrate the value of introspective networks in downstream tasks that require generalizability and calibration including active learning and out-of-distribution detection. Finally, we ground the proposed machine introspection to human introspection in the application of image quality assessment. | Reject | The reviewers all appreciated the novel concept behind the work. I agree with this, I think the principles behind the work are novel and interesting, and I would encourage the authors to improve the validation of this method and publish it in the future.
However, reviewers also raised a number of issues with the current paper: (1) the evaluation appears a bit preliminary, and could be improved significantly with additional datasets and more ablations/comparisons; (2) it's not clear if the improvements from the method are especially significant; (3) the writing could be improved (I do see that the authors made a significant number of changes and improved parts of the paper in response to reviewer concerns to a degree). Probably the writing issues could be fixed, but the skepticism about the experiment results seems harder to address, and while I recognize that the authors made an effort to point some existing ablations in the paper that do address parts of what the reviewers raised, I do think that in the balance the experimental results leave the validation of the work as somewhat borderline.
While less important for the decision, I found that the paper is somewhat overselling the contribution in the opening -- while the particular concept of using gradients as features in this way is interesting, similar ideas have been proposed in the past, and the paper would probably be better if it was more clearly positioned in the context of prior work rather than trying to present a new "framework" like this. It kind of feels like it's biting off too much in the opening, and then delivering a comparatively more modest (but novel and interesting!) technical component. | test | [
"I1s0jiRtwX0",
"Yu3QtqHjIIG",
"-FS65OORU-H",
"RbJP0BrKflm",
"tDZasCvn5zU",
"8V8tkrUUVbK",
"AMAB8aW3-Pu",
"KYoc71ugd_O",
"ZJ08YDHdcht",
"KY2lzbPs8dA",
"lYTaN7SHpqk",
"pvT6MT4OOya"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for the time to address some of my concerns and the changes to the paper. While these changes improve the quality of the paper, I maintain my original assessment of the paper. ",
" **I'm also curious what would happen if the introspection network is trained on a separate validation set.**\n\... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
4
] | [
"8V8tkrUUVbK",
"-FS65OORU-H",
"lYTaN7SHpqk",
"tDZasCvn5zU",
"pvT6MT4OOya",
"KY2lzbPs8dA",
"ZJ08YDHdcht",
"iclr_2022_in1ynkrXyMH",
"iclr_2022_in1ynkrXyMH",
"iclr_2022_in1ynkrXyMH",
"iclr_2022_in1ynkrXyMH",
"iclr_2022_in1ynkrXyMH"
] |
iclr_2022_o86_622j0sb | Imperceptible Black-box Attack via Refining in Salient Region | Deep neural networks are vulnerable to adversarial examples, even in the black-box setting where the attacker only has query access to the model output. Recent studies have devised successful black-box attacks with high query efficiency. However, such performance often comes at the cost of the imperceptibility of adversarial attacks, which is essential for attackers. To address this issue, in this paper we propose to use segmentation priors for black-box attacks such that the perturbations are limited in the salient region. We find that state-of-the-art black-box attacks equipped with segmentation priors can achieve much better imperceptibility performance with little reduction in query efficiency and success rate. We further propose the Saliency Attack, a new gradient-free black-box attack that can further improve the imperceptibility by refining perturbations in the salient region. Experimental results show that the perturbations generated by our approach are much more imperceptible than the ones generated by other attacks, and are interpretable to some extent. Furthermore, our approach is found to be more robust to detection-based defense, which demonstrates its efficacy as well. | Reject | In this paper, the authors propose to use segmentation priors for black-box attacks such that the perturbations are limited in the salient region. They also find that state-of-the-art black-box attacks equipped with segmentation priors can achieve much better imperceptibility performance with little reduction in query efficiency and success rate. Hence, the authors propose the Saliency Attack, a new gradient-free black-box attack that can further improve the imperceptibility by refining perturbations in the salient region.
The reviewers think that the proposed method is simple and important, and the authors have responded properly to some comments.
However, the reviewers are still not satisfied with the experimental evaluation and comparisons, since the authors can only promise to compare with other ideas and test more models in future work.
In summary, I think the manuscript in its current state cannot be accepted. | train | [
"MQNRDEfFs6d",
"loJwYYhwwIf",
"yV4BwFFlrOs",
"3leYq7WD_no",
"7D9svU5z29t",
"f60zuKwDcWE",
"ZpbRwgNaZnJ",
"DmLL913v21E",
"uZ2TI1tvbIP",
"I3tXlhLuYSt",
"5kzQipa-pdU",
"VHoAqgj4N98",
"s_0iyYVFiYG",
"BaHEQGsYeo",
"I3qabbBLowe"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks very much for your comment and advices. We will consider your suggested work (Croce et al., 2020) and other $L_0$ attacks in future work.",
" Thanks for the further clarifications.\n\nAbout the comparison in terms of $l_0$-norm, on ImageNet it is possible to achieve around 100% of success rate with much ... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
5,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"loJwYYhwwIf",
"7D9svU5z29t",
"I3qabbBLowe",
"f60zuKwDcWE",
"5kzQipa-pdU",
"I3tXlhLuYSt",
"iclr_2022_o86_622j0sb",
"BaHEQGsYeo",
"s_0iyYVFiYG",
"ZpbRwgNaZnJ",
"DmLL913v21E",
"iclr_2022_o86_622j0sb",
"iclr_2022_o86_622j0sb",
"iclr_2022_o86_622j0sb",
"iclr_2022_o86_622j0sb"
] |
iclr_2022_YfFWrndRGQx | Multi-Objective Online Learning | This paper presents a systematic study of multi-objective online learning. We first formulate the framework of Multi-Objective Online Convex Optimization, which encompasses a novel multi-objective dynamic regret in the unconstrained max-min form. We show that it is equivalent to the regret commonly used in the zero-order multi-objective bandit setting and overcomes the problem that the latter is hard to optimize via first-order gradient-based methods. Then we propose the Online Mirror Multiple Descent algorithm with two variants, which computes the composite gradient using either the vanilla min-norm solver or a newly designed $L_1$-regularized min-norm solver. We further derive regret bounds of both variants and show that the $L_1$-regularized variant enjoys a lower bound. Extensive experiments demonstrate the effectiveness of the proposed algorithm and verify the theoretical advantage of the $L_1$-regularized variant. | Reject | This paper looks at a formulation of the online multi-objective optimization problem.
All reviewers agree on the score, 6, which is quite rare but is not really informative; none of them are very excited about the paper, but they all find it interesting.
I have also read it myself. The paper is rather clear and well written. I have three major concerns.
1) I am not fully convinced by the objective R_{MOD}, as it reduces to the dynamic regret in the single-objective problem, and the latter cannot be minimized unless we make a strong stationarity assumption. This is obviously the case here (see Assumption 2). Then the choice of parameters would depend on some "stationarity" quantity (V_T). I am not really enthusiastic about this either.
2) The analysis is rather classical once the problem is reduced to a single-objective one, so it is not really breathtaking. Yet I admit that I quite enjoyed reading about this reduction; the idea is quite neat.
3) Multi-objective online optimization has already been considered in online learning, but the related work section did not really mention it. For instance, Blackwell approachability is such an example [1,2,3] (yet I am not sure that it can cover the Pareto front idea). It would be interesting to see how those approaches compare (notably, online mirror descent has been widely studied in that case).
All in all, I do understand the reviewers, and this paper is certainly borderline, but I do not think it reaches the acceptance bar yet. As a consequence, I would rather recommend rejection this year.
[1] J. Abernethy, P. Bartlett, E. Hazan. Blackwell Approachability and No-Regret Learning are Equivalent. Proceedings of the 24th Annual Conference on Learning Theory, PMLR 19:27-46, 2011.
[2] V. Perchet. Approachability, regret and calibration: Implications and equivalences. Journal of Dynamics & Games, 181-254, 2014.
[3] A. Rakhlin, K. Sridharan, and A. Tewari. Online learning: Beyond regret. Proceedings of the 24th Annual Conference on Learning Theory, PMLR, 19:559–594, 2011. | train | [
"4FauvBG44g3",
"LrdZwbcBy4W",
"QClXhRoiX9i",
"WlJeWHnCnb",
"z-uSIiS5UdJ",
"zmff9zirk5s",
"-vz01nFpccY",
"7bMa2TzbWLG",
"NZVfhPdjMWb",
"B_PMrRHutH4",
"ZYhHxOiCkb",
"i-7SNDgcTi9",
"8HgzjuMiGj",
"l_Z65KNYFhT",
"SHg6mCRb9y"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your response. \n\nI think that the detailed analysis for the static regret, given in Section D, can improve the quality of the paper. Indeed, the fact that this analysis is no longer PSG is indeed quite intriguing, but in essence, we get a bound in $O(\\sqrt{T})$, which is reassuring. For the dynamic ... | [
-1,
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"-vz01nFpccY",
"iclr_2022_YfFWrndRGQx",
"iclr_2022_YfFWrndRGQx",
"iclr_2022_YfFWrndRGQx",
"SHg6mCRb9y",
"iclr_2022_YfFWrndRGQx",
"LrdZwbcBy4W",
"QClXhRoiX9i",
"l_Z65KNYFhT",
"ZYhHxOiCkb",
"iclr_2022_YfFWrndRGQx",
"8HgzjuMiGj",
"iclr_2022_YfFWrndRGQx",
"iclr_2022_YfFWrndRGQx",
"iclr_2022_... |
iclr_2022__xxbJ7oSJXX | Offline Reinforcement Learning with Resource Constrained Online Deployment | Offline reinforcement learning is used to train policies in scenarios where real-time access to the environment is expensive or impossible.
As a natural consequence of these harsh conditions, an agent may lack the resources to fully observe the online environment before taking an action. We dub this situation the resource-constrained setting. This leads to situations where the offline dataset (available for training) can contain fully processed features (using powerful language models, image models, complex sensors, etc.) which are not available when actions are actually taken online.
This disconnect leads to an interesting and unexplored problem in offline RL: Is it possible to use a richly processed offline dataset to train a policy which has access to fewer features in the online environment?
In this work, we introduce and formalize this novel resource-constrained problem setting. We highlight the performance gap between policies trained using the full offline dataset and policies trained using limited features.
We address this performance gap with a policy transfer algorithm which first trains a teacher agent using the offline dataset where features are fully available, and then transfers this knowledge to a student agent that only uses the resource-constrained features. To better capture the challenge of this setting, we propose a data collection procedure: Resource Constrained-Datasets for RL (RC-D4RL). We evaluate our transfer algorithm on RC-D4RL and the popular D4RL benchmarks and observe consistent improvement over the baseline (TD3+BC without transfer). | Reject | The authors propose the resource constrained offline RL problem where the offline dataset contains extra features that are not available online. The goal is to use these extra features to improve performance during deployment. They propose a simple modification to TD3-BC in the continuous control setting and a simple modification to CQL in the discrete setting. They evaluate their proposed approaches on D4RL, RC-D4RL (a novel dataset that they introduce for resource constrained offline RL), Atari, and a proprietary real-life Ads problem.
Initial reviews identified the following concerns:
* While the exact problem is novel, the idea of having access to privileged features at training time that are not available at deployment has been explored in supervised learning and online RL. It was not clear to the reviewers how the offline RL setting interacts specifically with the privileged features to produce an interesting setting.
* The baseline simply trains on the limited feature set. Unsurprisingly, using the extra features can improve performance. In light of the previous point, reviewers asked for more substantial baselines, suggesting BC on the teacher and predicting the missing features as some possibilities.
* The set of tasks was too limited.
The authors provided a substantial response:
* Experiments on Ads data
* Experiments on Atari with CQL as the base algorithm
* Additional baselines on RC-D4RL HalfCheetah-v2 datasets (BC on teacher and predictive)
* Additional analysis
I commend the authors on the hard work they did preparing this response. It is quite substantial and does improve the paper significantly. However, reviewers and I still have a number of concerns:
* The additional baselines are appreciated, however, the results are mixed. The additional baselines are a step in the right direction, but they need to be evaluated beyond a single dataset. It is hard to evaluate the results without reasonable baselines. I agree and think that even though the specific problem is novel, the idea of transfer learning is not, so it is reasonable to require that we have more extensive baselines. Furthermore, while the authors argue that their method has an edge on the more practical dataset, that is based on a very limited evaluation. Probing this further is important.
* The CQL modification is quite different than the TD3+BC modification. The performance of the modification for CQL is not significantly better than CQL. What should we make of this?
* For the Ads dataset, all hyperparameter settings except Transfer(0, 1) show the same performance. This seems surprising as even Transfer(0.1, 0.9) shows no difference. Finally, Transfer(0, 1) beating Transfer(1, 0) 7/10 times is not statistically significant.
At this time, the paper is not ready for publication, but the paper is moving in the right direction and I encourage the authors to submit a revised version to a future venue. | val | [
"x4T-B2fXMPd",
"8qKq_i293X",
"BxjxJYCrfbP",
"UGp2Hc-nBpy",
"QcETN6VHomG",
"HnZLfLcVaZ",
"Y-W9w8S-0lA",
"h83JGJdo6me",
"c0MZaGuPjC6",
"6NMd--j_1zm",
"PMEGd1D7K1D",
"WSd8bAJ6DKp",
"Y8ck-s2D5Qg",
"_zz-7Wes_ML",
"B5wx92Zj7hd",
"oM0Ff7z6IJh",
"3UpR45XeNE",
"zx3gzc9aLW",
"XpZyAIyW2Sv",... | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",... | [
" We thank the reviewer for the comments and updating the score of the paper. However, we find it unfortunate that the reviewer still rates it a weak reject. \n\n> results are mixed...\n\nWe do agree that there are cases where the True-BC agent outperforms the proposed algorithm. However, the gap between the teache... | [
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"Y-W9w8S-0lA",
"QcETN6VHomG",
"UGp2Hc-nBpy",
"B5wx92Zj7hd",
"6NMd--j_1zm",
"iclr_2022__xxbJ7oSJXX",
"Y8ck-s2D5Qg",
"c0MZaGuPjC6",
"_zz-7Wes_ML",
"hY787MjW3A",
"iclr_2022__xxbJ7oSJXX",
"_zz-7Wes_ML",
"HnZLfLcVaZ",
"FijF0zuE4Y-",
"WSd8bAJ6DKp",
"FKpW4QdwOV0",
"iclr_2022__xxbJ7oSJXX",
... |
iclr_2022_CQzlxFVcmw1 | Message Function Search for Hyper-relational Knowledge Graph | Recently, the hyper-relational knowledge graph (HKG) has attracted much attention due to its widespread existence and potential applications. The pioneer works have adapted powerful graph neural networks (GNNs) to embed HKGs by proposing domain-specific message functions. These message functions for HKG embedding are utilized to learn relational representations and capture the correlation between entities and relations of HKGs. However, these works often manually design and fix structures and operators of message functions, which makes them difficult to handle complex and diverse relational patterns in various HKGs (i.e., data patterns). To overcome these shortcomings, we plan to develop a method to dynamically search suitable message functions that can adapt to patterns of the given HKG. Unfortunately, it is not trivial to design an expressive search space and an efficient search algorithm to make the search effective and efficient. In this paper, we first unify a search space of message functions that enables both structures and operators to be searchable. Especially, the classic KG/HKG models and message functions of existing GNNs can be instantiated as special cases in the proposed search space. Then, we design an efficient search algorithm to search the message function and other GNN components for any given HKGs. Through empirical study, we show that the searched message functions are data-dependent, and can achieve leading performance in link/relation prediction tasks on benchmark data sets. | Reject | The paper studies neural architecture search for hyper-relational knowledge graphs (HKGs). A search space is put-forth, and it is searched with a differentiable search algorithm. The paper is technically strong. However, there are some concerns about the narrow scope of the problem/solution, given that other more general formulations have also been studied. | train | [
"-B9laf7TQVg",
"kGeq0_XZ_Y",
"mPozZ4v8kNU",
"8LXpIlrVE5L",
"U1uwsqxqyP",
"UQxij5HlgeP",
"tSFOWxh4jRx",
"tCh2dpuKWrD",
"K8fePE8fic-"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"In this paper, the authors propose to conduct neural architecture search (NAS) for hyper-relational knowledge graphs (HKGs). Compared with normal graphs, HKGs can better model the complex relationships between different entities. Specifically, a novel search space is proposed inspired by recent message-passing GNN... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2022_CQzlxFVcmw1",
"mPozZ4v8kNU",
"K8fePE8fic-",
"-B9laf7TQVg",
"kGeq0_XZ_Y",
"tSFOWxh4jRx",
"tCh2dpuKWrD",
"iclr_2022_CQzlxFVcmw1",
"iclr_2022_CQzlxFVcmw1"
] |
iclr_2022_fJIrkNKGBNI | Effective Polynomial Filter Adaptation for Graph Neural Networks | Graph Neural Networks (GNNs) exploit signals from node features and the input graph topology to improve node classification task performance. However, these models tend to perform poorly on heterophilic graphs, where connected nodes have different labels. Recently proposed GNNs work across graphs having varying levels of homophily. Among these, models relying on polynomial graph filters have shown promise. We observe that solutions to these polynomial graph filter models are also solutions to an overdetermined system of equations. It suggests that in some instances, the model needs to learn a reasonably high order polynomial. On investigation, we find the proposed models ineffective at learning such polynomials due to their designs. To mitigate this issue, we perform an eigendecomposition of the graph and propose to learn multiple adaptive polynomial filters acting on different subsets of the spectrum. We theoretically and empirically show that our proposed model learns a better filter, thereby improving classification accuracy. We study various aspects of our proposed model including, dependency on the number of eigencomponents utilized, latent polynomial filters learned, and performance of the individual polynomials on the node classification task. We further show that our model is scalable by evaluating over large graphs. Our model achieves performance gains of up to 10% over the state-of-the-art models and outperforms existing polynomial filter-based approaches in general. | Reject | This paper proposes to apply a piece-wise polynomial filter in the spectral domain corresponding to the graph convolution to enhance the model expressivity of graph neural networks. The effectiveness of the proposed model is investigated through numerical experiments, and it is shown that the method achieves fairly good performance.
This paper gives a natural extension of the usual adaptive Generalized PageRank approaches to more expressive piece-wise polynomial filters. However, the reviewers are not enthusiastic about this paper, mainly because of the following concerns: (1) Since it requires diagonalization of the aggregation operator, the method incurs a much larger computational burden than the usual polynomial filters, which prevents it from being applied to data of much larger size. (2) The choice of the filter could be investigated further; in particular, the complexity-expressivity trade-off (in other words, the bias-variance trade-off) could be discussed more, for example, through theoretical work.
In summary, the paper does not seem mature enough to be published at the ICLR conference. | test | [
"4Nt6KFcHQCg",
"a_xJtE6BvEW",
"Ltc238NWOKA",
"WLR7dxswbZt",
"8l7PY7M376v",
"Y-3VlzyZ2lb",
"Z-Md7dOH7Ef",
"HSBv0eFIGcH",
"ZIVaCKpFC-U",
"fgbbgEK7Vg",
"7ZzZr9W0Sd4",
"UrQCRwNDcHR",
"pXM5cuKEyFk",
"8YofuXz6EGu",
"kfCzU2-ZGWN",
"NSDUlpCovwI",
"8TKhGJVO9A",
"WftNb5AFoPs",
"yJscreAgEZM... | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official... | [
" Dear Reviewers, \n\nWe thank you for reviewing our work. Additionally, we are thankful to **Reviewers eu6Q, SUfF, and kGGE** for participating in rebuttal phase and providing feedback. We believe we have addressed all the raised concerns thoroughly and have updated the appendix section to reflect them. We promise... | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
... | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
... | [
"iclr_2022_fJIrkNKGBNI",
"iclr_2022_fJIrkNKGBNI",
"K9cX_2nXeq",
"iclr_2022_fJIrkNKGBNI",
"Z-Md7dOH7Ef",
"Z-Md7dOH7Ef",
"HSBv0eFIGcH",
"pXM5cuKEyFk",
"voAyMHFhvP5",
"8y9NkXMOq0y",
"fgbbgEK7Vg",
"kfCzU2-ZGWN",
"VI_0sazp52I",
"LwBY8yIf8Aq",
"eM-IfAI8j7D",
"a_xJtE6BvEW",
"8y9NkXMOq0y",
... |
iclr_2022_a61qArWbjw_ | Scalable multimodal variational autoencoders with surrogate joint posterior | To obtain a joint representation from multimodal data in variational autoencoders (VAEs), it is important to infer the representation from arbitrary subsets of modalities after learning. A scalable way to achieve this is to aggregate the inferences of each modality as experts. A state-of-the-art approach to learning this aggregation of experts is to encourage all modalities to be reconstructed and cross-generated from arbitrary subsets. However, this learning may be insufficient if cross-generation is difficult. Furthermore, to evaluate its objective function, exponential generation paths concerning the number of modalities are required. To alleviate these problems, we propose to explicitly minimize the divergence between inferences from arbitrary subsets and the surrogate joint posterior that approximates the true joint posterior. We also proposed using a gradient origin network, a deep generative model that learns inferences without using an inference network, thereby reducing the need for additional parameters by introducing the surrogate posterior. We demonstrate that our method performs better than existing scalable multimodal VAEs in inference and generation.
| Reject | PAPER: This paper proposes a method to learn joint representations from potentially missing data when (1) cross-generation may be difficult, and/or (2) the number of modalities is large. This is achieved by minimizing the divergence between a surrogate joint posterior and inferences from arbitrary subsets.
DISCUSSION: The reviews and discussion brought up many relevant issues and concerns. The authors submitted a revised version that improved the clarity of the paper and added an important experiment with PolyMNIST. In their responses, the authors also addressed some misunderstanding about JMVAE-KL. The comparison with a relatively similar work, Sutter et al., 2020, was only mentioned in the related work, with no direct comparisons. Also, the authors did not directly address the issue of studying tradeoffs between the quality of generated samples and their coherence. It should also be noted that the advantage of the proposed SMVAE is marginal when the number of modalities increases, for the latent representation experiments on PolyMNIST.
SUMMARY: Enthusiasm for this paper was not unanimous. The reviewers raised some concerns about its differentiation from prior work, such as Sutter et al., 2020, and about the need for a more detailed analysis of the tradeoffs. While the clarity of the paper improved during the revision, a good number of issues remained. I am leaning towards rejection. | train | [
"qJpk6zKVZY",
"y7SLldjDyPv",
"-vKIlMrQEp6",
"kzSR7jW2YhJ",
"3dgsPrIYC7",
"AinfWwZjTnt",
"-0TSxuF3RlT",
"4HMmGAc4u8R",
"uU0t3z5Hmyt",
"v9jYVvRTlz7",
"eOl2Qphhet",
"lyp2OizwpSN"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We are grateful for your reply. However, we would disagree with your statement that the JMVAE's KL divergence terms are included in the ELBO. According to the original JMVAE paper [1], the objective function (Equation (4)) is the ELBO with multimodal input *plus* the KL divergence terms between the all-modality p... | [
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
3,
6,
5
] | [
-1,
-1,
-1,
2,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"y7SLldjDyPv",
"4HMmGAc4u8R",
"-0TSxuF3RlT",
"iclr_2022_a61qArWbjw_",
"iclr_2022_a61qArWbjw_",
"v9jYVvRTlz7",
"kzSR7jW2YhJ",
"eOl2Qphhet",
"lyp2OizwpSN",
"iclr_2022_a61qArWbjw_",
"iclr_2022_a61qArWbjw_",
"iclr_2022_a61qArWbjw_"
] |