paper_id: string (length 19–21)
paper_title: string (length 8–170)
paper_abstract: string (length 8–5.01k)
paper_acceptance: string (18 classes)
meta_review: string (length 29–10k)
label: string (3 classes)
review_ids: list
review_writers: list
review_contents: list
review_ratings: list
review_confidences: list
review_reply_tos: list
iclr_2022_dHd6pU-8_fF
L-SR1 Adaptive Regularization by Cubics for Deep Learning
Stochastic gradient descent and other first-order variants, such as Adam and AdaGrad, are commonly used in the field of deep learning due to their computational efficiency and low-storage memory requirements. However, these methods do not exploit curvature information. Consequently, iterates can converge to saddle points and poor local minima. To avoid these points, directions of negative curvature can be utilized, which requires computing the second-derivative matrix. In Deep Neural Networks (DNNs), the number of variables ($n$) can be of the order of tens of millions, making the Hessian impractical to store ($\mathcal{O}(n^2)$) and to invert ($\mathcal{O}(n^3)$). Alternatively, quasi-Newton methods compute Hessian approximations that do not have the same computational requirements. Quasi-Newton methods re-use previously computed iterates and gradients to compute a low-rank structured update. The most widely used quasi-Newton update is L-BFGS, which guarantees a positive semi-definite Hessian approximation, making it suitable in a line search setting. However, the loss functions in DNNs are non-convex, so the Hessian is potentially non-positive definite. In this paper, we propose using a Limited-Memory Symmetric Rank-1 quasi-Newton approach, which allows for indefinite Hessian approximations, enabling directions of negative curvature to be exploited. Furthermore, we use a modified Adaptive Regularization by Cubics approach, which generates a sequence of cubic subproblems that have closed-form solutions. We investigate the performance of our proposed method on autoencoders and feed-forward neural network models and compare our approach to state-of-the-art first-order adaptive stochastic methods as well as L-BFGS.
Reject
This paper presents an adaptive gradient method for neural net training inspired by L-BFGS. All of the reviewers recommend rejection. They raise concerns about the amount of novelty, the clarity of the writing, and the experimental comparisons. I encourage the authors to take the reviewers' comments into account and improve the submission for the next cycle.
val
[ "WTF2kXGfwy", "cXAKcBQSZ5S", "yMRQM_MxYF", "Qu3pM8gUb3A", "TxEjCwQxPr", "UZfUxqpYAoF", "_Y0vFRy2C14", "3r0muNC4K-b", "27PXFiPBBpe", "vDfjtFPt0_p", "RWMYDUhKnQ", "uhArid_ksj_" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for their rebuttal and clearly, this paper needs a lot more work to be ready for publication. Therefore I will keep my rating. I encourage the authors to resubmit their paper when it is ready.", " I thank the authors for their comments. These partially address my concerns, but the specific ...
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 3, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "iclr_2022_dHd6pU-8_fF", "iclr_2022_dHd6pU-8_fF", "UZfUxqpYAoF", "iclr_2022_dHd6pU-8_fF", "27PXFiPBBpe", "uhArid_ksj_", "RWMYDUhKnQ", "vDfjtFPt0_p", "iclr_2022_dHd6pU-8_fF", "iclr_2022_dHd6pU-8_fF", "iclr_2022_dHd6pU-8_fF", "iclr_2022_dHd6pU-8_fF" ]
iclr_2022_ArY-zkyHI_l
Resilience to Multiple Attacks via Adversarially Trained MIMO Ensembles
While ensemble methods have been widely used for robustness against random perturbations (\ie the average case), ensemble approaches for robustness against adversarial perturbations (\ie the worst case) have remained elusive despite multiple prior attempts. We show that ensemble methods can improve adversarial robustness to multiple attacks if the ensemble is \emph{adversarially diverse}, which is defined by two properties: 1) the sub-models are adversarially robust themselves and yet 2) adversarial attacks do not transfer easily between sub-models. While at first glance, creating such an ensemble would seem computationally expensive, we demonstrate that an adversarially diverse ensemble can be trained with minimal computational overhead via a Multiple-Input Multiple-Output (MIMO) model. Specifically, we propose to train a MIMO model with adversarial training ({\emph{MAT}}), where each sub-model can be trained on a different attack type. When computing gradients for generating adversarial examples during training, we use the gradient with respect to the ensemble objective. This has a two-fold benefit: 1) it only requires 1 backward pass and 2) the cross-gradient information between the models promotes robustness against transferable attacks. We empirically demonstrate that {\emph{MAT}} produces an ensemble of models that is adversarially diverse and significantly improves performance over single models or vanilla ensembles while being comparable to previous state-of-the-art methods. On MNIST, we obtain $99.5\%$ clean accuracy and ($88.6\%, 57.1\%,71.6\%$) against $(\ell_\infty, \ell_2, \ell_1)$ attacks, and on CIFAR10, we achieve $79.7\%$ clean accuracy and ($47.9\%, 61.8\%,47.6\%$) against $(\ell_\infty, \ell_2, \ell_1)$ attacks, which are comparable to previous state-of-the-art methods.
Reject
This paper proposes a new ensemble training method for improving adversarial robustness to multiple attacks (e.g., $\ell_2$, $\ell_1$ and $\ell_\infty$). Specifically, authors adopt the recent Multi-Input Multi-Output (MIMO) ensemble architecture for computational efficiency. Then, the authors construct the adversarial examples using the outputs of multiple attacks simultaneously. With these examples, standard adversarial training is conducted on MIMO ensemble. All reviewers are on the negative side. AC agrees with reviewers’ concerns on limited novelty and insufficient empirical evaluation. AC also thinks that the improvement is not that significant compared to the existing method, especially concerning the real-world dataset. Overall, AC recommends rejection.
val
[ "ThvV6mor86", "g_1XeGa6rcY", "ib_ti3ujmD7", "o-KmuVb40ln", "HEzr3g6XZzP", "OhoMOpsiV-o", "KRB5T6i5mC", "qHw2nNEgtTb", "inMakA3c1W5", "hxFwwKRsrN7", "d4JJkQ9H1fP", "NfNiNhD8hU", "BJeP507Mujc", "HnCQOqh1Ow", "8-_FDvR9PdC", "DI808Z7wgMX" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for their thorough responses. \n\nWhile the authors have improved their submission, some of my main concerns remain, for example:\n1. Given that the paper is primarily empirical (and that MNIST is considered by recent AT papers somewhat toy setup), adding results on a different d...
[ -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 5 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4 ]
[ "HEzr3g6XZzP", "o-KmuVb40ln", "iclr_2022_ArY-zkyHI_l", "DI808Z7wgMX", "8-_FDvR9PdC", "8-_FDvR9PdC", "DI808Z7wgMX", "DI808Z7wgMX", "HnCQOqh1Ow", "HnCQOqh1Ow", "ib_ti3ujmD7", "8-_FDvR9PdC", "8-_FDvR9PdC", "iclr_2022_ArY-zkyHI_l", "iclr_2022_ArY-zkyHI_l", "iclr_2022_ArY-zkyHI_l" ]
iclr_2022_ab7fanwXWu
Accelerating Optimization using Neural Reparametrization
We tackle the problem of accelerating certain optimization problems related to steady states in ODE and energy minimization problems common in physics. We reparametrize the optimization variables as the output of a neural network. We then find the conditions under which this neural reparameterization could speed up convergence rates during gradient descent. We find that to get the maximum speed up the neural network needs to be a special graph convolutional network (GCN) with its aggregation function constructed from the gradients of the loss function. We show the utility of our method on two different optimization problems on graphs and point-clouds.
Reject
This paper proposes speeding up certain optimization problems common in physics by reparameterizing their parameters as the output of a graph neural network. The reviewers appreciate the idea, but are not convinced enough to recommend the paper for acceptance. They point out the following weaknesses: * The method amounts to linear preconditioning, and hence it's reasonable to expect a fairly complete comparison to the many linear preconditioning approaches that have been proposed previously. The reviewers are not satisfied with the currently provided comparison. * The main idea is not presented clearly enough. In particular, it's not obvious the proposed method is best described as neural reparameterization, since it seems to amount to linear preconditioning. * The experiments are not persuasive enough: the presented problems may not be relevant to all of the target audience of ICLR, and the experimental evaluation does not seem sufficiently exhaustive. The suggested areas of improvement provided by the reviewers seem reasonable to me; I therefore recommend not accepting the paper in its current form. To make the paper more accessible and appealing, the authors may consider rewriting the paper to more closely match the perspective taken by the reviewers, and to provide a more thorough comparison to the previous approaches and the existing literature.
train
[ "MubVrM-UBC-", "fu80nnjShj2", "9QspbYnoQMu", "snMM-JavQXA", "M-2eYyCPtJs", "1m6uPFm0qi3", "4zqtK3BAzX_", "_D5MPrcQ2F", "rwp5lBPYtEK", "PQZfcid-wxU", "QnLh8vaBQ0O", "0FTrk1mPD0V", "u8w1ug2H_y", "xdpgZPpFM4T" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for clarifying my questions; I find these answers satisfactory, so I will keep my score as-is. ", " I'd like to thank the authors for their very detailed responses and clarifications, that are very helpful for my understanding of their work. While to my mind the current version of this paper is not qu...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 1, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 3 ]
[ "M-2eYyCPtJs", "9QspbYnoQMu", "iclr_2022_ab7fanwXWu", "xdpgZPpFM4T", "u8w1ug2H_y", "QnLh8vaBQ0O", "QnLh8vaBQ0O", "0FTrk1mPD0V", "QnLh8vaBQ0O", "0FTrk1mPD0V", "iclr_2022_ab7fanwXWu", "iclr_2022_ab7fanwXWu", "iclr_2022_ab7fanwXWu", "iclr_2022_ab7fanwXWu" ]
iclr_2022_Xb2YyVApEj6
MaiT: integrating spatial locality into image transformers with attention masks
Though image transformers have shown competitive results with convolutional neural networks in computer vision tasks, the lack of inductive biases such as locality still poses problems in terms of model efficiency, especially for embedded applications. In this work, we address this issue by introducing attention masks to incorporate spatial locality into the self-attention heads of transformers. Local dependencies are captured with masked attention heads, along with global dependencies captured by the original unmasked attention heads. With Masked attention image Transformer – MaiT, top-1 accuracy increases by up to 1.0\% compared to DeiT, without extra parameters, computation, or external training data. Moreover, attention masks regulate the training of attention maps, which facilitates convergence and improves the accuracy of deeper transformers. Masked attention heads guide the model to focus on local information in early layers and promote diverse attention maps in later layers. Deep MaiT improves the top-1 accuracy by up to 1.5\% compared to CaiT with fewer parameters and fewer FLOPs. Encoding locality with attention masks requires no extra parameters or structural changes, and thus it can be combined with other techniques for further improvement in vision transformers.
Reject
The paper presents a masking strategy to introduce the locality bias into vision transformers. The experiments show the effectiveness of considering such an inductive bias. The reviewers agreed on the importance of the research question and the simplicity of the algorithm. MaiT also has a straightforward sparse attention extension that runs at a complexity of $O(n)$ rather than $O(n^2)$. The reviewers also listed some common concerns about the paper: (1) The novelty of such a masking approach is relatively low. I don't think the ALS or the soft masking adds much contribution to that. Similar ideas have been explored in a number of papers. (2) Reviewers also raise concerns about the experiments. Inductive biases often help more in small settings (fewer parameters and FLOPs) and gain less in large settings. When comparing with the SOTA models, I think this is basically the trend shown in the paper as well. While I appreciate the authors' efforts in including more comparisons, I have to say I really don't think the performance gain is significant enough, especially in the large settings. Needless to say, there are many other ways of encoding the same locality bias into the model. Based on the reviewers' judgements and my own opinion, I therefore recommend rejection of this paper.
train
[ "lEvaWNx23bt", "u-zU12lbtdw", "9q5_hRGJ_HF", "_rgkw3lWaYm", "NIxYMJ0cN97", "qh3T2JINoqs", "54XSDA6mO6_", "uK2CQLY3HS", "0ct5koujqpE" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " 1. **Comparison with Swin**: Both Swin and MaiT extract spatial locality from patches within a window, however they are different in the follow aspects: \n* Patches outside the window still contribute to attention weights for MaiT. \n* Within each layer, Swin uses larger **non-overlapping windows** (7x7) while Ma...
[ -1, -1, -1, -1, -1, 5, 5, 1, 3 ]
[ -1, -1, -1, -1, -1, 3, 5, 5, 5 ]
[ "54XSDA6mO6_", "0ct5koujqpE", "uK2CQLY3HS", "qh3T2JINoqs", "iclr_2022_Xb2YyVApEj6", "iclr_2022_Xb2YyVApEj6", "iclr_2022_Xb2YyVApEj6", "iclr_2022_Xb2YyVApEj6", "iclr_2022_Xb2YyVApEj6" ]
iclr_2022_jT5vnpqlrSN
GIR Framework: Learning Graph Positional Embeddings with Anchor Indication and Path Encoding
The majority of existing graph neural networks (GNNs) following the message passing neural network (MPNN) pattern have limited power in capturing position information for a given node. To solve such problems, recent works exploit positioning nodes with selected anchors, mostly in a process that first explicitly assigns distance information and then performs message passing encoding. However, this two-stage strategy may ignore potentially useful interaction between intermediate results of the distance computing and encoding stages. In this work, we propose a novel framework which follows the anchor-based idea and aims at conveying distance information implicitly along the MPNN message passing steps for encoding position information, node attributes, and graph structure in a more flexible way. Specifically, we first leverage a simple anchor indication strategy to enable the position-aware ability for well-designed MPNNs. Then, following this strategy, we propose the Graph Inference Representation (GIR) model, which acts as a generalization of MPNNs with a more specific propagation path design for position-aware scenarios. Meanwhile, we theoretically and empirically explore the ability of the proposed framework to obtain position-aware embeddings, and experimental results show that our proposed method generally outperforms previous position-aware GNN methods.
Reject
This paper proposes to encode positions of nodes in graphs by anchor-based GNN with customized message passing steps. All reviewers raised significant concerns on this paper, including novelty of the message passing steps, experiments, writing and clarity, etc. The authors have actively responded to reviewer comments, but many of the concerns are still not addressed. Thus, the paper needs some work in order to be competitive.
train
[ "grHATYhRDfx", "ci79E39KNj", "LRVOdNhF-TH", "Nx1tc1Q8shx", "eLHJ-OCx2Zy", "09Zfrdj4miY", "5UsXaF1-fet", "mDfC-KpOn82", "lEP-Kofx7uL", "EGf0RC5veva", "ZD8y9c-8kMt", "IBBjjONDAp", "EIWJkjD72Y5", "Upbya72-K8", "IeC4uNsjmYr" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the reply, here we address the concerns.\n\nAs presented in Section 4.2, the GIR framework is composed of message propagation paths with a lower bound as BFS from anchors, and the capability of indicating anchors. Though a simple strategy, the MPNN with anchor ID labeling (MPNN-A) is an ...
[ -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 5 ]
[ -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "ci79E39KNj", "iclr_2022_jT5vnpqlrSN", "eLHJ-OCx2Zy", "iclr_2022_jT5vnpqlrSN", "lEP-Kofx7uL", "iclr_2022_jT5vnpqlrSN", "ci79E39KNj", "ci79E39KNj", "IeC4uNsjmYr", "Upbya72-K8", "EIWJkjD72Y5", "iclr_2022_jT5vnpqlrSN", "iclr_2022_jT5vnpqlrSN", "iclr_2022_jT5vnpqlrSN", "iclr_2022_jT5vnpqlrSN...
iclr_2022_-Nf6TikpjQ
Multi-agent Performative Prediction: From Global Stability and Optimality to Chaos
The recent framework of performative prediction is aimed at capturing settings where predictions influence the target/outcome they try to predict. In this paper, we introduce a natural multi-agent version of this framework, where multiple decision makers try to predict the same outcome. We showcase that such competition can result in interesting phenomena by proving the possibility of phase transitions from stability to instability and eventually chaos. Specifically, we present settings of multi-agent performative prediction where, under sufficient conditions, the dynamics lead to global stability and optimality. In the opposite direction, when the agents are not sufficiently cautious in their learning/update rates, we show that instability and in fact formal chaos are possible. We complement our theoretical predictions with simulations showcasing the predictive power of our results.
Reject
In this paper, the authors extend the performative prediction framework of Perdomo et al. (2020) to a multi-agent, game-theoretic setting, and they examine how and when multi-agent performative learning may lead to performative stability/optimality. The authors' results and contributions can be summarized as follows: - They consider a multi-agent location-scale distribution map with parameters constrained in a simplex, and they study the dynamics of an exponentiated gradient descent algorithm (EGDA for short) inspired by Kivinen and Warmuth (1997). - If the learning rate is small enough, the authors show that EGDA converges to a performatively stable point (under the same assumptions that guarantee existence of a convex potential). - On the other hand, if the learning rate is large, the algorithm behaves chaotically. The reviewers' initial assessment was mixed, but after the authors' rebuttal, some concerns were partially addressed and the scores of the paper were upgraded to borderline positive. On a point-by-point basis, the authors' result on the convergence of EGDA with a small learning rate was appreciated by the reviewers, but it was not otherwise deemed significant enough relative to existing convergence results for gradient methods. Instead, most of the discussion centered on the authors' result on chaos (Theorem 4.6), which was viewed as the most significant contribution of the paper. However, continued discussion between committee members revealed that this result follows directly from Theorem 3.11 and Corollary 3.12 of the arxiv preprint "A family of chaotic maps from game theory" by T. Chotibut, F. Falniowski, M. Misiurewicz, and G. Piliouras <https://arxiv.org/abs/1807.06831>, which is not discussed in the paper. [As was pointed out, the update map (7) of the paper coincides with the update rule (7) of the arxiv preprint, and the proof techniques are likewise identical.] 
This overlap with previous work was considered a "big omission" and it pushed the paper below the acceptance threshold. In the end, the paper was not supported by any of the reviewers, so a "reject" recommendation was made.
train
[ "p2sYdgV0Vgn", "KcAN9i4I6Kx", "c1cZg0ARZ_i", "A90nBmg6Hm8", "lfRVH3VjP3f", "Ia8gV9vAK67", "lPcd4uGGCWB", "MN7GKmpBdC3" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper studied the different behaviours of using exponentiated gradient descent (Def. 2.3) in linear regression with different learning rates. The setting is called performative prediction, which can be viewed as a special case of reinforcement learning (after the model makes a prediction, the environment retu...
[ 6, 6, -1, -1, -1, -1, -1, 6 ]
[ 4, 3, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2022_-Nf6TikpjQ", "iclr_2022_-Nf6TikpjQ", "A90nBmg6Hm8", "KcAN9i4I6Kx", "c1cZg0ARZ_i", "p2sYdgV0Vgn", "MN7GKmpBdC3", "iclr_2022_-Nf6TikpjQ" ]
iclr_2022_hk3Cxc2laT-
Clustered Task-Aware Meta-Learning by Learning from Learning Paths
To enable effective learning of new tasks with only a few samples, meta-learning acquires common knowledge from the existing tasks with a globally shared meta-learner. To further address the problem of task heterogeneity, recent developments balance between customization and generalization by incorporating task clustering to generate the task-aware modulation to be applied on the global meta-learner. However, these methods learn task representation mostly from the features of input data, while the task-specific optimization process with respect to the base-learner model is often neglected. In this work, we propose a Clustered Task-Aware Meta-Learning (CTML) framework with task representation learned from its own learning path. We first conduct a rehearsed task learning from the common initialization, and collect a set of geometric quantities that adequately describes this learning path. By inputting this set of values into a meta path learner, we automatically abstract a path representation optimized for the downstream clustering and modulation. To further save the computational cost incurred by the additional rehearsed learning, we devise a shortcut tunnel to directly map between the path and feature cluster assignments. Extensive experiments on two real-world application domains, few-shot image classification and cold-start recommendation, demonstrate the superiority of CTML compared to state-of-the-art baselines.
Reject
This paper proposed a cluster-based task-aware meta-learning (CTML) approach with task representation learned from its own learning path. Based on the prior work of feature-based task characterization, it integrates the rehearsed task gradient descent trajectory into the task representation, and further improves computational efficiency by learning a separate network to estimate the rehearsed task-trajectory characterization from the feature representation. Experiments were conducted on few-shot image classification (Meta-Dataset and miniImageNet) and cold-start recommendation tasks. Reviewers had raised various concerns about the work, including technical novelty, the shortcut-tunnel assumption, empirical comparison, the scalability issue, more ablations for in-depth analysis, etc. The reviewers and AC appreciate the authors for putting good effort into the rebuttal by replying to the review questions carefully and making changes to improve their experiments and paper. Overall, this paper is a borderline case, where reviewers agree on some clear merits (well-written, easy to follow, good execution of an interesting idea with code provided, etc.). Despite the improvements during the rebuttal, some major concerns about the weaknesses still remain (e.g., technical novelty, more convincing justification of the assumption, significance of empirical gains). Therefore, I cannot recommend it for acceptance in its current form, but I hope to see it accepted in the near future after these issues are fully addressed.
train
[ "tyCx-bfDr7W", "3wOCKNFQuY", "LziTBOKwEC1", "Fg6Yw8FjMj7", "t2wGsxIT_Fi", "gUyr9EYMr5r", "reh76LYWo9D", "pzvtdOR-fEn", "MVeZYJPlIAD", "Fzo9v-kTOS1", "loHyslb1SZd", "Ng3M-3ts2h", "DOrZUxT4Hlw", "3DqFbG0luKu", "DbIVDaHZlBd", "VulOQ_ZQXnQ", "ZLKDM18dd5h", "PFhXX6GP1yM", "in25i0VGOma...
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", ...
[ " Thank you for the additional results, these are indeed more convincing for the fine-tuning baselines.\n\nRE: Meta-Dataset, which architecture was used for these experiments? If it's resnet-12, perhaps simply switching to resnet-18 will improve further. It would be great to also report the performance on all datas...
[ -1, -1, -1, -1, -1, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ -1, -1, -1, -1, -1, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4 ]
[ "LziTBOKwEC1", "Fzo9v-kTOS1", "Fzo9v-kTOS1", "t2wGsxIT_Fi", "in25i0VGOma", "iclr_2022_hk3Cxc2laT-", "iclr_2022_hk3Cxc2laT-", "gUyr9EYMr5r", "Ofi5eEN-2DQ", "loHyslb1SZd", "Ng3M-3ts2h", "DOrZUxT4Hlw", "3DqFbG0luKu", "DbIVDaHZlBd", "PFhXX6GP1yM", "PFhXX6GP1yM", "iclr_2022_hk3Cxc2laT-", ...
iclr_2022_a3hQPNqIFk6
Brittle interpretations: The Vulnerability of TCAV and Other Concept-based Explainability Tools to Adversarial Attack
Methods for model explainability have become increasingly critical for testing the fairness and soundness of deep learning. A number of explainability techniques have been developed which use a set of examples to represent a human-interpretable concept in a model's activations. In this work we show that these explainability methods can suffer the same vulnerability to adversarial attacks as the models they are meant to analyze. We demonstrate this phenomenon on two well-known concept-based approaches to the explainability of deep learning models: TCAV and faceted feature visualization. We show that by carefully perturbing the examples of the concept that is being investigated, we can radically change the output of the interpretability method, e.g. showing that stripes are not an important factor in identifying images of a zebra. Our work highlights the fact that in safety-critical applications, there is a need for security around not only the machine learning pipeline but also the model interpretation process.
Reject
The work demonstrates that adversarially perturbing inputs can change the output of concept based explainability tools. Reviewers generally agreed that the writing was clear and the experiments were easily understood. Regarding novelty, reviewers noted that there are several existing works which study the adversarial robustness of explainability tools (one even has experiments specifically on concept based explainability tools). As a result, there is not much novelty in the finding that concept based explainability tools are sensitive to adversarial perturbation. Regarding the technical contribution of the algorithm, it is expected that standard optimization approaches (e.g. PGD) would be sufficient to break concept based explainability tools so there is not a clear technical challenge being solved in the work. The work could be improved by refocusing the robustness analysis to derive new insights regarding the behavior of concept based explainability tools. In doing so, it would be beneficial to deemphasize the claims regarding novel security concerns---these methods don't even work reliably in non-adversarial settings, as evidenced by poor out-of-distribution robustness. It is expected that performance will be even worse under adversarial settings.
train
[ "yHyKUBlZ555", "gUZOJveEvz7", "QYFtIufPOL4", "0vy3WG9-6bD", "ajHk5hGF4YY", "5pUs6QY24Oj", "9L_uUy_Bvxf", "cRH73bM9Vrd", "DkL-AetRZv-", "_uxTwWOiL44", "p9ehWvu3Z4", "eQTowh91ojN", "0Rwy9j1UYFI" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We thank you for your follow-up comments and your time. We include responses below, as well as highlight our changes to section 3.1.\n\n> - As mentioned previously and also pointed out by Reviewer 6y3k, It is essential to evaluate recent concept-based methods [4,5] to demonstrate if the vulnerability of the conce...
[ -1, -1, 5, 5, -1, -1, -1, -1, -1, -1, -1, 3, 5 ]
[ -1, -1, 4, 4, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "ajHk5hGF4YY", "5pUs6QY24Oj", "iclr_2022_a3hQPNqIFk6", "iclr_2022_a3hQPNqIFk6", "cRH73bM9Vrd", "DkL-AetRZv-", "iclr_2022_a3hQPNqIFk6", "0vy3WG9-6bD", "0Rwy9j1UYFI", "QYFtIufPOL4", "eQTowh91ojN", "iclr_2022_a3hQPNqIFk6", "iclr_2022_a3hQPNqIFk6" ]
iclr_2022_G33_uTwQiL
Equivariant Vector Field Network for Many-body System Modeling
Modeling many-body systems has been a long-standing challenge in science, from classical and quantum physics to computational biology. Equivariance is a critical physical symmetry for many-body dynamic systems, which enables robust and accurate prediction under arbitrary reference transformations. In light of this, great efforts have been put into encoding this symmetry into deep neural networks, which significantly boosts the prediction performance of downstream tasks. Some general equivariant models which are computationally efficient have been proposed; however, these models have no guarantee on their approximation power and may suffer information loss. In this paper, we leverage insights from the scalarization technique in differential geometry to model many-body systems by learning the gradient vector fields, which are SE(3) and permutation equivariant. Specifically, we propose the Equivariant Vector Field Network (EVFN), which is built on a novel tuple of equivariant bases and the associated scalarization and vectorization layers. Since our tuple equivariant basis forms a complete basis, learning the dynamics with our EVFN incurs no information loss. We evaluate our method on predicting trajectories of simulated Newtonian mechanics systems with both fully and partially observed data, as well as the equilibrium state of small molecules (molecular conformation) evolving as a statistical mechanics system. Experimental results across multiple tasks demonstrate that our model achieves the best or competitive performance against baseline models on various types of datasets.
Reject
The paper proposes a symmetry-informed neural network for modelling many-body systems. The network is empirically evaluated in the tasks of predicting Newtonian trajectories and molecular conformations. All four reviewers are critical of the paper and recommend rejection (one weak, three strong). The reviews have flagged weaknesses and quality issues with several aspects of the submission, including the proposed methodology, the novelty of the contribution, and the clarity of the presentation. Although detailed clarifications were provided by the authors, most of the reviewers' concerns remain, and the consensus among reviewers remains to reject the paper. Consequently, the current version of the paper does not appear to meet the quality standards for acceptance to ICLR.
train
[ "snJC2fZcTgy", "BeURY1MgazR", "isjRsiGLXW", "SSopTqUbF_P", "z--7lbjbK23", "QH_MsLtaJkh", "O7Mfk3eo_3", "YxaAWPe_J8S", "eiQpqStS8jT", "KL3aP_kWcdm", "oU7ZD3sute3", "1y4JxYACVd", "U73qfqQC3SZ", "9aEcK4ZWnaU", "szORzfQzTJ2", "gAKRMfEEROx", "dmtn47SKFrn", "wftNPc56n_L", "2KF8c8FJjqs"...
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", ...
[ " To rigorously validate the expressive power of the proposed equivariant basis, we applied our scalarization module directly on the `EGNN` model, obtaining the model variant `EGNN+Scalarization`. As shown in the following table, for the molecular generation task (QM9), introducing the scalarization module could si...
[ -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4 ]
[ "isjRsiGLXW", "iclr_2022_G33_uTwQiL", "KL3aP_kWcdm", "YxaAWPe_J8S", "O7Mfk3eo_3", "eiQpqStS8jT", "szORzfQzTJ2", "wftNPc56n_L", "1y4JxYACVd", "9aEcK4ZWnaU", "e2IgawSKqoZ", "oU7ZD3sute3", "iclr_2022_G33_uTwQiL", "gAKRMfEEROx", "BeURY1MgazR", "Dia_fiq_qF8", "2KF8c8FJjqs", "dmtn47SKFrn...
iclr_2022_Q8OjAGkxwP5
Limitations of Active Learning With Deep Transformer Language Models
Active Learning (AL) has the potential to reduce labeling cost when training natural language processing models, but its effectiveness with the large pretrained transformer language models that power today's NLP is uncertain. We present experiments showing that when applied to modern pretrained models, active learning offers inconsistent and often poor performance. As in prior work, we find that AL sometimes selects harmful "unlearnable" collective outliers, but we discover that some failures have a different explanation: the examples AL selects are informative but also increase training instability, reducing average performance. Our findings suggest that for some datasets this instability can be mitigated by training multiple models and selecting the best on a validation set, which we show impacts relative AL performance comparably to the outlier-pruning technique from prior work while also increasing absolute performance. Our experiments span three pretrained models, ten datasets, and four active learning approaches.
Reject
The paper proposes a novel explanation for the ineffectiveness of active learning (AL), namely, that AL will select unlearnable collective outliers. Reviewers generally find the finding interesting, but the paper lacks in-depth analyses. There are additional concerns about the experimental setups.
train
[ "pXYMWJwHBex", "pZicUZMjsyw", "75X03mNE2kY", "k4M2RIm6i6D", "cZ50f-Pgla-", "7hjm90--VOP", "hTM1G8xtiA_", "oTKqfO4VRtK", "ETUZ7_m07X", "Jum63Lofw8", "z9EoLgKZNE0", "aaQWkYRFQMn", "J4Z-f50Oo_l", "gaXylHaxXWD", "PRwSOfCZOTS", "NLDANw4h8d" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I have updated the score.", "This paper tests 4 deep active learning methods across 10 datasets (including text classification and multi-choice commonsense reasoning) based on pre-trained LMs. It conducts many ablation studies to explore whether some inherent factors like batch size in active learning, the mode...
[ -1, 5, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, 3, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "ETUZ7_m07X", "iclr_2022_Q8OjAGkxwP5", "7hjm90--VOP", "oTKqfO4VRtK", "iclr_2022_Q8OjAGkxwP5", "Jum63Lofw8", "z9EoLgKZNE0", "NLDANw4h8d", "pZicUZMjsyw", "cZ50f-Pgla-", "PRwSOfCZOTS", "gaXylHaxXWD", "iclr_2022_Q8OjAGkxwP5", "iclr_2022_Q8OjAGkxwP5", "iclr_2022_Q8OjAGkxwP5", "iclr_2022_Q8O...
iclr_2022_Nus6fOfh1HW
On the Relationship between Heterophily and Robustness of Graph Neural Networks
Empirical studies on the robustness of graph neural networks (GNNs) have suggested a relation between the vulnerabilities of GNNs to adversarial attacks and the increased presence of heterophily in perturbed graphs (where edges tend to connect nodes with dissimilar features and labels). In this work, we formalize the relation between heterophily and robustness, bridging two topics previously investigated by separate lines of research. We theoretically and empirically show that for graphs exhibiting homophily (low heterophily), impactful structural attacks always lead to increased levels of heterophily, while for graphs exhibiting heterophily the change in the homophily level depends on the node degrees. By leveraging these insights, we deduce that a design principle identified to significantly improve predictive performance under heterophily—separate aggregators for ego- and neighbor-embeddings—can also inherently offer increased robustness to GNNs. Our extensive empirical analysis shows that GNNs adopting this design alone can achieve significantly improved empirical and certifiable robustness compared to the best-performing unvaccinated model. Furthermore, models with this design can be readily combined with explicit defense mechanisms to yield improved robustness, with up to an 18.33% increase in performance under attacks compared to the best-performing vaccinated model.
Reject
This work studies the relation between graph heterophily and the robustness of GNNs, and theoretically shows that effective structural attacks on GNNs for homophilous graphs lead to an increased heterophily level, while for heterophilous graphs they alter the homophily level contingent on node degrees, under some specific assumptions. Overall, the findings in the paper are interesting and can be useful for other researchers trying to improve GNNs' robustness on homophilic and heterophilic datasets. After the discussion and rebuttal, the main concerns are: - While the paper has shown some interesting observations, no new methodology was proposed based on these findings. - The authors have attempted to relax assumptions and justified their setup experimentally; however, the explanations are still limited. For example, Theorem 1 does not allow for attention mechanisms, different choices of aggregator, skip-connections, or more GNN layers.
train
[ "0q8j9QB16ri", "wNWyayB8QMt", "RSzzqr_K4R", "wp0y5yNi0c9", "RNXnr8Yr5jU", "ZZS_Y5c1tR", "pHDqQ_Ou4i-", "uTfthOYKGGs", "9cb27si_qg", "vMbap5FNKFd", "8gYWjasaPKP", "s6Bd7uFG6OJ", "JHguSJjPhG", "LMXgxZOI7e_", "IoBzeHmbBv", "nuFtavNxp-i", "latmzw5kqC", "uYElU9yWS6O", "uOP7cVzwFU1", ...
[ "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", ...
[ "In this paper, the authors analyze the relation between GNN robustness against attacks with heterophily and propose a method to improve GNN robustness by separating the aggregators of ego- and neighbor-embeddings. In this paper, the authors analyze the relation between GNN robustness against attacks with heterophi...
[ 5, -1, -1, -1, 5, -1, -1, 6, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, -1, -1, -1, 4, -1, -1, 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2022_Nus6fOfh1HW", "9cb27si_qg", "8gYWjasaPKP", "latmzw5kqC", "iclr_2022_Nus6fOfh1HW", "pHDqQ_Ou4i-", "uYElU9yWS6O", "iclr_2022_Nus6fOfh1HW", "IoBzeHmbBv", "iclr_2022_Nus6fOfh1HW", "tdNIAEEUa94", "JHguSJjPhG", "nuFtavNxp-i", "uTfthOYKGGs", "nuFtavNxp-i", "LMXgxZOI7e_", "0q8j9QB...
iclr_2022_dLTXoSIcrik
Avoiding Overfitting to the Importance Weights in Offline Policy Optimization
Offline policy optimization has a critical impact on many real-world decision-making problems, as online learning is costly and concerning in many applications. Importance sampling and its variants are a widely used type of estimator in offline policy evaluation, and they can help remove assumptions on the function approximations chosen to represent value functions and process models. In this paper, we identify an important overfitting phenomenon in optimizing the importance-weighted return, and propose an algorithm to avoid this overfitting. We provide a theoretical justification of the proposed algorithm through a better per-state-neighborhood normalization condition and show the limitations of previous attempts at this approach through an illustrative example. We further test our proposed method in a healthcare-inspired simulator and on a logged dataset collected from real hospitals. These experiments show that the proposed method exhibits less overfitting and better test performance compared with state-of-the-art batch reinforcement learning algorithms.
Reject
In this paper, the authors proposed an offline policy optimization algorithm, motivated by an analysis of the upper bound on the error of the importance sampling policy value estimator. Specifically, by decomposing the error in a particular way, the authors identified an error term that does not converge. Then, the authors introduce constraints over feasible actions to avoid the overfitting induced by such errors. Finally, the authors tested the proposed algorithm empirically. The paper is well-motivated and the authors addressed some of the questions in their rebuttals. However, there are still several issues that need to be addressed: - The alternative practical estimator with a plug-in behavior distribution would perfectly avoid the overfitting, which is, however, ignored. This is an important and easy-to-implement competitor. - The pessimistic principle in the face of uncertainty (PFU) has been exploited extensively in the offline policy optimization problem. How the proposed algorithm is connected to the PFU has not been discussed carefully, especially in terms of non-asymptotic sample complexity, which leaves the paper not well-positioned. - While the motivation is derived from the unbiased importance sampling estimator, the counterfactual risk minimization in Equation 7 is introduced suddenly, without clear justification. - In my opinion, for better clarity, the expressiveness of the policy family should not be discussed in this way. I understand the authors would like to avoid any possible degeneration, and explain the asymptotic losslessness in terms of policy flexibility. However, the whole point of the paper is to introduce some mechanism that avoids possible overfitting by regularizing the policy family. In other words, the restriction is on purpose and beneficial. I think the argument about policy family expressiveness should be re-considered and re-discussed. Minor: - The Markovian vs. non-Markovian baseline comparison is not fair, and more comparisons on well-known benchmarks, e.g., OpenAI Gym, should be conducted. - The \sigma upper bound should be explicitly provided and verified in practice. In sum, the paper is well-motivated but needs further improvement to be published.
train
[ "YsTPQTVr52X", "RPHGbLY5sA", "k9KpjS2Wr99", "6JI85CdsNb5", "YMjWkpDkACg", "Iamc8uErlv", "s2LffFnbaFJ", "9uRE2uV_Bou" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer’s constructive comments. See the following response to each of the major comment points.\n1) We thank the reviewer’s suggestion. The toy example in the main paper (Example 1) and Appendix (Example 2 and 3) can both show this point. In Example 1, the third term is zero, the first term is a st...
[ -1, 6, -1, -1, -1, 5, 5, 6 ]
[ -1, 3, -1, -1, -1, 4, 4, 3 ]
[ "RPHGbLY5sA", "iclr_2022_dLTXoSIcrik", "s2LffFnbaFJ", "Iamc8uErlv", "9uRE2uV_Bou", "iclr_2022_dLTXoSIcrik", "iclr_2022_dLTXoSIcrik", "iclr_2022_dLTXoSIcrik" ]
iclr_2022_9Cwxjd6nRh
High Fidelity Visualization of What Your Self-Supervised Representation Knows About
Discovering what is learned by neural networks remains a challenge. In self-supervised learning, classification is the most common task used to evaluate how good a representation is. However, relying only on such a downstream task can limit our understanding of how much information is contained in the representation of a given input. In this work, we study how to visualize representations learned with self-supervised models. We investigate a simple gradient descent-based method to match a target representation and show the limitations of such techniques. We overcome these limitations by developing a representation-conditioned diffusion model (RCDM) that is able to generate high-quality inputs that share commonalities with a given representation. We further demonstrate how our model's generation quality is on par with state-of-the-art generative models and how the representation conditioning opens new avenues to analyze and improve self-supervised models.
Reject
This paper proposes a method for visualizing representations of neural networks trained with self-supervised learning using conditional denoising diffusion probabilistic models. By generating multiple images conditioned on a representation, one can identify what aspects the representation is and is not sensitive to. The proposed method allows for high-fidelity generated images that can be used to compare different self-supervised methods and layers. Reviewers agreed that the paper proposed reasonable methodology, targeted an interesting problem of understanding what is learned by self-supervised methods, and presented interesting qualitative evaluations. However, there remained concerns about the novelty of the results in comparison to other methods for probing representations (e.g. classification-based), the subjectivity of interpreting the qualitative results, and the limited quantification of the intuition gained from the visualizations. While the authors have argued that the point of the paper is to showcase the merits of a qualitative visual analysis method, reviewers found that the presented results were insufficient to demonstrate the value of the proposed approach. A number of ideas were discussed with reviewers on how to highlight the value of visualization, which could strengthen the paper in the future. Given the lack of novelty on the conditional generation side and the limited insight gained from the qualitative results, I cannot recommend this paper for acceptance in its current form.
train
[ "d2RvSnrZ1WJ", "8ibBDS1E4v", "qAfb1xqone6", "qKFbi4g1A8N", "J0TApWx-4Oy", "diPzdhPoYqa", "V12hIT3K8iJ", "tF1PfY1Pqm5", "IaVbPiAXIB", "JgwbuewRlBq", "Zh89c5KT-E-", "n2YS-K20vDU", "h8KZkBLY1vQ", "BQhP1rmp6dK", "Wx7CGOt496" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to start by thanking all reviewers for their careful reading of our paper, and for their thoughtful comments, which helped us improve the paper and, we hope, clarify its message.\nThe main and common issue highlighted in the reviews was that the contribution this research makes was unclear. We thus ...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 8 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "iclr_2022_9Cwxjd6nRh", "n2YS-K20vDU", "iclr_2022_9Cwxjd6nRh", "V12hIT3K8iJ", "h8KZkBLY1vQ", "h8KZkBLY1vQ", "qAfb1xqone6", "qAfb1xqone6", "qAfb1xqone6", "BQhP1rmp6dK", "BQhP1rmp6dK", "Wx7CGOt496", "iclr_2022_9Cwxjd6nRh", "iclr_2022_9Cwxjd6nRh", "iclr_2022_9Cwxjd6nRh" ]
iclr_2022_UyBxDoukIB
Teamwork makes von Neumann work: Min-Max Optimization in Two-Team Zero-Sum Games
Motivated by recent advances in both theoretical and applied aspects of multiplayer games, spanning from e-sports to multi-agent generative adversarial networks, we focus on min-max optimization in team zero-sum games. In this class of games, players are split into two teams with payoffs equal within the same team and of opposite sign across the opposing team. Unlike textbook two-player zero-sum games, finding a Nash equilibrium in our class can be shown to be $\textsf{CLS}$-hard, i.e., it is unlikely to have a polynomial time algorithm for computing Nash equilibria. Moreover, in this generalized framework, we establish that even asymptotic last-iterate or time-average convergence to a Nash equilibrium is not possible using Gradient Descent Ascent (GDA), its optimistic variant, or extra gradient. Specifically, we present a family of team games whose induced utility is non-multilinear with non-attractive $\textit{per-se}$ mixed Nash equilibria, as strict saddle points of the underlying optimization landscape. Leveraging techniques from control theory, we complement these negative results by designing a modified GDA that converges locally to Nash equilibria. Finally, we discuss connections of our framework with AI architectures with a team competition structure, such as multi-agent generative adversarial networks.
Reject
In this paper, the authors study "team zero-sum games", where two teams face each other with opposite objectives. The main result is that finding an equilibrium is CLS-hard, hence probably not solvable in polynomial time. This result is obtained via a reduction to some congestion games. Three reviewers gave a mild positive score (6) while the fourth one had more concerns. I tend to agree with the first three reviewers, with a personal opinion around 5-6. The paper is interesting, but could benefit from polishing here and there (I acknowledge that the related work section is more precise after discussion). This said, I also partly agree with the last reviewer in the sense that the result of this paper is a bit narrow (also not really surprising, but we cannot always have breathtaking results), and I am also not sure that most of the ICLR community will be interested in this kind of result. This is not really a criticism, but this paper is really borderline, and this is what makes it fall into the rejection pile. For instance, I think this paper would be better suited to other conferences more concerned with games and computation (or even a journal).
train
[ "Bw3pgEOmlCP", "QH0m5JuafSx", "kgYOk41X18y", "wlPJTjGVA5o", "IYoZgi6cDN5", "gI9JVuID4HO", "-73bd78jAVh", "rKBGfJiyVhT", "Qbe6BPJ7jrS", "2LHwcMnyg3z", "k37jmhlE77t", "1GICltHzXSu", "xFeiDSuU8m", "3O06zs5x4Fg", "P7WoSGmMnLQ", "3QLE9Ap8fpT", "Pj6CV1AGHtn", "562_WidUTGd", "BdyzMZiyZ5...
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", ...
[ "The main contributions of the paper are as follows. First, the authors show that the computation of Nash equilibrium in two-team zero-sum game is CLS-hard. As a result, GDA and its variants (including optimistic GDA and extragradient) cannot---in general---be used to converge to the Nash equilibrium. Then, th...
[ 6, -1, -1, -1, 3, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1 ]
[ 3, -1, -1, -1, 4, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1 ]
[ "iclr_2022_UyBxDoukIB", "Qbe6BPJ7jrS", "wlPJTjGVA5o", "gI9JVuID4HO", "iclr_2022_UyBxDoukIB", "1GICltHzXSu", "1GICltHzXSu", "Bw3pgEOmlCP", "Bw3pgEOmlCP", "Bw3pgEOmlCP", "iclr_2022_UyBxDoukIB", "xCY_Z6kmmfx", "3O06zs5x4Fg", "vfI7sT_b5WP", "IYoZgi6cDN5", "Bw3pgEOmlCP", "k37jmhlE77t", ...
iclr_2022_BkIV7EOXkSs
Implicit Regularization of Bregman Proximal Point Algorithm and Mirror Descent on Separable Data
The Bregman proximal point algorithm (BPPA), one of the centerpieces of the optimization toolbox, has been witnessing emerging applications. With a simple and easy-to-implement update rule, the algorithm bears several compelling intuitions for its empirical success, yet rigorous justifications remain largely unexplored. We study the computational properties of BPPA through classification tasks with separable data, and demonstrate provable algorithmic regularization effects associated with BPPA. We show that BPPA attains a non-trivial margin, which closely depends on the condition number of the distance-generating function inducing the Bregman divergence. We further demonstrate that the dependence on the condition number is tight for a class of problems, thus showing the importance of the divergence in affecting the quality of the obtained solutions. In addition, we extend our findings to mirror descent (MD), for which we establish similar connections between the margin and the Bregman divergence. Through a concrete example, we show that BPPA/MD converges in direction to the maximal-margin solution with respect to the squared Mahalanobis distance. Our theoretical findings are among the first to demonstrate the benign learning properties of BPPA/MD, and also provide strong corroboration for a careful choice of divergence in algorithmic design.
Reject
The paper extends the analysis of Telgarsky (2013) and Gunasekar et al. (2018) to the Bregman proximal point algorithm and to mirror descent. Upper and lower bounds show a dependency on the condition number of the distance-generating function used in the Bregman divergence. The paper received lukewarm reviews, also because the topic does not seem to be a good match for this community. In fact, none of the algorithms analyzed seem to be commonly used as optimization algorithms for deep learning, despite the applications mentioned by the authors. So, I didn't take into account the concerns about the relevance of the results for deep learning people, the complaints about missing references from the OMD literature, and the supposedly restricted setting. However, even ignoring the above issues, the paper seems to fall squarely on the borderline. Hence, I carefully read it. It seems to me that the analysis heavily builds on previous work, in particular the seminal paper of Telgarsky (2013) and the Fenchel-Young trick in Ji & Telgarsky (2019). The part on the Bregman divergence is novel, but technically speaking it is also straightforward for people in this sub-community. For example, Lemma B.3 is very well known to any optimization person. Moreover, the curvature of the Bregman divergence is exactly the term one would expect to appear. So, the upper bound seems incremental compared to past work and does not really add much to our understanding of this problem. The matching lower bound is probably the only truly interesting result. However, it still does not exclude the possibility of achieving a better margin when measuring it in a different way. Indeed, measuring the margin according to the (dual) norm appearing in the strong convexity definition of the Bregman divergence is not completely justified, but rather seems to be a way to make the analysis work coherently. Overall, given the lukewarm reviews and my evaluation of the limited novelty of the theoretical results, I recommend rejecting this paper.
train
[ "VNqR69zMvAu", "mpMkhft164Q", "XhSCIAzoEtO", "TWCDAKU3il", "8KdS5ezg7kq", "TUe6kbil42", "jwW36p_ihg", "TUSAPFky0J8", "Llo8j6DYlLe", "eqwY7LhnT57", "OhWTR0L_e2U", "knuAhvsNYJ", "5NjIVwhUidX", "xDUJJLXin1", "6qdjiHFs_i", "0EeterZog2H", "r0opKKl-hex", "Apkx_e8XZNA", "vL0sD_c8Rgb" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to thank all reviewers for providing constructive feedback to improve our current draft. We have made corresponding changes to the updated paper, cleaned up the presentation based on detailed comments. We have also strengthened our experimental part with new CIFAR-100 results. All changes were mark...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, -1, -1, -1, -1, -1, -1, 4, 2, 3 ]
[ "iclr_2022_BkIV7EOXkSs", "8KdS5ezg7kq", "mpMkhft164Q", "xDUJJLXin1", "r0opKKl-hex", "0EeterZog2H", "Llo8j6DYlLe", "r0opKKl-hex", "knuAhvsNYJ", "iclr_2022_BkIV7EOXkSs", "vL0sD_c8Rgb", "5NjIVwhUidX", "eqwY7LhnT57", "Apkx_e8XZNA", "TUe6kbil42", "r0opKKl-hex", "iclr_2022_BkIV7EOXkSs", ...
iclr_2022_By5Uwd_xzNF
Neural Structure Mapping For Learning Abstract Visual Analogies
Building conceptual abstractions from sensory information and then reasoning about them is central to human intelligence. Abstract reasoning both relies on, and is facilitated by, our ability to make analogies about concepts from known domains to novel domains. Structure Mapping Theory of human analogical reasoning posits that analogical mappings rely on (higher-order) relations and not on the sensory content of the domain. This enables humans to reason systematically about novel domains, a problem with which machine learning (ML) models tend to struggle. We introduce a two-stage neural framework, which we label Neural Structure Mapping (NSM), to learn visual analogies from Raven's Progressive Matrices, an abstract visual reasoning test of fluid intelligence. Our framework uses (1) a multi-task visual relationship encoder to extract constituent concepts from raw visual input in the source domain, and (2) a neural module net analogy inference engine to reason compositionally about the inferred relation in the target domain. Our NSM approach (a) isolates the relational structure from the source domain with high accuracy, and (b) successfully utilizes this structure for analogical reasoning in the target domain.
Reject
This manuscript presents an approach to handling abstract visual analogy tasks where panels of drawings are shown with a missing entry. One of a number of candidate drawings must be chosen to complete the panel. Reviewers brought up several concerns: 1. The task performed was made considerably easier by providing additional annotations at training time. This was not the case in the original task in prior work that the manuscript builds on. No convincing explanation was provided as to why this change is critical to accommodate the manuscript's contributions. 2. A key feature of the approach, the adaptive modular design, does not seem to contribute much. The authors rightly point out this may be a limitation of current benchmarks. Reviewers were sympathetic to this view, but that leaves the manuscript in a tough spot: a central contribution cannot be evaluated. What is even more worrisome is that without evaluating the effectiveness of the adaptive design we cannot know if it is working at all. What if adaptivity is required for some future analogy tasks but it turns out that this approach, despite seeming to be adaptive, falls short? 3. Another contribution, the multi-task encoder, does not seem to provide much value in ablation experiments. The manuscript would be improved if this feature were removed or its usefulness demonstrated. A number of smaller issues were also brought up by reviewers. Throughout the responses to reviewers, the authors highlight that their central contribution is incorporating a structure mapping prior: "The central contribution of our method is introducing a structure mapping prior ...". I would like to draw the authors' attention to the fact that they had to remind 3 of the 4 reviewers to focus on this rather than another aspect of the work. That clearly indicates that the manuscript and the work need a shift in focus. I would suggest that the authors double down on their structure mapping prior, eliminate all other features which turned out to be controversial or impossible to evaluate, and demonstrate the utility of their idea in two domains, i.e., including another domain. This would really highlight the core contribution. Unfortunately, what may turn out to be a good idea, the structure mapping prior, is lost among many other complexities. I hope the authors are not discouraged and that we see this line of work again in the future.
val
[ "SaK-gp815ih", "mg3AodDRMxW", "ycC9FSq6_A-", "jd6_mU7RebP", "2ZDCl6WA6B", "2JqXjbHrU5I", "LiuG2KHeAa", "DUe9Tvf_jR", "uNN0a0KaUbc", "1hXQ-FDltyy", "256cfhLvQ-", "YAY9Tb5WDuj", "pz42dWa-LU", "eop9-_PCMp-", "O_5_im6SiMw", "UciJqzcDztU", "ZVWkrHLxoGt" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for their clear and thorough response. However, my recommendation remains the same: marginally below the acceptance threshold. \n\nI share the same question as reviewer z4sn, who asks \"...in what sense is analogy inferred?\". In my view, this question is not completely answered in the work or...
[ -1, -1, 5, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 5 ]
[ -1, -1, 4, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "1hXQ-FDltyy", "jd6_mU7RebP", "iclr_2022_By5Uwd_xzNF", "LiuG2KHeAa", "iclr_2022_By5Uwd_xzNF", "YAY9Tb5WDuj", "pz42dWa-LU", "iclr_2022_By5Uwd_xzNF", "ZVWkrHLxoGt", "256cfhLvQ-", "UciJqzcDztU", "2ZDCl6WA6B", "ycC9FSq6_A-", "O_5_im6SiMw", "iclr_2022_By5Uwd_xzNF", "iclr_2022_By5Uwd_xzNF", ...
iclr_2022_PiDkqc9saaL
Lower Bounds on the Robustness of Fixed Feature Extractors to Test-time Adversaries
Understanding the robustness of machine learning models to adversarial examples generated by test-time adversaries is a problem of great interest. Recent theoretical work has derived lower bounds on how robust \emph{any model} can be, when a data distribution and attacker constraints are specified. However, these bounds only apply to arbitrary classification functions and do not account for specific architectures and models used in practice, such as neural networks. In this paper, we develop a methodology to analyze the robustness of fixed feature extractors, which in turn provide bounds on the robustness of any classifier trained on top of it. In other words, this indicates how robust the representation obtained from that extractor is with respect to a given adversary. Our bounds hold for arbitrary feature extractors. The tightness of these bounds relies on the effectiveness of the method used to find collisions between pairs of perturbed examples at deeper layers. For linear feature extractors, we provide closed-form expressions for collision finding while for arbitrary feature extractors, we propose a bespoke algorithm based on the iterative solution of a convex program that provably finds collisions. We utilize our bounds to identify the layers of robustly trained models that contribute the most to a lack of robustness, as well as compare the same layer across different training methods to provide a quantitative comparison of their relative robustness. Our experiments establish that each of the following lead to a measurable drop in robustness: i) layers that linearly reduce dimension, ii) sparsity induced by ReLU activations and, iii) mismatches in the attacker constraints at train and test time. These findings point towards future design considerations for robust models that arise from our methodology.
Reject
Thank you for your submission to ICLR. The reviewers were split on this paper, with more favoring acceptance but with relatively low confidence. After reading through the paper and reviews, I tend to agree slightly more with the more critical comments. The paper is very much on the borderline, but two issues ultimately weigh against it: 1) the rather incremental nature of the work compared to [Bhagoji, 2021], and 2) the rather small-scale evaluations in the current version, which the field has largely moved on from, as they often give overly-optimistic impressions of certified robustness. Ultimately, a lot of the extensions (which at this point are fairly standard in most methods for deep network verification) seem like they should really be taken into account in the current paper. For these reasons, I lean slightly towards not accepting the paper in its current state.
train
[ "Q-SSlpbxWoL", "SZ64M0zwlAv", "-8xsnHSH2sV", "kJvRYmSY5nD", "BjxC6sFe6dR" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors present a method for deriving a lower bound on the loss of a specific model architecture under adversarial attack. The method heavily relies on previous work [Bhagoji, 2021]. The main contribution of the paper is therefore extending the concepts presented in [Bhagoji, 2021] for evaluating the input of ...
[ 6, 5, 6, 6, 8 ]
[ 3, 4, 3, 3, 3 ]
[ "iclr_2022_PiDkqc9saaL", "iclr_2022_PiDkqc9saaL", "iclr_2022_PiDkqc9saaL", "iclr_2022_PiDkqc9saaL", "iclr_2022_PiDkqc9saaL" ]
iclr_2022_ZgV2C9NKk6Q
TorchGeo: deep learning with geospatial data
Remotely sensed geospatial data are critical for earth observation applications including precision agriculture, urban planning, disaster monitoring and response, and climate change research, among others. Deep learning methods are particularly promising for modeling many earth observation tasks given the success of deep neural networks in similar computer vision tasks and the sheer volume of remotely sensed imagery available. However, the variance in data collection methods and handling of geospatial metadata make the application of deep learning methodology to remotely sensed data nontrivial. For example, satellite imagery often includes additional spectral bands beyond red, green, and blue and must be joined to other geospatial data sources that can have differing coordinate systems, bounds, and resolutions. To help realize the potential of deep learning for remote sensing applications, we introduce TorchGeo, a Python library for integrating geospatial data into the PyTorch deep learning ecosystem. TorchGeo provides data loaders for a variety of benchmark datasets, composable datasets for generic geospatial data sources, samplers for geospatial data, and transforms that work with multispectral imagery. TorchGeo is also the first library to provide pre-trained models for multispectral satellite imagery, allowing for advances in transfer learning on downstream earth observation tasks with limited labeled data. We use TorchGeo to create reproducible benchmark results on existing datasets, benchmark our proposed method for preprocessing geospatial imagery on-the-fly, and investigate the differences between ImageNet pre-training and in-domain self-supervised pre-training on model performance across several datasets. We aim for TorchGeo to become a new standard for reproducibility and for driving progress at the intersection of deep learning and remotely sensed geospatial data.
Reject
This paper develops a Python library for geospatial data based on PyTorch, TorchGeo. TorchGeo is a useful tool for applying deep learning methods to geospatial data. The reviewers agree on the contribution of this library: it will help machine learning researchers use geospatial data and help geospatial researchers apply machine learning methods. However, the technical contribution is low, and the novelty is not high enough, since the results can be achieved by a combination of existing packages.
train
[ "ZNZdPcZX8h", "_rTMywwjrq6", "GUVDD39pUUu", "SWMUzxa-tSI", "6CwfoYoXExlR", "t7I119CV2bCU", "x0wYkle6UU", "_F8DjOoiym1", "eFtmdsoPF5_", "EWQMO0SXq8O", "rsAr_fsWU7B", "Sq8K8nOWn9WO", "GBiQe-e2BiD", "ASKmt-ImucR", "0Y9MJJ0_eyq", "VRfUwZrxzgw", "IxGBMVCXUH-", "F_d0TCSVaH", "AE42v1dFT...
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Ah, I see it now.", " We updated the paper most recently on the 23rd, is there something specific that we forgot to include?", " Thank you for the detailed answers and for providing additional datasets.\n\nTorchGeo would certainly be a good software platform for remote sensing applications. However, I am not ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 4, 3 ]
[ "_rTMywwjrq6", "SWMUzxa-tSI", "0Y9MJJ0_eyq", "Sq8K8nOWn9WO", "iclr_2022_ZgV2C9NKk6Q", "0Y9MJJ0_eyq", "AE42v1dFTqF", "F_d0TCSVaH", "IxGBMVCXUH-", "IxGBMVCXUH-", "IxGBMVCXUH-", "IxGBMVCXUH-", "0Y9MJJ0_eyq", "VRfUwZrxzgw", "iclr_2022_ZgV2C9NKk6Q", "iclr_2022_ZgV2C9NKk6Q", "iclr_2022_ZgV...
iclr_2022_EXe93Md8RqS
Data Quality Matters For Adversarial Training: An Empirical Study
Multiple intriguing problems are hovering in adversarial training, including robust overfitting, robustness overestimation, and the robustness-accuracy trade-off. These problems pose great challenges to both reliable evaluation and practical deployment. Here, we empirically show that these problems share one common cause --- low-quality samples in the dataset. Specifically, we first propose a strategy to measure data quality based on the learning behaviors of the data during adversarial training and find that low-quality data may not be useful and may even be detrimental to adversarial robustness. We then design controlled experiments to investigate the interconnections between data quality and problems in adversarial training. We find that when low-quality data is removed, robust overfitting and robustness overestimation can be largely alleviated, and the robustness-accuracy trade-off becomes less significant. These observations not only verify our intuition about data quality but may also open new opportunities to advance adversarial training.
Reject
This paper studies the effect of data quality on adversarial robustness. It focuses on a single measure of data quality (the number of times there is a perturbation that is misclassified across training iterations). The authors study the effect of data quality on robust overfitting, robustness-accuracy tradeoffs, and "robustness overestimation" (the gap between strong and weak attacks). The main conclusions reported are that data quality as measured by their metric plays an important role in all three aspects, and a takeaway is that data of higher quality may improve robustness. While the reviewers appreciated the premise of this work, some concerns remain post rebuttal. For example, a few reviewers remain skeptical of the universality of the notion of "data quality" as measured in the paper, because different training methods behave differently and the data quality measured in the paper is tailored to a particular training algorithm. Some reviewers also opined that at least one of the practical implications discussed in the rebuttal should be systematically investigated, and that it is important to study the effectiveness of different data quality measures, especially for the extra data. Given all this, we are unable to recommend acceptance at this time. We hope the authors find the reviewer feedback helpful.
train
[ "2yTTrzYRWS1", "qgqpJxUwyD3", "OBfkOafsATy", "QigzovzZAtN", "72rrFlQiUjx", "Lx8FUzOErUg", "BHlnJuBqjC-", "q1OXYmF2b_N", "RRjj9HvNQ-1", "vYqk1GTBdjh", "ocWa41xTUx-", "oy7S6KRkWQ", "YNieYIy79db" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a metric for evaluating the learning stability of data and point out that unstably-learned instances are of low-quality for adversarial training. Through extensive controlled experiments, this paper investigates the impact of low-quality data on three issues in adversarial training, i.e., robus...
[ 6, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 4, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ "iclr_2022_EXe93Md8RqS", "72rrFlQiUjx", "iclr_2022_EXe93Md8RqS", "q1OXYmF2b_N", "vYqk1GTBdjh", "RRjj9HvNQ-1", "oy7S6KRkWQ", "ocWa41xTUx-", "YNieYIy79db", "2yTTrzYRWS1", "OBfkOafsATy", "iclr_2022_EXe93Md8RqS", "iclr_2022_EXe93Md8RqS" ]
iclr_2022_AP1MKT37rJ
Should I Run Offline Reinforcement Learning or Behavioral Cloning?
Offline reinforcement learning (RL) algorithms can acquire effective policies by utilizing only previously collected experience, without any online interaction. While it is widely understood that offline RL is able to extract good policies even from highly suboptimal data, in practice offline RL is often used with data that resembles demonstrations. In this case, one can also use behavioral cloning (BC) algorithms, which mimic a subset of the dataset via supervised learning. It seems natural to ask: When should we prefer offline RL over BC? In this paper, our goal is to characterize environments and dataset compositions where offline RL leads to better performance than BC. In particular, we characterize the properties of environments that allow offline RL methods to perform better than BC methods even when only provided with expert data. Additionally, we show that policies trained on suboptimal data that is sufficiently noisy can attain better performance than even BC algorithms with expert data, especially on long-horizon problems. We validate our theoretical results via extensive experiments on both diagnostic and high-dimensional domains including robot manipulation, maze navigation and Atari games, when learning from a variety of data sources. We observe that modern offline RL methods trained on suboptimal, noisy data in sparse reward domains outperform cloning the expert data in several practical problems.
Accept (Poster)
This paper considers helping to decide whether behavior cloning or offline RL is likely to be more effective given a particular offline dataset. The reviewers initially appreciated the importance of insights into this question around how to best leverage an existing dataset. They also had some initial concerns: the theory is restricted to tabular settings, whereas many challenges typically arise when function approximators are used; the realism of the assumptions over the data collection process; and a number of places where further details or clarifications would better situate and strengthen the work. The authors gave very extensive responses to the feedback, which made reviewers feel much more confident about the revised paper and resulted in significantly higher scores. Though there remain many interesting areas for future work, this paper makes an interesting contribution that may be of interest to many using batch decision making data.
train
[ "ZYtLSwv0wja", "ZJipOSS461u", "stmTN0QD7U", "229n84zzWmO", "C-eIMsHoPr6", "wYNCq1ncYr", "dcBVBHQUN0k", "KRZmynFU1p1", "l7_QV0nlH5Y", "KurWXfaRHCU", "GxVEvwImPlw", "U19_OziHSsC", "FW6pMaVSgLX", "HOX6_1xq5Hq", "5mpDVS9IgI7", "-dGCXpXL53f", "VrlaL6aGHvn", "zkhPkLmjIu5", "gT9Ly8c60NV...
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", ...
[ "The paper considers a setting where we are given access to a dataset of expert or noisy-expert data collected from some MDP and need to decide whether to use either behavior cloning (BC) or offline RL. It conducts a theoretical analysis in a tabular setting showing that offline RL will recover a better policy than...
[ 6, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8 ]
[ 4, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3 ]
[ "iclr_2022_AP1MKT37rJ", "C-eIMsHoPr6", "KRZmynFU1p1", "iclr_2022_AP1MKT37rJ", "l7_QV0nlH5Y", "iclr_2022_AP1MKT37rJ", "GxVEvwImPlw", "2_F-ZCR9NVA", "KurWXfaRHCU", "FW6pMaVSgLX", "VrlaL6aGHvn", "phuOd-y-u9X", "gT9Ly8c60NV", "wYNCq1ncYr", "ZYtLSwv0wja", "HOX6_1xq5Hq", "-dGCXpXL53f", "...
iclr_2022_mqIeP6qPvta
FoveaTer: Foveated Transformer for Image Classification
Many animals and humans process the visual field with varying spatial resolution (foveated vision) and use peripheral processing to make eye movements and point the fovea to acquire high-resolution information about objects of interest. This architecture results in computationally efficient rapid scene exploration. Recent progress in vision Transformers has brought about new alternatives to the traditionally convolution-reliant computer vision systems. However, the Transformer models do not explicitly model the foveated properties of the visual system nor the interaction between eye movements and the classification task. We propose the foveated Transformer (FoveaTer) model, which uses pooling regions and eye movements to perform object classification tasks using a vision Transformer architecture. Our proposed model pools the image features using squared pooling regions, an approximation to the biologically inspired foveated architecture, and uses the pooled features as input to a Transformer network. It decides on subsequent fixation locations based on the attention assigned by the Transformer to various locations from previous and present fixations. The model uses a confidence threshold to stop scene exploration, dynamically allocating more fixation/computational resources to more challenging images. After reaching the stopping criterion, the model makes the final object category decision. We construct a Foveated model using our proposed approach and compare it against a Full-resolution model, which does not contain any pooling. On the ImageNet-100 dataset, our Foveated model achieves the accuracy of the Full-resolution model using only 35% of the transformer computations and 73% of the overall computations. Finally, we demonstrate our model's robustness against adversarial attacks, where it outperforms the full-resolution model.
Reject
This paper introduces an architecture that uses pooling regions and eye movements to sequentially build up an object representation. A confidence threshold is used to allow recognition in less time for easier images. There was a lot of disagreement on this paper. Those in favor argued that it is a worthy endeavor to explore new biologically motivated architectures and that foveated eye movements are an important aspect of human vision worth exploring for computer vision. Another pro was the improved robustness to some adversarial attacks. Those against argued that classification performance is not improved over SOTA and that more ablation studies should be done to better understand the role and importance of the various aspects of the model, and how they differ from other architectural designs that use dilated convolutions instead of the foveation module. I agree that more ablation studies would be useful to better understand the role of the different model components. While I feel that this novel sequential processing algorithm is worth publishing to increase activity in this area, I believe it would be best received after further studies help clarify the importance of different aspects of the model. I recommend resubmission after further analysis.
test
[ "ZBoQQBlrHdr", "CtiQ4U4ksZ6", "yZKGM3wawAx", "ErHXbu0S-el", "TSQQQoYpK2", "dUERAoQ_3UL", "XMVs5dWkLWy", "xcnpfkFNhTs", "4-LJZsDmKzS", "vr0pp91j9VG", "--71wrJ6eDU", "U-sXiy4AAYJ", "VuA4wYO4dF", "GAHorSM4i8", "FaCRBjJjow", "Q6srjYRbkv" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I think the author's feedback and useful discussion. I also read comments and discussions of other reviewers, and I want to keep my current rate. It is unfortunate that reviewers do not agree with each other and have large discrepancies in scores. \n\nThe main reason for my decision is it is really struggling for...
[ -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 8, 3, 6 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "XMVs5dWkLWy", "VuA4wYO4dF", "--71wrJ6eDU", "dUERAoQ_3UL", "iclr_2022_mqIeP6qPvta", "vr0pp91j9VG", "xcnpfkFNhTs", "U-sXiy4AAYJ", "iclr_2022_mqIeP6qPvta", "TSQQQoYpK2", "Q6srjYRbkv", "FaCRBjJjow", "GAHorSM4i8", "iclr_2022_mqIeP6qPvta", "iclr_2022_mqIeP6qPvta", "iclr_2022_mqIeP6qPvta" ]
iclr_2022_ljCoTzUsdS
Distinguishing rule- and exemplar-based generalization in learning systems
Despite the increasing scale of datasets in machine learning, generalization to unseen regions of the data distribution remains crucial. Such extrapolation is by definition underdetermined and is dictated by a learner’s inductive biases. Machine learning systems often do not share the same inductive biases as humans and, as a result, extrapolate in ways that are inconsistent with our expectations. We investigate two distinct such inductive biases: feature-level bias (differences in which features are more readily learned) and exemplar-vs-rule bias (differences in how these learned features are used for generalization). Exemplar- vs. rule-based generalization has been studied extensively in cognitive psychology, and in this work we present a protocol inspired by these experimental approaches for directly probing this trade-off in learning systems. The measures we propose characterize changes in extrapolation behavior when feature coverage is manipulated in a combinatorial setting. We present empirical results across a range of models and across both expository and real-world image and language domains. We demonstrate that measuring the exemplar-rule trade-off while controlling for feature-level bias provides a more complete picture of extrapolation behavior than existing formalisms. We find that most standard neural network models have a propensity towards exemplar-based extrapolation and discuss the implications of these findings for research on data augmentation, fairness, and systematic generalization.
Reject
The paper proposes a novel protocol for examining the inductive biases in learning systems, by quantifying the exemplar-rule trade-off (as measured by the exemplar-vs-rule propensity (EVR) defined in Eq. (2)) while controlling for feature-level bias. Reviewers mostly agree that the problem studied in this paper is practically relevant and that the two bias measures are potentially interesting and (jointly) more informative than existing measures such as spurious correlation. However, a shared concern among the reviewers (with confidence scores >= 3) is the clarity of the exposition: e.g., many key concepts such as the data conditions are informally specified [Section 2 (Reviewer TPBn)], some key messages are not clearly conveyed in the main paper [Section 3 (Reviewer RJtk)], and results are inconclusive or not sufficiently supported by the experimental results, both in the synthetic setting (Reviewer RJtk) and in the real-world setting (Reviewer yoH5). Based on the above concerns, the reviewers were not convinced that this work is well supported in its current state to merit acceptance for publication.
train
[ "0L1RBdP2mG", "26h4hnV19sn", "gq8LgNw7wB1", "JMaKLY9B9Ch", "b8I_BOdUhJT", "Vj136N1yAZz", "4zOORkiR8Pq", "9d5bzuiyAm", "HV5SImoQMh0", "QMZCRxoLu_N", "ZelP9eiBMBz", "NykuRS2f7J", "3-0vrSPiplw", "KWROTIezwIx", "Tdmh4Vu7vn", "_WCl8DFjdrQ", "gNz0BNNARxZ" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We sincerely thank the reviewer for reading our response and increasing their score!\n\n**> Is it reasonable to assume that a particular model family always exhibits the same level of rule- and exemplar- biases? I guess that will change according to the training data?**\n\nWe agree that it is important to disting...
[ -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 3 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 3, 4 ]
[ "gq8LgNw7wB1", "iclr_2022_ljCoTzUsdS", "JMaKLY9B9Ch", "ZelP9eiBMBz", "3-0vrSPiplw", "4zOORkiR8Pq", "9d5bzuiyAm", "gNz0BNNARxZ", "Tdmh4Vu7vn", "26h4hnV19sn", "26h4hnV19sn", "_WCl8DFjdrQ", "_WCl8DFjdrQ", "26h4hnV19sn", "iclr_2022_ljCoTzUsdS", "iclr_2022_ljCoTzUsdS", "iclr_2022_ljCoTzUs...
iclr_2022_WIJVRV7jnTX
Calibrated ensembles - a simple way to mitigate ID-OOD accuracy tradeoffs
We often see undesirable tradeoffs in robust machine learning where out-of-distribution (OOD) accuracy is at odds with in-distribution (ID) accuracy. A ‘robust’ classifier obtained via specialized techniques like removing spurious features has better OOD but worse ID accuracy compared to a ‘standard’ classifier trained via vanilla ERM. On six distribution shift datasets, we find that simply ensembling the standard and robust models is a strong baseline---we match the ID accuracy of a standard model with only a small drop in OOD accuracy compared to the robust model. However, calibrating these models in-domain surprisingly improves the OOD accuracy of the ensemble, completely eliminating the tradeoff: we achieve the best of both ID and OOD accuracy over the original models.
Reject
This is a borderline case and it's quite difficult to decide the recommendation. The paper works on a critically important problem, namely removing or reducing the in-distribution accuracy drop when we also need to take the out-of-distribution accuracy into account. The proposed method is simple and it works, which is great. However, as the reviewers discussed, the demonstrated applications are not very representative, and the authors should consider more popular setups such as few-shot learning and other forms of domain generalization. Furthermore, adversarial examples are also OOD (in most cases, since the ID manifolds are thin films and the attacks can easily go out of the ID manifolds), so it would be great if adversarial accuracy could be incorporated as a case of OOD accuracy. Since there is still room for improvement, we hope the paper would benefit from a cycle of revisions for a re-submission.
train
[ "NlxWCmknOGr", "df_7f7RX0-k", "h3ObzskyfG8", "NdLHdz2H8P4", "GRv571_1Pbo", "ZMCJpo_BztY", "4_Bqzb60sgH", "-RbDb2h60WS", "9g5cQZ45vOY", "kY32otJ73F", "UeU_BOpdL9n", "Q4xidVyxHF2", "uhaE_z6LIEw", "cihfFvtVdqn", "snKiC3umDTS", "o6ALW_XgD_f", "UiUYOY6xWq", "SGs3p3pvhP2", "wJ6xMoIPnEJ...
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_...
[ " We just wanted to check if our response addressed your concerns about trying out a stronger recalibration method, and about including results for datasets like WILDS? In short, we showed that vector scaling improves ID calibration, but does not improve OOD calibration or accuracy. We also explained that existing ...
[ -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "4_Bqzb60sgH", "-RbDb2h60WS", "4_Bqzb60sgH", "9g5cQZ45vOY", "4_Bqzb60sgH", "-RbDb2h60WS", "kY32otJ73F", "iclr_2022_WIJVRV7jnTX", "iclr_2022_WIJVRV7jnTX", "3TeE-TuoAkA", "o6ALW_XgD_f", "iclr_2022_WIJVRV7jnTX", "cihfFvtVdqn", "UiUYOY6xWq", "-RbDb2h60WS", "Q4xidVyxHF2", "PiaMg76HIG", ...
iclr_2022_e6MWIbNeW1
Trading Quality for Efficiency of Graph Partitioning: An Inductive Method across Graphs
Many applications of network systems can be formulated as several NP-hard combinatorial optimization problems regarding graph partitioning (GP), e.g., modularity maximization and NCut minimization. Due to this NP-hardness, balancing the quality and efficiency of GP remains a challenge. Existing methods use machine learning techniques to obtain high-quality solutions but usually have high complexity. Some fast GP methods adopt heuristic strategies to ensure low runtime but suffer from quality degradation. In contrast to conventional transductive GP methods applied to a static graph, we propose an inductive graph partitioning (IGP) framework across multiple evolving graph snapshots to alleviate the NP-hard challenge. IGP first conducts offline training of a novel dual graph neural network on historical snapshots to capture the structural properties of a system. The trained model is then generalized to newly generated snapshots for fast, high-quality online GP without additional optimization, achieving a better trade-off between quality and efficiency. IGP is also a generic framework that can capture the permutation-invariant partitioning ground truth of historical snapshots during offline training and tackle online GP on graphs with a non-fixed number of nodes and clusters. Experiments on a set of benchmarks demonstrate that IGP achieves quality and efficiency competitive with various state-of-the-art baselines.
Reject
This paper considers an important problem, graph partitioning, from an inductive viewpoint: assuming that the graphs are generated by independent draws from an unknown distribution, learn some parameters in an "offline" phase, and use these in the "online" phase (much as in PAC learning). The authors have also answered many of the reviewer questions. In particular, the comparison with existing work is substantial. While I laud the positives of this work and the importance of the inductive approach, I see an issue: as a reviewer points out and as agreed by the authors, the paper does not provide a theoretical guarantee of the quality of the generalization to unseen graphs. It would have been useful, e.g., to consider this on Erdos-Renyi G(n,p) models, stochastic block models, etc.
test
[ "Me5dAxWMvNn", "Uc2516PPb8z", "LOirJ0Vr08A", "myMmk9RBsoM", "IQ9qblJNjcg", "99LLEDP5BM5", "x0oHEUt-e8w", "A7d8ByvGitQ", "PIWNwgGUgX", "51I2DAoO3L", "u4iHekVynv4", "053eboQanX", "Dj05i83DIVX", "1-yAherqSDR", "9LV-Wz4ddz5", "XVRpMZ3SYL_", "K2pd2A4X0oj", "z9AJ3yYH0Dp", "ILM6zkEblJU"...
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author...
[ " Thank you for your update. \n\nFor your 3rd concerns regarding the node correspondence, on the premise of obeying the page limit (i.e, 9 pages for the main paper), we have tried our best to **add a simple motivating application** for the no node correspondence setting **in 5th paragraph of Section 1**, i.e., cond...
[ -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, 5, 5, 6 ]
[ -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 1, 3, 3, 3 ]
[ "Uc2516PPb8z", "99LLEDP5BM5", "iclr_2022_e6MWIbNeW1", "PIWNwgGUgX", "F-uFrDEWNN", "LOirJ0Vr08A", "YX68BTNzUpc", "zubHdq1uS-G", "ad25AMYQDSO", "L5FmdA0AwL", "F-uFrDEWNN", "F-uFrDEWNN", "LOirJ0Vr08A", "LOirJ0Vr08A", "LOirJ0Vr08A", "YX68BTNzUpc", "YX68BTNzUpc", "YX68BTNzUpc", "YX68B...
iclr_2022_Yr_1QZaRqmv
Decision Tree Algorithms for MDP
Decision trees are robust modeling tools in machine learning with human-interpretable representations. The curse of dimensionality of Markov Decision Processes (MDPs) makes exact solution methods computationally intractable in practice for large state-action spaces. In this paper, we show that even for problems with a large state space, when the solution policy of the MDP can be represented by a tree-like structure, our proposed algorithm retrieves a tree of the solution policy of the MDP in computationally tractable time. Our algorithm uses a tree-growing strategy to incrementally disaggregate the state space, solving smaller MDP instances with linear programming. These ideas can be extended to experience-based RL problems as an alternative to black-box policies.
Reject
The reviewers identified missing comparisons to existing baselines (deep RL and other tree-based RL methods) as well as simplistic experiments as the main limitations of the paper. While the authors could address some of the issues raised by the reviewers, the missing comparisons and overly simple experiments remain. I therefore agree with (most of) the reviewers that the paper cannot be published in its current state.
val
[ "GT3Xoezw2AM", "43TMjbM6Sy", "C8CyUWzZux", "c1XAdmn3Yti", "zIGtGXGgWv6", "RBJzfEPxcia", "EMlB0akIot8", "3tTqDd57HHq" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors propose a method to solve MDPs with large state-space and moderately sized action-space, where a decision tree can represent the optimal policy. \nThe decision tree is learned by iteratively partitioning the state space such that the approximation of the policy by the expanded decision tree improves th...
[ 6, 5, -1, -1, -1, -1, 3, 3 ]
[ 3, 4, -1, -1, -1, -1, 4, 2 ]
[ "iclr_2022_Yr_1QZaRqmv", "iclr_2022_Yr_1QZaRqmv", "GT3Xoezw2AM", "3tTqDd57HHq", "43TMjbM6Sy", "EMlB0akIot8", "iclr_2022_Yr_1QZaRqmv", "iclr_2022_Yr_1QZaRqmv" ]
iclr_2022_zfmB5vgfaCt
TransSlowDown: Efficiency Attacks on Neural Machine Translation Systems
Neural machine translation (NMT) systems have received massive attention from academia and industry. Despite a rich set of work focusing on improving NMT systems’ accuracy, the less explored topic of efficiency is also important to NMT systems because of the real-time demand of translation applications. In this paper, we observe an inherent property of NMT systems: their efficiency is related to the output length instead of the input length. This property results in a new attack surface for NMT systems—an adversary can slightly change inputs to incur a significant amount of redundant computation in NMT systems. Such abuse of NMT systems’ computational resources is analogous to denial-of-service attacks. Abuse of NMT systems’ computing resources will affect the service quality (e.g., prolong responses to users’ translation requests) and can even make the translation service unavailable (e.g., by running out of resources such as the batteries of mobile devices). To further the understanding of such efficiency-oriented threats and raise the community’s concern about the efficiency robustness of NMT systems, we propose a new attack approach, TranSlowDown, to test the efficiency robustness of NMT systems. To demonstrate the effectiveness of TranSlowDown, we conduct a systematic evaluation on three publicly available NMT systems: Google T5, Facebook Fairseq, and the Helsinki-NLP translator. The experimental results show that TranSlowDown increases NMT systems’ response latency by up to 1232% and 1056% on an Intel CPU and an Nvidia GPU respectively, by inserting only three characters into existing input sentences. Our results also show that the adversarial examples generated by TranSlowDown can consume more than 30 times the battery power of the original benign example. These results suggest that further research is required to protect NMT systems against efficiency-oriented threats.
Reject
The paper studies a novel problem and proposes an interesting algorithm. That said, the reviewers question the motivation of the paper, i.e., whether this method presents a viable attack on existing MT systems: the attack is not black-box, and MT systems often have an output-length threshold beyond which the output is trimmed. Given the motivational concerns, I recommend that the paper be revised and resubmitted to other venues.
train
[ "ohdqaObqKdC", "2D8h2zqT-ft", "2b8j-EIARkd", "mWyNMOX5kLH", "J9sIe2Na_jr", "V4GCz4JuZKn", "p_UyKCZ2gm", "h5gbQwiJKf", "lTB9JBrm6yN" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The general response to reviewers' comments.\nWe add some experiments to address the reviewers' concerns, the details are put in the following link.\nhttps://openreview.net/attachment?id=zfmB5vgfaCt&name=supplementary_material", " We thank the reviewer for the feedback on our work. The response to all the queri...
[ -1, -1, -1, -1, -1, 3, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, 4, 5, 4, 4 ]
[ "iclr_2022_zfmB5vgfaCt", "V4GCz4JuZKn", "p_UyKCZ2gm", "h5gbQwiJKf", "lTB9JBrm6yN", "iclr_2022_zfmB5vgfaCt", "iclr_2022_zfmB5vgfaCt", "iclr_2022_zfmB5vgfaCt", "iclr_2022_zfmB5vgfaCt" ]
iclr_2022_dn4B7Mes2z
The Low-Rank Simplicity Bias in Deep Networks
Modern deep neural networks are highly over-parameterized compared to the data on which they are trained, yet they often generalize remarkably well. A flurry of recent work has asked: why do deep networks not overfit to their training data? In this work, we make a series of empirical observations that investigate the hypothesis that deeper networks are inductively biased to find solutions with lower-rank embeddings. We conjecture that this bias exists because the volume of functions that map to low-rank embeddings increases with depth. We show empirically that our claim holds true on finite-width linear and non-linear models and show that these are the solutions that generalize well. We then show that the low-rank simplicity bias exists even after training, using a wide variety of commonly used optimizers. We find this phenomenon to be resilient to initialization, hyper-parameters, and learning methods. We further demonstrate how linear over-parameterization of deep non-linear models can be used to induce low-rank bias, improving generalization performance without changing the effective model capacity. Practically, we demonstrate that simply linearly over-parameterizing standard models at training time can improve performance on image classification tasks, including ImageNet.
Reject
This paper experimentally investigates the inductive bias of deep neural networks toward producing low-rank embeddings of data, which is important for explaining why over-parameterized DNNs can generalize. In particular, the paper empirically finds that deeper networks are more likely to produce lower-rank embeddings, through thorough numerical experiments with different network architectures, hyperparameters, and so on. The authors also propose a linear over-parameterization technique to induce low-rank bias and empirically justify its effectiveness. Overall, this paper is well written, and the numerical experiments are carefully executed. However, the main drawback of the paper is that the low-rank inductive bias itself is a well-known phenomenon, and this paper provides a kind of additional confirmation of it. I acknowledge that there are several differences from existing papers, but overall the results offer rather limited insight. Indeed, some existing studies gave theoretical analyses to understand "why it happens", but this paper does not give a sufficiently novel insight into the reason. To summarize the decision: this paper lacks deeper insight compared with existing work, although the authors did a good job of executing thorough experiments. Therefore, it is a bit below the acceptance threshold.
train
[ "gupQlJbAGlg", "w8ki5NdMOEy", "i_GOeJ6cUDH", "oXhbOf8jMto", "91kYndL8T1o", "dGXEjKKXxPW", "scA9Sniqgga", "2cT9CZ8brr", "8Akc-LKgE-K", "e4wnbDp5vU" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the feedback. Indeed we missed some prior work on infinite width networks and fixed, non-learned kernels; we appreciate the pointers and will add discussion of these works and the nuances they provide, especially regarding Conjecture 1. However, we think the reviewer has misinterpreted our main find...
[ -1, 6, -1, -1, -1, -1, -1, 5, 5, 5 ]
[ -1, 4, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "e4wnbDp5vU", "iclr_2022_dn4B7Mes2z", "2cT9CZ8brr", "w8ki5NdMOEy", "w8ki5NdMOEy", "2cT9CZ8brr", "8Akc-LKgE-K", "iclr_2022_dn4B7Mes2z", "iclr_2022_dn4B7Mes2z", "iclr_2022_dn4B7Mes2z" ]
iclr_2022_e0TRvNWsVIH
Learning Representation for Bayesian Optimization with Collision-free Regularization
Bayesian Optimization has been challenged by large-scale and high-dimensional datasets, which are common in real-world scenarios. Recent works attempt to handle such input by applying neural networks ahead of the classical Gaussian process to learn a (low-dimensional) latent representation. We show that even with proper network design, such a learned representation often leads to collisions in the latent space: two points with significantly different observations collide in the learned latent space, leading to degraded optimization performance. To address this issue, we propose LOCo, an efficient deep Bayesian optimization framework which employs a novel regularizer to reduce collisions in the learned latent space and encourage the mapping from the latent space to the objective value to be Lipschitz continuous. LOCo takes in pairs of data points and penalizes those that are too close in the latent space relative to their distance in the target space. We provide a rigorous theoretical justification for LOCo by inspecting the regret of this dynamic-embedding-based Bayesian optimization algorithm, in which the neural network is iteratively retrained with the regularizer. Our empirical results further demonstrate the effectiveness of LOCo on several synthetic and real-world benchmark Bayesian optimization tasks.
Reject
In this paper, the problem of identifying a low-dimensional latent space for high-dimensional Bayesian optimization (BO) is considered. In particular, the authors focus on the problem of collision, where different points in the original space become identical in the latent space, and propose a regularization method to avoid this problem. Latent space identification for high-dimensional Bayesian optimization is an interesting problem, and the authors' approach sounds reasonable. However, many reviewers pointed out that the discussion and results in the paper do not provide sufficient evidence for the authors' claims. Therefore, we have to conclude that the paper cannot be accepted at this time.
val
[ "ZALNptbQuS", "PqohPLtIBu_", "WBF1XqdJjdD", "-n5WGjQqB-l" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Despite its success, Gaussian process based Bayesian optimization still struggles in high dimensional search spaces. Current approaches aim to learn an embedding to optimize the objective in a low dimensional continuous latent space. This paper provides evidence that with current approaches, different data points ...
[ 5, 3, 3, 5 ]
[ 4, 5, 4, 3 ]
[ "iclr_2022_e0TRvNWsVIH", "iclr_2022_e0TRvNWsVIH", "iclr_2022_e0TRvNWsVIH", "iclr_2022_e0TRvNWsVIH" ]
iclr_2022_H7Edu1_IZgR
Transformers are Meta-Reinforcement Learners
The transformer architecture and its variants have achieved remarkable success across many machine learning tasks in recent years. This success is intrinsically related to the capability of handling long sequences and the presence of context-dependent weights from the attention mechanism. We argue that these capabilities suit the central role of a Meta-Reinforcement Learning algorithm. Indeed, a meta-RL agent needs to infer the task from a sequence of trajectories. Furthermore, it requires a fast adaptation strategy to adapt its policy to a new task, which can be achieved using the self-attention mechanism. In this work, we present TrMRL (Transformers for Meta-Reinforcement Learning), a meta-RL agent that mimics the memory reinstatement mechanism using the transformer architecture. It associates the recent past of working memories to build an episodic memory recursively through the transformer layers. This memory works as a proxy for the current task, and we condition a policy head on it. We conducted experiments in high-dimensional continuous control environments for locomotion and dexterous manipulation. Results show that TrMRL achieves or surpasses state-of-the-art performance, sample efficiency, and out-of-distribution generalization in these environments.
Reject
At a high level, the novelty of this paper is limited: RL2 with transformers instead of RNNs. The emphasis is then placed on the experimental evaluation. Unfortunately, the reviewers felt that the experimental methodology and results were not strong enough at this stage to warrant publication. During the rebuttal, the reviewers unfortunately did not engage with or discuss the author response, so I do not know what they think of the rebuttal. However, on evaluating the concerns of the reviewers against the updated manuscript, I think the updates do not go far enough to satisfy the concerns raised (experiments + baselines). Therefore, I recommend rejection.
train
[ "nFu4X_rDXop", "5gKBu5bKZLn", "xB42rD2eK88", "St7awz4TKve", "K75dJnURaA", "1aJ822mKnMu", "67UrOXqY8H", "P0I_aPl2yc", "O7J_hZ1wAd4" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer maih, \n\nThank you very much for your review. We found it very helpful since it catches very critical points to understand our work. We worked on a new version to address all concerns raised by the review:\n\nWe rewrote Section 4.3 to replace the vague ideas and anthropomorphism to give place to mo...
[ -1, -1, -1, -1, -1, 5, 3, 5, 3 ]
[ -1, -1, -1, -1, -1, 4, 3, 4, 5 ]
[ "67UrOXqY8H", "O7J_hZ1wAd4", "P0I_aPl2yc", "1aJ822mKnMu", "iclr_2022_H7Edu1_IZgR", "iclr_2022_H7Edu1_IZgR", "iclr_2022_H7Edu1_IZgR", "iclr_2022_H7Edu1_IZgR", "iclr_2022_H7Edu1_IZgR" ]
iclr_2022_TVHS5Y4dNvM
Patches Are All You Need?
Although convolutional networks have been the dominant architecture for vision tasks for many years, recent experiments have shown that Transformer-based models, most notably the Vision Transformer (ViT), may exceed their performance in some settings. However, due to the quadratic runtime of the self-attention layers in Transformers, ViTs require the use of patch embeddings, which group together small regions of the image into single input features, in order to be applied to larger image sizes. This raises a question: Is the performance of ViTs due to the inherently-more-powerful Transformer architecture, or is it at least partly due to using patches as the input representation? In this paper, we present some evidence for the latter: specifically, we propose the ConvMixer, an extremely simple model that is similar in spirit to the ViT and the even-more-basic MLP-Mixer in that it operates directly on patches as input, separates the mixing of spatial and channel dimensions, and maintains equal size and resolution throughout the network. In contrast, however, the ConvMixer uses only standard convolutions to achieve the mixing steps. Despite its simplicity, we show that the ConvMixer outperforms the ViT, MLP-Mixer, and some of their variants for similar parameter counts and data set sizes, in addition to outperforming classical vision models such as the ResNet. Our code is available at https://github.com/tmp-iclr/convmixer.
Reject
This paper observes that a fully-convolutional model in the style of recent MLP-Mixer and ViT variants can have surprisingly good initial performance. As this paper attracted a certain amount of attention, three expert reviewers provided very detailed and serious comments, and two actively engaged in author discussions. The AC also carefully read the paper as well as all discussion threads. The AC agrees the authors should not be penalized for not achieving the best performance, nor for not comparing with very recent work. The main legitimate critiques, however, focus on three aspects: (1) over-claimed contributions; (2) experimental solidness/competitiveness; and (3) writing completeness/clarity. First, this paper established an interesting ablation experiment: a very simple model, using only standard convolutions to achieve the mixing steps, can roughly "do the work". However, the AC disagrees that this is a very "surprisingly new" result on top of MLP-Mixer: given that convolutions are increasingly re-injected into ViTs to gain the vision inductive bias, their similar role in MLP-Mixer should be expected too. Moreover, as the reviewers generally agreed, the paper title may over-claim: the authors cannot yet directly prove the concept that "the patch is the most critical component". The authors later agreed and changed some confusing wording, which is a good move (but also makes their contribution even less obvious). Second, the method does not achieve results competitive enough with others to justify its merit (simplicity alone is good to have, but insufficient to justify a strong work). Importantly, two reviewers pointed out that the model throughput is much worse than the competitors'. The AC also noticed that the comparison was not very rigorous; e.g., comparing ConvMixer with patch size 7 against DeiT-B with patch size 16 does not support a fair, informative conclusion.
The CIFAR-10 results alone did not provide strong support and were later de-emphasized by the authors too. Third, while NOT the main reason for rejection, the AC personally suggests that the authors enrich the main text and remove the "A note on paper length" paragraph. The authors intentionally kept the paper unusually short; reviewers generally disliked this idea. Being an innovative writer is good, but very relevant details and discussions were left in the supplemental as a result. In particular, the AC agrees that the whole of Section A and part of Section B of the supplemental should have been in the main paper at the very least. In summary, the authors strive to tell an interesting story, but it is not yet a well-settled story. The experiments are not solid enough to support the bold claims. The authors are encouraged to improve the work further by taking the reviewer comments into account.
train
[ "wtX7HGDvFC5", "-t1fYnsoIML", "0Yt9VKbY1NH", "lgjjIf95OY", "orQ5gJ8jus", "2teckI3dvB", "nnhBDYy-J64", "NXILJzE-A_", "n2rCW-_DKuE", "d2cSsCK19aW", "NuSO0HtAgo", "Pqyck1ucZ-t", "yqw8s7YGWh1", "cuz4CjRDRGj", "JIN0diw8_uu", "W3Llc3ECh8E", "IsJ-m-iyalt", "3gFiNcHFRzi", "DFwPFtdXPHY", ...
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "public", "public", "author", "public", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This work proposed a new design for the image classification task named ConvMixer, which brings the idea from CNN to Visual Transformer. Unlike previous ConvNets, Transformer-based models, and MLP-based models, ConvMixer simply applies depth-wise (with skip connection) and point-wise convolutions on the patches. T...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 8 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ "iclr_2022_TVHS5Y4dNvM", "orQ5gJ8jus", "IsJ-m-iyalt", "iclr_2022_TVHS5Y4dNvM", "2teckI3dvB", "W3Llc3ECh8E", "NXILJzE-A_", "iclr_2022_TVHS5Y4dNvM", "d2cSsCK19aW", "NuSO0HtAgo", "iclr_2022_TVHS5Y4dNvM", "yqw8s7YGWh1", "ocgid221QSc", "wtX7HGDvFC5", "cuz4CjRDRGj", "JIN0diw8_uu", "3gFiNcH...
iclr_2022_w8HXzn2FyKm
Finite-Time Error Bounds for Distributed Linear Stochastic Approximation
This paper considers a novel multi-agent linear stochastic approximation algorithm driven by Markovian noise and general consensus-type interaction, in which each agent evolves according to its local stochastic approximation process which depends on the information from its neighbors. The interconnection structure among the agents is described by a time-varying directed graph. While the convergence of consensus-based stochastic approximation algorithms when the interconnection among the agents is described by doubly stochastic matrices (at least in expectation) has been studied, less is known about the case when the interconnection matrix is simply stochastic. For any uniformly strongly connected graph sequences whose associated interaction matrices are stochastic, the paper derives finite-time bounds on the mean-square error, defined as the deviation of the output of the algorithm from the unique equilibrium point of the associated ordinary differential equation. For the case of interconnection matrices being stochastic, the equilibrium point can be any unspecified convex combination of the local equilibria of all the agents in the absence of communication. Both the cases with constant and time-varying step-sizes are considered. In the case when the convex combination is required to be a straight average and interaction between any pair of neighboring agents may be uni-directional, so that doubly stochastic matrices cannot be implemented in a distributed manner, the paper proposes a push-type distributed stochastic approximation algorithm and provides its finite-time bounds for the performance by leveraging the analysis for the consensus-type algorithm with stochastic matrices.
Reject
This paper studies a stochastic approximation framework for multi-agent consensus algorithms driven by Markovian noise in the spirit of the classical work of Kushner & Yin. The authors' main result is that, modulo a series of assumptions (some conceptual, some technical), the generated sequence of play reaches a consensus, and they also estimate the rate of this convergence. Even though the paper's premise is interesting, the reviewers identified several weaknesses in the paper, and the reviewers who raised them were not convinced by the authors' replies (especially regarding the relative lack of numerical evidence to demonstrate the claims that are not supported by the theory, such as the role of Assumption 6). After my own reading of the paper and the discussion with the reviewers during the rebuttal phase, I concur that this version of the paper does not clear the bar for acceptance; at the same time, I would encourage the authors to submit a suitably revised version at the next opportunity.
train
[ "4pfvJwRi2V", "4D9fadhgIdP", "sp6YCCLqMz5", "5FNpb8WG1Pg", "rAewWd8SN2c", "Xghi-gfqGZP", "TZvuofTxD2", "trmfZMv5bBW", "vIYAEOPP9Pd", "OZarjj7kzP_", "Mt6tsJov3Ev", "gbkeEmmTTb8", "8X5XQ5Q8dbm", "fuYAUpuWYkG", "XXNYqp29z20", "NfeOFJJOw7w", "kq1yhr_1ie", "-WBCYDnB3ZE", "OLhEcHw5S-c"...
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", ...
[ " In the revised manuscript, we have highlighted the changes in blue for the typos. We found the remaining concerns are easy to address as they had been clearly stated in the initial submission (see our responses), and thus feel unnecessary to make changes. It seems that the reviewer did not carefully read our pape...
[ -1, -1, -1, -1, 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5 ]
[ -1, -1, -1, -1, 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "4D9fadhgIdP", "sp6YCCLqMz5", "5FNpb8WG1Pg", "NfeOFJJOw7w", "iclr_2022_w8HXzn2FyKm", "iclr_2022_w8HXzn2FyKm", "trmfZMv5bBW", "vIYAEOPP9Pd", "OZarjj7kzP_", "XXNYqp29z20", "-WBCYDnB3ZE", "8X5XQ5Q8dbm", "rAewWd8SN2c", "Xghi-gfqGZP", "6OcJyhUJA0Y", "OLhEcHw5S-c", "iclr_2022_w8HXzn2FyKm",...
iclr_2022_e-JV6H8lwpl
Subspace State-Space Identification and Model Predictive Control of Nonlinear Dynamical Systems Using Deep Neural Network with Bottleneck
A novel nonlinear system identification method that produces state estimator and predictor directly usable for model predictive control (MPC) is proposed in this paper. The main feature of the proposed method is that it uses a neural network with a bottleneck layer between the state estimator and predictor to represent the input-output dynamics, and it is proven that the state of the dynamical system can be extracted from the bottleneck layer based on the observability of the target system. The training of the network is shown to be a natural nonlinear extension of the subspace state-space system identification method established for linear dynamical systems. This correspondence gives interpretability to the resulting model based on linear control theory. The usefulness of the proposed method and the interpretability of the model are demonstrated through an illustrative example of MPC.
Reject
The paper uses neural networks for system identification. The novelty of its contributions seems marginal, and its usefulness is not sufficiently validated experimentally.
train
[ "jxRmQODtX0v", "zG5nrUPAHPL", "-6bfKo2pHK-", "TbgXFMDWjp", "cmnfTETkKZd", "Rt-h67a5jQu", "NHdG8pzNJJG", "8S0M9nbwmIM", "0N7o2DL53jS", "SUEDaEv3zyr", "6C-1qrvRSm", "nfXUNnFZ4j2", "eaiOC3ln7dx", "sQQmxXpYxUp", "c5-vQpUHOn" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for taking time out of your busy schedule to respond.\n\nDisturbance acts directly on the state of the target system, and if the disturbance is stochastic, observers can't track its effect before it is observed via the system's output. Thus, no observer converges to zero error in the presence of persist...
[ -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, 2, 3 ]
[ "zG5nrUPAHPL", "-6bfKo2pHK-", "TbgXFMDWjp", "Rt-h67a5jQu", "eaiOC3ln7dx", "8S0M9nbwmIM", "iclr_2022_e-JV6H8lwpl", "SUEDaEv3zyr", "c5-vQpUHOn", "6C-1qrvRSm", "NHdG8pzNJJG", "sQQmxXpYxUp", "iclr_2022_e-JV6H8lwpl", "iclr_2022_e-JV6H8lwpl", "iclr_2022_e-JV6H8lwpl" ]
iclr_2022_ms7xJWbf8Ku
Efficient Packing: Towards 2x NLP Speed-Up without Loss of Accuracy for BERT
We find that at sequence length 512, padding tokens represent in excess of 50% of the Wikipedia dataset used for pretraining BERT (Bidirectional Encoder Representations from Transformers). Therefore, by removing all padding, we achieve a 2x speed-up in terms of sequences/sec. To exploit this characteristic of the dataset, we develop and contrast two deterministic packing algorithms. Both algorithms rely on the assumption that sequences are interchangeable, and therefore packing can be performed on the histogram of sequence lengths rather than per sample. This transformation of the problem leads to algorithms which are fast and have linear complexity in dataset size. The shortest-pack-first histogram-packing (SPFHP) algorithm determines the packing order for the Wikipedia dataset of over 16M sequences in 0.02 seconds. The non-negative least-squares histogram-packing (NNLSHP) algorithm converges in 28.4 seconds but produces solutions which are more depth-efficient, managing to achieve near-optimal packing by combining a maximum of 3 sequences in one sample. Using the dataset with multiple sequences per sample requires adjusting the model and the hyperparameters. We demonstrate that these changes are straightforward to implement and have relatively little impact on the achievable performance gain on modern hardware. Finally, we pretrain BERT-Large using the packed dataset, demonstrating no loss of convergence and the desired 2x speed-up.
Reject
This paper proposes to re-organize the training data in such a way that padding can be avoided. The novelty is somewhat limited and the results are what one would expect - a nice speed-up of 2x but nothing really game-changing. While the reviewer scores straddle the decision boundary, nobody is very strongly supportive of acceptance and the positive reviews actually have lower confidence.
train
[ "BddrN5HDSu", "eKOiIMSF__4", "nia2DQM9r6x", "IGzAGE2tip", "MwyEuEls90", "5x_st7PNPMI", "rxtnQQg9GWS", "m-R54Nr20p", "7l_5Zsq9VEw", "Qr9uG0glyY3", "oCMi5UxudBju", "lD8ihidChhJp", "mUYNnPcjBvw", "S7lxOABY_x", "bXaSgJRFBZ6", "gw282yDHkCk", "fIn25A3R2r", "CDOuo6QSsJ", "mWUg8bqRlbg", ...
[ "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "...
[ " Today, the code for running our algorithm with TensorFlow on the Habana Gaudi chip has been released. The acceleration showed the expected improvement of around 2x. \n\nhttps://github.com/mlcommons/training_results_v1.1/tree/main/Intel-HabanaLabs/benchmarks/bert/implementations/TensorFlow/nlp/bert", " 1. The co...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, 5, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 3, 4 ]
[ "Qr9uG0glyY3", "nia2DQM9r6x", "gw282yDHkCk", "MwyEuEls90", "g2ypKMF7JP", "fIn25A3R2r", "mUYNnPcjBvw", "Qr9uG0glyY3", "oCMi5UxudBju", "fIn25A3R2r", "CDOuo6QSsJ", "iclr_2022_ms7xJWbf8Ku", "iclr_2022_ms7xJWbf8Ku", "CDOuo6QSsJ", "iclr_2022_ms7xJWbf8Ku", "iclr_2022_ms7xJWbf8Ku", "iclr_202...
iclr_2022_DXU0DQUDWLA
Disentangling One Factor at a Time
With the overabundance of data for machines to process in the current state of machine learning, the discovery, organization, and interpretation of data become a critical need. Specifically needed are unsupervised methods that do not require laborious labeling by human observers. One promising approach to this endeavour is \textit{Disentanglement}, which aims at learning the underlying generative latent factors of the data. The factors should also be as human-interpretable as possible for the purposes of data discovery. \textit{Unsupervised disentanglement} is a particularly difficult open subset of the problem, which asks the network to learn the generative factors on its own, without any link to the true labels. This problem area is currently dominated by two approaches: Variational Autoencoder and Generative Adversarial Network approaches. While GANs have good performance, they suffer from difficulty in training and mode collapse; VAEs, while stable to train, do not perform as well as GANs in terms of interpretability. In current state-of-the-art versions of these approaches, the networks require the user to specify the number of factors expected in the data. This limitation prevents "true" disentanglement, in the sense that learning how many factors there are is actually one of the tasks we wish the network to solve. In this work we propose a novel network for unsupervised disentanglement that combines the stable training of the VAE with the interpretability offered by GANs, without the training instabilities. We aim to disentangle interpretable latent factors "one at a time", or OAT factor learning, making no prior assumptions about the number or distribution of factors, in a completely unsupervised manner. We demonstrate its quantitative and qualitative effectiveness by evaluating the latent representations learned on two benchmark datasets, DSprites and CelebA.
Reject
This paper presents a method for unsupervised learning of disentangled representations by first training a VAE with an entangled set of latents, and then sequentially learning disentangled latent variables one at a time from the entangled initial VAE latent space. On several toy disentanglement benchmarks, the method is shown to perform competitively with previous VAE and GAN approaches. There were several concerns from reviewers around the clarity and description of the proposed one-factor-at-a-time (OAT) training procedure. While the updated draft addressed several typos and some clarity issues, multiple reviewers continued to find the method description problematic. There were additional concerns around the viability of the method on real-world datasets where the number of factors is not known, and, as the authors stated, the proposed method can also result in one factor of variation being encoded into multiple latent variables, which hurts performance on many of the disentanglement metrics. The addition of the CelebA downstream task evaluation begins to address the concern about real-world data, but more rigorous experiments (including more description of how models were selected) and discussion of the limitations of the proposed method are needed. There is also no theoretical motivation as to why the proposed intervention-based factor learning algorithm should recover the ground-truth factors. Given the concerns over experimental results, clarity, and lack of theoretical motivation, I suggest rejecting this paper in its current form.
train
[ "9OK-edxqhiw", "XnlXYJO94sf", "_6WgL3RU5S9", "NsD9AsbxFbr", "IOF0m6LAhe", "wzVLE-EgbVT", "n96f_olLcjt", "W3b99M6Gwt_", "epga2octOA", "xyIURTt4ZCY", "t1_zJNvW9Om", "-wykLma0mQd", "8D0VS-N95fD", "KHDJTNk_fXO", "THenzoxfc9b" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I appreciate the authors' effort in addressing my questions. The authors’ rebuttal clarifies some of my concerns. As the other reviewers point out and the authors also agree, the paper needs significant improvement. Therefore, I will keep my score unchanged. ", "The paper proposes a new approach for training di...
[ -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 4 ]
[ "epga2octOA", "iclr_2022_DXU0DQUDWLA", "NsD9AsbxFbr", "wzVLE-EgbVT", "XnlXYJO94sf", "IOF0m6LAhe", "THenzoxfc9b", "n96f_olLcjt", "xyIURTt4ZCY", "KHDJTNk_fXO", "8D0VS-N95fD", "iclr_2022_DXU0DQUDWLA", "iclr_2022_DXU0DQUDWLA", "iclr_2022_DXU0DQUDWLA", "iclr_2022_DXU0DQUDWLA" ]
iclr_2022_9rKTy4oZAQt
A Risk-Sensitive Policy Gradient Method
Standard deep reinforcement learning (DRL) agents aim to maximize expected reward, considering collected experiences equally in formulating a policy. This differs from human decision-making, where gains and losses are valued differently and outlying outcomes are given increased consideration. It also wastes an opportunity for the agent to modulate behavior based on distributional context. Several approaches to distributional DRL have been investigated, with one popular strategy being to evaluate the projected distribution of returns for possible actions. We propose a more direct approach, whereby the distribution of full-episode outcomes is optimized to maximize a chosen function of its cumulative distribution function (CDF). This technique allows for outcomes to be weighed based on relative quality, does not require modification of the reward function to modulate agent behavior, and may be used for both continuous and discrete action spaces. We show how to achieve an unbiased estimate of the policy gradient for a broad class of CDF-based objectives via sampling, subsequently incorporating variance reduction measures to facilitate effective on-policy learning. We use the resulting approach to train agents with different “risk profiles” in penalty-based formulations of six OpenAI Safety Gym environments, finding that moderate emphasis on improvement in training scenarios where the agent performs poorly generally improves agent behavior. We interpret and explore this observation, which leads to improved performance over the widely-used Proximal Policy Optimization algorithm in all environments tested.
Reject
This paper introduces a new approach to risk-sensitive RL by using an objective that depends on the full return distribution and can apply a weight to the resulting trajectory. The reviewers thought that focusing on more general and expressive objectives for RL is well motivated. However, they had a number of concerns about the current state of the paper, including its clarity in a number of sections and its relation to other work in risk-sensitive RL. The authors provided thoughtful responses, but some of these concerns lingered.
test
[ "USrbGj-kPG", "xvJBCDKD46t", "Gr2--ai0Ysw", "nEAU43V63hY", "2FLOQ80zNHd", "81WXj8knoa-", "fB4MPmnN6kh", "KPy3FPnhadn", "nJFLQ9KlfF0", "k580qmbYYT", "F9wfV4wdcM2", "WLtfVWpGV9E", "w62l2_scnYo", "UJPovbcsm7S" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper considers a generalization of the policy gradient method to optimize for arbitrary utility functions with weightings that depend on the entire CDF (rather than the expected reward). This generalization has two aspects: (1) a utility function on top of the trajectory reward and (2) a weighting function f...
[ 6, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ 2, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2022_9rKTy4oZAQt", "fB4MPmnN6kh", "iclr_2022_9rKTy4oZAQt", "fB4MPmnN6kh", "81WXj8knoa-", "iclr_2022_9rKTy4oZAQt", "k580qmbYYT", "UJPovbcsm7S", "USrbGj-kPG", "F9wfV4wdcM2", "Gr2--ai0Ysw", "w62l2_scnYo", "iclr_2022_9rKTy4oZAQt", "iclr_2022_9rKTy4oZAQt" ]
iclr_2022_7QDPaL-Yl8U
LPRules: Rule Induction in Knowledge Graphs Using Linear Programming
Knowledge graph (KG) completion is a well-studied problem in AI. Rule-based methods and embedding-based methods form two of the solution techniques. Rule-based methods learn first-order logic rules that capture existing facts in an input graph and then use these rules for reasoning about missing facts. A major drawback of such methods is the lack of scalability to large datasets. In this paper, we present a simple linear programming (LP) model to choose rules from a list of candidate rules and assign weights to them. For smaller KGs, we use simple heuristics to create the candidate list. For larger KGs, we start with a small initial candidate list, and then use standard column generation ideas to add more rules in order to improve the LP model objective value. To foster interpretability and generalizability, we limit the complexity of the set of chosen rules via explicit constraints, and tune the complexity hyperparameter for individual datasets. We show that our method can obtain state-of-the-art results for three out of four widely used KG datasets, while taking significantly less computing time than other popular rule learners including some based on neuro-symbolic methods. The improved scalability of our method allows us to tackle large datasets such as YAGO3-10.
Reject
The paper shows how to make use of a linear program for extracting logical rules for knowledge graph completion. Overall, the reviewers and I agree that this is an interesting and important direction for research. Moreover, the presented approach shows good performance with rather small sets of rules extracted. However, all reviewers point out that the related work is not well discussed. While the authors have improved the related work sections during the rolling discussion, overall the positioning of the new method still has to be improved, including a better empirical comparison across different datasets. Overall, we would like to encourage the authors to polish their line of research based on the feedback from the reviews.
train
[ "AEgSbyM1mvm", "6lpn_9qcSpS", "j-A4jJTOQI", "nRC2z4hOuoG", "rb-EQJ6S-Po", "v9Fnz87lKc", "o9jrdyZZY5f", "88AhaLHzXt7", "OaO-C7OG9Q3", "98-KzVNbkr8", "074YsdVi_Xa", "YNvYruXaBu" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a simple method to generate logic rules, where rules are generated by a shortest-path heuristic and rule weights by solving a linear program.\n First, the paper does have its strong points:\n\nS1. Simple, easy to understand approach.\n\nS2. Only requires shortest-path computations, rule evaluat...
[ 3, 6, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6 ]
[ 5, 3, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5 ]
[ "iclr_2022_7QDPaL-Yl8U", "iclr_2022_7QDPaL-Yl8U", "88AhaLHzXt7", "rb-EQJ6S-Po", "v9Fnz87lKc", "o9jrdyZZY5f", "98-KzVNbkr8", "6lpn_9qcSpS", "YNvYruXaBu", "AEgSbyM1mvm", "iclr_2022_7QDPaL-Yl8U", "iclr_2022_7QDPaL-Yl8U" ]
iclr_2022_WZeI0Vro15y
Generative Posterior Networks for Approximately Bayesian Epistemic Uncertainty Estimation
Ensembles of neural networks are often used to estimate epistemic uncertainty in high-dimensional problems because of their scalability and ease of use. These methods, however, are expensive to sample from as each sample requires a new neural network to be trained from scratch. We propose a new method, Generative Posterior Networks (GPNs), a generative model that, given a prior distribution over functions, approximates the posterior distribution directly by regularizing the network towards samples from the prior. This allows our method to quickly sample from the posterior and construct confidence bounds. We prove theoretically that our method indeed approximates the Bayesian posterior and show empirically that it improves epistemic uncertainty estimation over competing methods.
Reject
This article proposes a novel uncertainty quantification method, formulating it as a Bayesian inference problem. Instead of training multiple ensemble models through MAP optimisation, as in ensemble methods, the proposed approach tries to learn a mapping between the prior distribution and the posterior distribution of model parameters. This avoids the complex training of ensemble models and achieves better efficiency. The approach is novel, and the problem is important. The paper however suffers from a number of weaknesses: * Some theoretical results would need to be made mathematically more rigorous * The presentation is unclear and confusing in some places * Empirical results are not reproducible due to the lack of details Although the authors clarified some of the points raised by reviewers in their response, the paper in its current form is not ready for publication, and I recommend rejection.
train
[ "9Yb6_Ug6GWf", "2Rd0C2xObZ7", "o9jdC0ut_Rs", "4foDlPwbwXy", "l4DVF7kRI3g", "VfJgQVndHW", "tWlJeWXyiJf", "aOAC0MJeHPj", "OvptEjYYtId", "T87L6JZBnR2", "1vkKM5PYpzE", "Sox07OXUkO", "MHXML-HklcD", "l1h-RJlj0ED", "TXlSCOB45uy", "5oMFmZ-aWH6", "JiczInBpnh", "9pSVYi7xIPH" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In this work, the authors introduce generative posterior networks (GPN). GPN is a single network that approximately produces samples from posterior over neural network parameters; this is in contrast to the standard approach of training N individual neural networks. # Strengths\n\nThis paper provides a nifty alter...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 4 ]
[ "iclr_2022_WZeI0Vro15y", "tWlJeWXyiJf", "aOAC0MJeHPj", "9Yb6_Ug6GWf", "JiczInBpnh", "5oMFmZ-aWH6", "OvptEjYYtId", "Sox07OXUkO", "T87L6JZBnR2", "1vkKM5PYpzE", "MHXML-HklcD", "9pSVYi7xIPH", "9Yb6_Ug6GWf", "JiczInBpnh", "5oMFmZ-aWH6", "iclr_2022_WZeI0Vro15y", "iclr_2022_WZeI0Vro15y", ...
iclr_2022_hJk11f5yfy
Encoding Hierarchical Information in Neural Networks Helps in Subpopulation Shift
Over the past decade, deep neural networks have proven to be adept at image classification tasks, often even surpassing humans in terms of accuracy. However, standard neural networks often fail to understand the concept of hierarchical structures and dependencies among different classes for vision-related tasks. Humans, on the other hand, seem to learn categories conceptually, progressively growing from understanding high-level concepts down to granular levels of categories. One of the issues arising from the inability of neural networks to encode such dependencies within their learned structure is that of subpopulation shift -- where models are queried with novel unseen classes taken from a shifted population of the training set categories. Since the neural network treats each class as independent of all others, it struggles to categorize shifting populations that are dependent at higher levels of the hierarchy. In this work, we study the aforementioned problems through the lens of a novel conditional supervised training framework. We tackle subpopulation shift by a structured learning procedure that incorporates hierarchical information conditionally through labels. Furthermore, we introduce a notion of graphical distance to model the catastrophic effect of mispredictions. We show that learning in this structured hierarchical manner results in networks that are more robust against subpopulation shifts, with an improvement of around 2% in terms of accuracy and around 8.5% in terms of graphical distance over standard models on subpopulation shift benchmarks.
Reject
The paper studies subpopulation shift in object recognition when classes obey a hierarchy. It proposes an architecture, a relevant metric and a dataset (a subset of ImageNet). The problem of classification in hierarchical label spaces is important and of great interest, and the effect on domain shift is interesting. Naturally, this problem has been studied quite intensively over the years. Reviewers were concerned that the current proposal was not placed well enough in the context of previous literature, both in terms of the method and in terms of experimental results. Also, the paper would be strengthened if it provided more theoretical analysis of how the hierarchy helps with the domain shift. The authors addressed some of these issues in the rebuttal, adding references and highlighting the differences from previous methods, but the paper would need more time for proper experimental comparisons with previous work and subsequent analysis. As a result, the paper is still not ready for acceptance to ICLR in its current form.
train
[ "E1rliAk-sN", "DG9eoG53UM9", "2cKU2AkVUpF", "0RkH716Vjgy", "zYAglqsG0w", "vFW6JRa_GTl", "7pUzG9goSRF", "krt_H82Z2O_", "7oShBkxrmZf" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for providing additional references and explanation for the class-imbalanced case. However, my concerns regarding the per-level parameter sharing and experimental results were largely unresolved. Therefore, I'm keeping my original rating.", " I appreciate the clarifications by ...
[ -1, -1, -1, -1, -1, -1, 5, 3, 5 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "2cKU2AkVUpF", "0RkH716Vjgy", "krt_H82Z2O_", "7oShBkxrmZf", "7pUzG9goSRF", "iclr_2022_hJk11f5yfy", "iclr_2022_hJk11f5yfy", "iclr_2022_hJk11f5yfy", "iclr_2022_hJk11f5yfy" ]
iclr_2022_T_p2GaXuGeA
Local Calibration: Metrics and Recalibration
Probabilistic classifiers output confidence scores along with their predictions, and these confidence scores should be calibrated, i.e., they should reflect the reliability of the prediction. Confidence scores that minimize standard metrics such as the expected calibration error (ECE) accurately measure the reliability on average across the entire population. However, it is in general impossible to measure the reliability of an individual prediction. In this work, we propose the local calibration error (LCE) to span the gap between average and individual reliability. For each individual prediction, the LCE measures the average reliability of a set of similar predictions, where similarity is quantified by a kernel function on a pretrained feature space and by a binning scheme over predicted model confidences. We show theoretically that the LCE can be estimated sample-efficiently from data, and empirically find that it reveals miscalibration modes that are more fine-grained than the ECE can detect. Our key result is a novel local recalibration method LoRe, to improve confidence scores for individual predictions and decrease the LCE. Experimentally, we show that our recalibration method produces more accurate confidence scores, which improves decision making and fairness on classification tasks using both image and tabular data.
Reject
The reviewers all generally find the paper both well motivated in addressing an important challenge and well written. However, there is quite a bit of hesitation around whether the proposed metric is convincing enough as an approach to measure local calibration. Reviewers 76PS and 784d's concerns around the choice of feature map and associated hyperparameters remain unaddressed, and I agree with their concerns. There is no clear understanding of what constitutes a "good" feature map, which makes the metric quite difficult to use, whether as a benchmark of ML methods or for general application. I recommend the authors use the reviewers' feedback to enhance their preprint should they aim to submit to a later venue.
train
[ "ialjF8PJbHr", "DQBI71D-0h", "3MqaYgI9oy", "kHmxpI0zfxa", "g43Ao-FhuBz", "vOXrRQGNecT", "todEdj2-hTX", "OFaMq3-skVD", "g3WYDJTLJQ7", "2espqSno6F", "sXFo6ycIr8C", "phPCVt78wMU", "9_4PstEB6KC", "fevOVRSNN6F", "P18G5ewrHqO", "3F3RXalhb1", "Tud0wxVutgr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the informative comment and I appreciate the effort to improve the experiments section.\n\n*\"When all protected groups are explicitly specified, existing methods can ensure fairness, but because of the sheer number of possible minority groups, practitioners may not be able to enumerate all of them. On...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4, 5, 4 ]
[ "2espqSno6F", "g3WYDJTLJQ7", "sXFo6ycIr8C", "iclr_2022_T_p2GaXuGeA", "todEdj2-hTX", "9_4PstEB6KC", "OFaMq3-skVD", "Tud0wxVutgr", "phPCVt78wMU", "3F3RXalhb1", "P18G5ewrHqO", "fevOVRSNN6F", "iclr_2022_T_p2GaXuGeA", "iclr_2022_T_p2GaXuGeA", "iclr_2022_T_p2GaXuGeA", "iclr_2022_T_p2GaXuGeA"...
iclr_2022_2d4riGOpmU8
Sequential Covariate Shift Detection Using Classifier Two-Sample Tests
A standard assumption in supervised learning is that the training data and test data are from the same distribution. However, this assumption often fails to hold in practice, which can cause the learned model to perform poorly. We consider the problem of detecting covariate shift, where the covariate distribution shifts but the conditional distribution of labels given covariates remains the same. This problem can naturally be solved using a two-sample test---i.e., testing whether the current test distribution of covariates equals the training distribution of covariates. Our algorithm builds on classifier tests, which train a discriminator to distinguish train and test covariates, and then use the accuracy of this discriminator as a test statistic. A key challenge is that classifier tests assume a fixed set of test covariates is given. In practice, test covariates often arrive sequentially over time---e.g., a self-driving car observes a stream of images while driving. Furthermore, covariate shift can occur multiple times---i.e., the distribution may shift and then shift back later, or shift gradually over time. To address these challenges, our algorithm trains the discriminator online. Furthermore, it evaluates test accuracy using each new covariate before taking a gradient step; this strategy avoids constructing a held-out test set, which can reduce sample efficiency. We prove that this optimization preserves correctness---i.e., our algorithm achieves a desired bound on the false positive rate. In our experiments, we show that our algorithm efficiently detects covariate shifts on ImageNet.
Reject
This paper proposes to repeatedly apply the classifier two-sample tests (proposed by Kim, Ramdas, Singh, Wasserman, in 2016, and developed further by Lopez-Paz, Oquab, in 2017) for the purpose of detecting covariate shift. The authors propose methods to extend the aforementioned tests to a sequential setting. Overall, the reviewers do not lean towards acceptance, and neither do I. Several constructive suggestions are provided by the reviewers; some are summarized below. The authors claim that sequential tests are not desirable in such a setting, and thus choose to pay a multiple testing price by repeatedly applying a batch test. However, sequential tests are in fact applicable (they will control type-1 error) but may have worse power if the alternative is not true at the very start --- but these were entirely dropped from the simulations; in fact, comparing the increased type-1 error of the authors' approach to the increased type-2 error of sequential approaches may be worth clarifying. Perhaps the "right" solution that the authors are looking for could be obtained by converting a sequential test into a sequential changepoint detection algorithm (via repeated application of a sequential test, each started at a new time). Also see "Conformal test martingales for change-point detection" and "Inductive Conformal Martingales for Change-Point Detection" by Vovk et al., which are currently not cited.
train
[ "6Y7xdaOzr1o", "iEwHVvhnYv", "hfcH67qhhu0", "6w_wBAK9Fk", "tde78jJAAu9", "optnhozVYX8", "BIfi98e-fMD", "SITlANQ_qx6", "5vZJLDFn8J_", "E_8v2-Rqkvn", "XNFKDJXbsDf" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "In traditional machine learning, there is a basic assumption that training and test sets are from the same distribution. When this assumption holds, we can expect low prediction error in the test set. However, in the real world, this assumption may be broken. For example, when the images change from daylight to ni...
[ 6, -1, -1, 5, 6, -1, -1, -1, -1, -1, 3 ]
[ 4, -1, -1, 5, 3, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2022_2d4riGOpmU8", "5vZJLDFn8J_", "6w_wBAK9Fk", "iclr_2022_2d4riGOpmU8", "iclr_2022_2d4riGOpmU8", "BIfi98e-fMD", "tde78jJAAu9", "6w_wBAK9Fk", "6Y7xdaOzr1o", "XNFKDJXbsDf", "iclr_2022_2d4riGOpmU8" ]
iclr_2022_LczpUPwCnR1
ES-Based Jacobian Enables Faster Bilevel Optimization
Bilevel optimization (BO) has arisen as a powerful tool for solving many modern machine learning problems. However, due to the nested structure of BO, existing gradient-based methods require second-order derivative approximations via Jacobian- or/and Hessian-vector computations, which can be very costly in practice, especially with large neural network models. In this work, we propose a novel BO algorithm, which adopts an Evolution Strategies (ES) based method to approximate the response Jacobian matrix in the hypergradient of BO, and hence fully eliminates all second-order computations. We call our algorithm ESJ (which stands for the ES-based Jacobian method) and further extend it to the stochastic setting as ESJ-S. Theoretically, we characterize the convergence guarantee and computational complexity for our algorithms. Experimentally, we demonstrate the superiority of our proposed algorithms compared to state-of-the-art methods on various bilevel problems. Particularly, in our experiment on the few-shot meta-learning problem, we meta-learn the twelve million parameters of a ResNet-12 network over the miniImageNet dataset, which evidently demonstrates the scalability of our ES-based bilevel approach and its feasibility in the large-scale setting.
Reject
This paper presents an algorithm for approximating the hypergradient for bilevel optimization using a trick based on evolution strategies. It seems like an interesting approach, somewhat reminiscent of STNs, so it's interesting to see experiments with it. I have a big concern about the proposed justification of the method, namely that each iteration is more efficient than methods based on HVPs. The authors claim that because they only require gradient computations and not HVPs, their method is more efficient. However, as various reviewers point out, the proposed method requires numerous inner optimization runs. By comparison, a method based on unrolled backprop simply requires a single inner run, followed by backprop on the trajectory; hence it should be about as expensive as 2-3 inner optimizations (or less if it is truncated BPTT). Similarly, each HVP has a small multiple of the cost of an inner optimization step, so methods based on HVPs ought to be cheaper unless they're doing quite a lot of HVPs. It's conceivable the proposed method could be more efficient than AID, etc. if each hypergradient estimate is more accurate. However, this isn't shown, and it would seem surprising for an ES-based approximation to be more accurate than the exact gradient. The authors claim in the rebuttal that the efficiency claims aren't based on the theoretical analysis, but rather on the experiments (which use Q=1); however, Section 3.2 still finishes with the conclusion that ESJ is more efficient, which seems problematic. I encourage the authors to formulate their theoretical claims more carefully and to consider the reviewers' other feedback, and I think this could make an interesting submission for the next cycle.
train
[ "0fEfA31s3O7", "qMXnofaZiwY", "Fruy0qovuD", "cTGT-tqJZy_", "NzxG9tM1vLI", "H8mPtNOC6oD", "mFHlLFoSnX", "hIswbCoJFzG", "jG4umOlPbhr", "UmNoHSdrg7S", "ThYzZ_qehS", "X3ZUyKQIH5" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Many thanks for providing the review! In our revision of the paper, we added new experiments in Figures 1 and 3 in Section 4 and in Appendix D, and made various revisions throughout the paper based on all reviewers’ comments. All our changes are highlighted with blue-colored texts. New comments on these changes a...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 3, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "X3ZUyKQIH5", "X3ZUyKQIH5", "ThYzZ_qehS", "ThYzZ_qehS", "UmNoHSdrg7S", "jG4umOlPbhr", "jG4umOlPbhr", "jG4umOlPbhr", "iclr_2022_LczpUPwCnR1", "iclr_2022_LczpUPwCnR1", "iclr_2022_LczpUPwCnR1", "iclr_2022_LczpUPwCnR1" ]
iclr_2022_lkQ7meEa-qv
Learning Neural Acoustic Fields
Our sensory perception of the world is rich and multimodal. When we walk into a cathedral, acoustics as much as appearance inform us of the sanctuary's wide open space. Similarly, when we drop a wineglass, the sound immediately informs us as to whether it has shattered or not. In this vein, while recent advances in learned implicit functions have led to increasingly higher quality representations of the visual world, there have not been commensurate advances in learning auditory representations. To address this gap, we introduce Neural Acoustic Fields (NAFs), an implicit representation that captures how sounds propagate in a physical scene. By modeling the acoustic properties of the scene as a linear time-invariant system, NAFs continuously map all emitter and listener location pairs to an impulse response function that can then be applied to new sounds. We demonstrate that NAFs capture environment reverberations of a scene with high fidelity and can predict sound propagation for novel locations. Leveraging the scene structure learned by NAFs, we also demonstrate improved cross-modal generation of novel views of the scene given sparse visual views. Finally, the continuous nature of NAFs enables potential downstream applications such as sound source localization.
Reject
This paper proposes to use implicit neural representations to model how our surroundings affect the sounds reverberating within. Concretely, the proposed approach can produce impulse responses that capture environment reverberations between any two points in a scene. Reviewers praised the novelty and originality of the idea (and I concur), but raised concerns about the clarity of the writing (especially w.r.t. modelling the phase component), lack of detail, insufficient or inadequate experiments and overclaiming of results. (There were also concerns about overclaiming of contributions, but I am inclined to agree with the authors that this isn't really the case.) The authors have clearly taken the time to try to address these concerns, and I commend them on their willingness to engage with the reviewers' comments and suggestions. While one of the reviewers raised their score to "accept", I am inclined to agree with the other reviewers and recommend rejection. The required degree of revision is substantial, and therefore difficult to assess within a single review cycle. I believe this work must undergo another thorough assessment in its revised form, before it can be accepted for publication.
train
[ "QDAEiyNdD8d", "1iJ7wXYJTVc", "M_VA5evTevv", "GM6iiUHEywK", "ny1xBb0BAeD", "z4vEPJRan6", "GbwwNGGO2z1", "U7FGlcjRMzo", "nlw6igfSlrr", "HSmWoDlDBkd", "6lJO5vHD0kV", "ZK-sjWSebeD", "5fWLZ2zjLvw", "ZX6dxce6tCM", "4bh72Ej9zsK", "h8FVWJ0dqR", "sOX-UEUHHhd", "yhS215HfqWi", "p2r4Udq5NN"...
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", ...
[ " Dear Reviewer HhWA,\n\nWe deeply appreciate your feedback, and are grateful that you consider our work to be interesting. Our goal has never changed from our initial revision, and that is to learn a continuous representation of spatial acoustics from sparse training samples.\n\nThe link we provided in the origina...
[ -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "ny1xBb0BAeD", "U7FGlcjRMzo", "6lJO5vHD0kV", "iclr_2022_lkQ7meEa-qv", "h8FVWJ0dqR", "U7FGlcjRMzo", "6lJO5vHD0kV", "5fWLZ2zjLvw", "GM6iiUHEywK", "lSeUEnzfXdr", "p2r4Udq5NN", "iclr_2022_lkQ7meEa-qv", "4bh72Ej9zsK", "GM6iiUHEywK", "lSeUEnzfXdr", "ZX6dxce6tCM", "w0u4zGzWQn_", "iclr_202...
iclr_2022_NoE4RfaOOa
Where can quantum kernel methods make a big difference?
The classification problem is a core problem of supervised learning, which is widely present in everyday life. As a class of algorithms for pattern analysis, kernel methods have been widely and effectively applied to classification problems. However, when very complex patterns are encountered, existing kernel methods can be ineffective. Recent studies have shown that quantum kernel methods can effectively handle some complex-pattern classification problems that classical kernel methods cannot. However, this does not mean that quantum kernel methods are better than classical kernel methods in all cases. It is still unclear under what circumstances quantum kernel methods can realize their great potential. In this paper, by exploring and summarizing the essential differences between quantum kernel functions and classical kernel functions, we propose a criterion based on inter-class and intra-class distance and geometric properties to determine under what circumstances quantum kernel methods will be superior. We validate our method with toy examples and multiple real datasets from Qiskit and Kaggle. The experiments show that our criterion serves as a valid determination method.
Reject
There was consensus that, though the paper introduces an interesting question, not enough exploration has been made. The reviews point out several mathematical inaccuracies and several other issues, including that the delta criterion needs to be examined.
train
[ "Df3VFePcB17", "HVP8akM-Bbe", "LieBsLDzmAl", "8GWATeZi5C5", "cPDYuLjGAc", "MFZl7iSdse9", "cqjj71jtkpO", "U3o_-embdAQ", "lEf8d02wlKp", "KrZwM_fesKq" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for the answers they provided. Still, the required changes are too important to warrant acceptance. I will stick to my score and encourage the authors to implement the changes they discussed before resubmitting this work to a future venue.", " Thanks for your time and efforts to review our w...
[ -1, -1, -1, -1, -1, 5, -1, 1, 3, 1 ]
[ -1, -1, -1, -1, -1, 4, -1, 5, 2, 4 ]
[ "HVP8akM-Bbe", "U3o_-embdAQ", "lEf8d02wlKp", "KrZwM_fesKq", "MFZl7iSdse9", "iclr_2022_NoE4RfaOOa", "iclr_2022_NoE4RfaOOa", "iclr_2022_NoE4RfaOOa", "iclr_2022_NoE4RfaOOa", "iclr_2022_NoE4RfaOOa" ]
iclr_2022_7xzVpAP5Cm
Non-reversible Parallel Tempering for Uncertainty Approximation in Deep Learning
Parallel tempering (PT), also known as replica exchange, is the go-to workhorse for simulations of multi-modal distributions. The key to the success of PT is to adopt efficient swap schemes. The popular deterministic even-odd (DEO) scheme exploits the non-reversibility property and has successfully reduced the communication cost from $O(P^2)$ to $O(P)$ given sufficient many $P$ chains. However, such an innovation largely disappears given limited chains in big data problems due to the extremely few bias-corrected swaps. To handle this issue, we generalize the DEO scheme to promote the non-reversibility and obtain an optimal communication cost $O(P\log P)$. In addition, we also analyze the bias when we adopt stochastic gradient descent (SGD) with large and constant learning rates as exploration kernels. Such a user-friendly nature enables us to conduct large-scale uncertainty approximation tasks without much tuning costs.
Reject
This paper proposes an algorithm, which the authors call DEO*-SGD: a combination of the generalized DEO scheme, denoted by DEO*, to facilitate exploration (Section 3.1), the adoption of stochastic gradient descent (SGD) in the exploration chains (i.e., those chains except the one with the lowest temperature) (Section 4), and the use of adaptive tuning of learning rates (Section 4.2). The proposal is applied experimentally in Section 5 to demonstrate its superiority over existing approaches. The initial review scores of the four reviewers were one positive and three negative. Most reviewers positively evaluated the proposal, including the proposal of DEO* and its theoretical analysis, as well as its empirical usefulness in deep learning for a computer-vision task. On the other hand, some reviewers showed concern about the soundness of the proposal. Upon reading the reviews and the author responses, as well as the paper itself, I think that this paper lacks a clear statement of its objective. * **What does "uncertainty approximation" mean?:** The paper title would imply that the objective of the proposal in this paper is "uncertainty approximation," but I could not find any concrete description of what it exactly is. * **Sampling versus optimization:** The methods of Langevin dynamics, or more generally Markov-chain Monte-Carlo methods, have been used for two distinct purposes: sampling and optimization. In either case, fast relaxation towards equilibrium would be of practical importance. For sampling purposes it is also important to ensure that the stationary distribution of the Markov chain corresponds to the target distribution (in Langevin dynamics the target distribution would be the canonical ensemble defined by the energy $U(\cdot)$ and the temperature $\tau$). For optimization purposes, however, the assurance that the stationary distribution equals the target distribution would be less of a concern.
It seems that the authors' interest would be in optimization rather than in sampling, but this is not clearly stated. * **Soundness issue:** As Reviewer mbau pointed out, DEO* does not have a guarantee of convergence to the target distribution. I thought that if the objective of this paper were optimization rather than sampling, the approximation already present in DEO* might be thought of as a minor problem, as the proposal already has other approximations introduced in Section 4. The authors claim that this problem does not affect the main body of the paper, but I feel that it would affect the overall organization of the paper, as the current organization seems to presume that approximation only resides in the adoption of the SGD-based exploration kernels with deterministic swap. In any case, this problem has been acknowledged by the authors themselves, as well as by Reviewer ofJx. In particular, the detailed discussion between the authors and Reviewer mbau has been very fruitful in clarifying technical subtleties in this manuscript, including the soundness issue mentioned above. At the same time, it implies that this paper still has room for improvement. An additional point I would like to mention is that this paper is not really self-contained, in the sense that several key notions and quantities are not defined, or are only defined in the Supplementary Materials ($\tilde{U}$ is not explicitly defined at all; the terms "swap time" and "round trip time" are defined in Appendix A.5; $\sigma_p$ in Corollary 1 and Lemma 2 is defined in Appendix A.1). All these weaknesses make me think that another round of revision would be appropriate to properly judge the quality of this paper, whereas there is no such option within the review procedure of ICLR. I therefore cannot recommend acceptance of this paper, at least in its current form.
Minor points (page and line numbers refer to the revised version): - Abstract, line 5: "given sufficient many $P$ chains" would be better phrased as "given $P$ chains", as the big-O notation usually assumes the large-$P$ asymptotic. - In several places, there are periods after "Figure" and "Table", which are not needed. - Page 3, line 32: In Lemma 2 there is apparently no term called "the second quadratic term". It should appear only after having assumed the equi-acceptance/rejection rates in equation (4), so that the sum becomes proportional to $P$. - Theorem 1: "the maximal round trip time" should certainly be "the minimal round trip time"; also, in "$\lceil\cdot\rceil$ is the ceiling function. The round trip time", the period should be a comma: ", the round trip time". - Table 1: I did not really understand what "non-asymptotic" / "asymptotic" mean, as the big-O notation used here should by definition be asymptotic. - Corollary 1: the optimal (number of) chains - Page 4, line 34: The abbreviation SGLD is not defined in this paper. - Page 4, line 36: similar(ly) to - Equation (6): The sign of the last term should be "-".
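To make the sampling-versus-optimization discussion above concrete, here is a minimal, illustrative sketch of parallel tempering with SGLD exploration kernels and a deterministic even-odd (DEO) swap schedule. This is not the authors' DEO*-SGD; the toy energy, temperature ladder, and all numerical choices are assumptions for illustration only.

```python
import numpy as np

def U(x):
    # Toy double-well energy; stands in for the target energy U(.)
    return (x**2 - 1.0)**2

def grad_U(x):
    return 4.0 * x * (x**2 - 1.0)

def deo_sgld(P=4, steps=2000, lr=1e-3, seed=0):
    """Parallel tempering with Langevin (SGLD-style) moves per chain and a
    deterministic even-odd (DEO) scan over adjacent swap pairs."""
    rng = np.random.default_rng(seed)
    taus = np.geomspace(0.05, 2.0, P)   # ladder of temperatures (assumed)
    x = rng.normal(size=P)              # one particle per chain
    for t in range(steps):
        # Langevin move in every chain at its own temperature.
        noise = rng.normal(size=P)
        x = x - lr * grad_U(x) + np.sqrt(2.0 * lr * taus) * noise
        # DEO: even rounds try pairs (0,1),(2,3),...; odd rounds (1,2),(3,4),...
        start = t % 2
        for i in range(start, P - 1, 2):
            log_acc = (1.0/taus[i] - 1.0/taus[i+1]) * (U(x[i]) - U(x[i+1]))
            if np.log(rng.uniform()) < log_acc:
                x[i], x[i+1] = x[i+1], x[i]
    return x
```

Note that the scan over pairs is deterministic (the DEO part) while each swap is still accepted stochastically; the soundness question raised above concerns what happens when this acceptance step is further modified.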
val
[ "9gm7t3isrTY", "hwtxHbGobtB", "c-SUznEEO-p", "7CmWMF7BJJl", "SElNTzcRGx", "gEuuZ54BioE", "utntuwMKgu_", "MFrbeEEKfCd", "3iAqWPu0cP6", "ZaU3O4Nc9GM", "dLdMGfvP6h3", "N5gHZw6cmvp", "o---8KysmN", "ho6FSoxYn9k", "pvKRIjrQEOe", "6fRLdolanKV", "52wAWC8wW1i", "lOVJ6sHrnL", "PAZq9m04rMZ"...
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_re...
[ "The manuscript considers parallel tempering for sampling from multi-modal distributions. The choice of the swap scheme can greatly impact the performance of parallel tempering algorithms. The authors propose a modification of the existing deterministic even-odd (DEO) swap scheme. Theoretical results are established ...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 3, 5 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 3 ]
[ "iclr_2022_7xzVpAP5Cm", "c-SUznEEO-p", "7CmWMF7BJJl", "o---8KysmN", "gCLHog3lKJm", "9XDjp3-UIqJ", "MFrbeEEKfCd", "3iAqWPu0cP6", "ZaU3O4Nc9GM", "dLdMGfvP6h3", "ho6FSoxYn9k", "52wAWC8wW1i", "N5gHZw6cmvp", "6fRLdolanKV", "mVw57pv9m3", "gCLHog3lKJm", "PAZq9m04rMZ", "9gm7t3isrTY", "SE...
iclr_2022_zU2v47WF0Ku
Implicit Bias of Linear Equivariant Networks
Group equivariant convolutional neural networks (G-CNNs) are generalizations of convolutional neural networks (CNNs) which excel in a wide range of scientific and technical applications by explicitly encoding particular group symmetries, such as rotations and permutations, in their architectures. Although the success of G-CNNs is driven by the explicit symmetry bias of their convolutional architecture, a recent line of work has proposed that the implicit bias of training algorithms on a particular parameterization (or architecture) is key to understanding generalization for overparameterized neural nets. In this context, we show that $L$-layer full-width linear G-CNNs trained via gradient descent in a binary classification task converge to solutions with low-rank Fourier matrix coefficients, regularized by the $2/L$-Schatten matrix norm. Our work strictly generalizes previous analysis on the implicit bias of linear CNNs to linear G-CNNs over all finite groups, including the challenging setting of non-commutative symmetry groups (such as permutations). We validate our theorems via experiments on a variety of groups and empirically explore more realistic nonlinear networks, which locally capture similar regularization patterns. Finally, we provide intuitive interpretations of our Fourier-space implicit regularization results in real space via uncertainty principles.
Reject
This paper examines the implicit bias of gradient descent for linear group equivariant convolutional neural networks with a single channel and full-dimensional kernel when trained on separable data with exponential loss. The main result is that the linear predictor converges in direction to a first-order stationary point of the minimum 2/L Schatten matrix norm max-margin problem, under some assumptions. This generalizes previous results on linear convolutional neural networks. I appreciated that this paper states the theorems in terms of general group operations; if done correctly and written well, this can be a good reference for future papers. But I think this paper needs a little more work before getting there, as I explain below. The reviewers were borderline (6,6,6,5) and did not have high confidence. Some stated clarity issues, others criticized the model being used: either that (1) only the case of a single channel and full-dimensional kernel was examined, or that (2) the full model is not actually invariant. Given previous results (Jagadeesan et al.) I am OK with (1). I find (2) problematic, but not enough to be a reason to reject the paper. So I took a closer look. First, I felt that indeed the paper's writing could be improved. Specifically, the notation could be better explained (e.g., the h and g functions in eq. 3 are not defined: what are their range and domain?), and more discussions and examples could be added throughout the paper to clarify the significance of the results. Second, the experimental results in the non-Abelian case (figure 4a) and non-linear case (figure 5) seemed somewhat weak (not so sparse) after I noticed the y-axis does not start from zero, as in Figure 3a. Most importantly, looking at the proofs, I felt they were rather incremental, as I explain next. The authors claim their main result, Theorem 5.4, does not follow from Yun et al.'s paper. But Gunasekar et al. 
2018b already had KKT condition results for max margin in parameter space, and even stronger results are in [1] (which the authors should cite and discuss clearly). These already give a stronger version of Theorem A.6. So the main extra contribution here is to extend it to a guarantee on function space (in the space of linear functions beta). But for L>2, unless I am missing something, I feel this is straightforward, by using results like those in [2], which relate subdifferentials of unitarily invariant matrix functions to the corresponding vector subdifferentials on the singular values (and in the vector case, the subdifferential is trivial). The L=2 case is not trivial, as we need to show the condition in Assumption A.7, which is the most technical part of the Gunasekar et al. / Yun et al. papers, and is often non-trivial. The authors of this paper, however, did not show this and rather left it to future work in the last paragraph of Appendix A. I think that this is a concrete opportunity to make the paper better, perhaps following the same methodology as the Gunasekar et al. / Yun et al. papers. Minor comments: 1) The informal Theorem 1 should state that we converge to a first-order stationary point of eq. 1, not a solution of eq. 1. 2) The main paper is non-searchable, which makes it harder to read. 3) Many hyperref links in the appendix do not work well (they send me to some random page). [1] K. Lyu and J. Li. "Gradient descent maximizes the margin of homogeneous neural networks." 2019. [2] A. S. Lewis. The Convex Analysis of Unitarily Invariant Matrix Functions, 1995.
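Since the argument above leans on [2]'s treatment of unitarily invariant matrix functions, a small illustrative sketch may help: the Schatten-p (quasi-)norm appearing in the main theorem depends only on the singular values, and is therefore invariant under unitary (in particular orthogonal) transformations. The code below is a sketch, not the paper's construction.

```python
import numpy as np

def schatten(A, p):
    """Schatten-p (quasi-)norm: the l_p norm of the singular values of A.
    For an L-layer linear G-CNN the relevant exponent is p = 2/L, which
    is a quasi-norm (p < 1) whenever L > 2."""
    s = np.linalg.svd(A, compute_uv=False)
    return np.sum(s ** p) ** (1.0 / p)

# Unitary invariance: schatten(Q A, p) == schatten(A, p) for orthogonal Q,
# since Q A has the same singular values as A.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
Q, _ = np.linalg.qr(rng.normal(size=(5, 5)))  # a random orthogonal matrix
```

This invariance is exactly why subdifferential results for such functions reduce to vector results on the singular values, as in [2].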
train
[ "tsWTCG-eUEJ", "Rctr90-mHG", "MUgumBTz1l7", "etJIqhCK7Sw", "JjMRg1k2spc", "Xevn5xmUQ7f", "rLtpJiy2qqZ", "HMpkuYCq3KL", "5h36XQn2_Xn", "dRUf5-14Ev", "lyT04B6iQ3C", "7oK1qm-uhpl", "eYXS3MI8C_M", "7gKDTygP2n-" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "Because of the explicit inductive bias of G-CNN, other inductive biases have not been discussed much.\nIn this paper, the author focuses on and analyzes the non-explicit inductive bias of G-CNNs.\nTechnically, they present the non-explicit bias in terms of Fourier matrices by using a group Fourier transform, which...
[ 6, -1, -1, 6, -1, 5, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 2, -1, -1, 3, -1, 4, -1, -1, -1, -1, -1, -1, -1, 2 ]
[ "iclr_2022_zU2v47WF0Ku", "HMpkuYCq3KL", "dRUf5-14Ev", "iclr_2022_zU2v47WF0Ku", "7oK1qm-uhpl", "iclr_2022_zU2v47WF0Ku", "tsWTCG-eUEJ", "rLtpJiy2qqZ", "Xevn5xmUQ7f", "7gKDTygP2n-", "dRUf5-14Ev", "etJIqhCK7Sw", "iclr_2022_zU2v47WF0Ku", "iclr_2022_zU2v47WF0Ku" ]
iclr_2022_GrFix2vWsh4
The hidden label-marginal biases of segmentation losses
Most segmentation losses are arguably variants of the Cross-Entropy (CE) or Dice losses. In the abundant segmentation literature, there is no clear consensus as to which of these losses is a better choice, with varying performances for each across different benchmarks and applications. In this work, we develop a theoretical analysis that links these two types of losses, exposing their advantages and weaknesses. First, we provide a constrained-optimization perspective showing that CE and Dice share a much deeper connection than previously thought: They both decompose into label-marginal penalties and closely related ground-truth matching penalties. Then, we provide bound relationships and an information-theoretic analysis, which uncover hidden label-marginal biases: Dice has an intrinsic bias towards specific extremely imbalanced solutions, whereas CE implicitly encourages the ground-truth region proportions. Our theoretical results explain the wide experimental evidence in the medical-imaging literature, whereby Dice losses bring improvements for imbalanced segmentation. It also explains why CE dominates natural-image problems with diverse class proportions, in which case Dice might have difficulty adapting to different label-marginal distributions. Based on our theoretical analysis, we propose a principled and simple solution, which enables to control explicitly the label-marginal bias. Our loss integrates CE with explicit ${\cal L}_1$ regularization, which encourages label marginals to match target class proportions, thereby mitigating class imbalance but without losing generality. Comprehensive experiments and ablation studies over different losses and applications validate our theoretical analysis, as well as the effectiveness of our explicit label-marginal regularizers.
Reject
The submission evaluates the relationship between (logarithmic) Dice loss and cross-entropy loss, arguing for a similar decomposition into ground truth and "hidden label-marginal biases." The submission received mixed reviews, with two reviewers voting for rejection and two feeling that it is marginally above the acceptance threshold. Setting aside the numerical scores, there are reasons to believe that this submission, while interesting, has shortcomings that limit its relevance to the wider ICLR community. These include: - Many losses have been proposed for imbalanced classification / (medical) image segmentation, such as the Jaccard and Tversky indices or ranking measures, although admittedly Dice is probably the most popular in the medical imaging literature for historical reasons. Arguably, Dice is less well behaved from a theoretical perspective than other options (e.g., it does not even form a metric), and may not be the most relevant point of departure for a representation learning conference. The literature review misses many relevant papers on such losses, including papers specifically focused on the relationship between Dice and cross-entropy, e.g., Eelbode et al., IEEE-TMI 2020 and citations therein. - The empirical results do not show substantially improved results compared to baselines. On balance, this does not cross the threshold for acceptance to a competitive venue such as ICLR.
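For concreteness, a rough sketch of the loss the submission proposes (cross-entropy plus an L1 penalty pushing predicted label marginals toward target class proportions) might look like the following. The exact normalization and the weight `lam` are guesses for illustration, not the submission's choices.

```python
import numpy as np

def ce_with_marginal_l1(probs, onehot, target_props, lam=0.1, eps=1e-12):
    """Cross-entropy plus an L1 label-marginal regularizer (sketch).

    probs:  (N, K) softmax outputs over N pixels and K classes.
    onehot: (N, K) one-hot ground-truth labels.
    target_props: (K,) target class proportions for this image.
    """
    ce = -np.mean(np.sum(onehot * np.log(probs + eps), axis=1))
    marginals = probs.mean(axis=0)  # predicted class proportions
    penalty = np.abs(marginals - target_props).sum()
    return ce + lam * penalty

probs = np.array([[0.9, 0.1], [0.2, 0.8]])
onehot = np.array([[1.0, 0.0], [0.0, 1.0]])
# When target proportions match the predicted marginals, only CE remains;
# mismatched targets add a nonzero L1 penalty.
base = ce_with_marginal_l1(probs, onehot, probs.mean(axis=0), lam=0.5)
skewed = ce_with_marginal_l1(probs, onehot, np.array([1.0, 0.0]), lam=0.5)
```

The point of the decomposition discussed above is that Dice carries an implicit version of such a marginal term, whereas this formulation makes it explicit and controllable.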
train
[ "3ceNYC0jcLg", "a7gFtyuzQoI", "V4u4niCCMWa", "ORsNPB6jMG", "aeOeXN_GcP", "a6n6Z7H4HiP", "cynUJHjbI3g", "2pPuiyvqvXO", "-YD8VZBl4CL", "Ue03DhZ8ybx", "Sb9swHG3f4u", "yT20gmfRNMV", "ZsXc2yNiBe2", "5QkpzX7yat1", "EIl60MUeQ-g", "nRJbmIEqP-H", "VFrvmJixzSN", "AcUDiS2eYOK" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the time spent on our paper. We should mention that we are completely disappointed about the short/non-informative/misleading review of our work, with the three-word review summary; the many incorrect claims about the work (we provide details below); the unfounded criticism about the lac...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 4, 3 ]
[ "EIl60MUeQ-g", "iclr_2022_GrFix2vWsh4", "AcUDiS2eYOK", "2pPuiyvqvXO", "-YD8VZBl4CL", "cynUJHjbI3g", "ZsXc2yNiBe2", "Ue03DhZ8ybx", "Sb9swHG3f4u", "Sb9swHG3f4u", "AcUDiS2eYOK", "VFrvmJixzSN", "yT20gmfRNMV", "nRJbmIEqP-H", "iclr_2022_GrFix2vWsh4", "iclr_2022_GrFix2vWsh4", "iclr_2022_GrF...
iclr_2022_XSwpJ2bonX
Neural Circuit Architectural Priors for Embodied Control
Artificial neural networks coupled with learning-based methods have enabled robots to tackle increasingly complex tasks, but often at the expense of requiring large amounts of learning experience. In nature, animals are born with highly structured connectivity in their brains and nervous systems that enables them to efficiently learn robust motor skills. Capturing some of this structure in artificial models may bring robots closer to matching animal performance and efficiency. In this paper, we present Neural Circuit Architectural Priors (NCAP), a set of reusable architectural components and design principles for deriving network architectures for embodied control from biological neural circuits. We apply this method to control a simulated agent performing a locomotion task and show that the NCAP architecture achieves comparable asymptotic performance with fully connected MLP architectures while dramatically improving data efficiency and requiring far fewer parameters. We further show through an ablation analysis that principled excitation/inhibition and initialization play significant roles in our NCAP architecture. Overall, our work suggests a way of advancing artificial intelligence and robotics research inspired by systems neuroscience.
Reject
Meta Review for Neural Circuit Architectural Priors for Embodied Control The motivation of this work is to address an important challenge: to understand innate contributions to neural circuits for motor control. This paper proposes a set of reusable architectural components and design principles, along with interesting principles for producing biologically inspired neural networks for embodied control. This work aims to sit at the intersection of neuroscience and machine learning, improving the design of artificial neural networks and improving our understanding of observed biological networks. In their model, various components of biological networks are replicated (such as the balance between excitation and inhibition, sparsity, and oscillation). They show that a resulting model, inspired by C. elegans, can learn to swim more efficiently (when evaluated on the Swimmer RL environment) and requires fewer parameters while achieving performance similar to an MLP. Most reviewers, including myself, recognize (and appreciate) the ambition of this work, and are excited by the goal of looking at problems from the perspectives of both systems neuroscience and machine learning. The motivations of this paper are clearly explained, and the paper is well written (the diagrams are also great). I'm very excited about this work, and hope to see it succeed, but in the paper's current state (even with the revisions), I don't think it addresses the reviewers' main concerns. After discussions and examining the paper and the reviews in detail, I feel reviewer GaKc best summarizes the main issues with the work in its current state: - This paper is interesting but does not propose a significant improvement over the literature, as the gap between the promises made in the motivation and the actually delivered work is too wide. 
- From a neuroscience point of view, this work does not provide substantial evidence of the importance of the model at either modeling or simulating biological neural systems. - From a "theoretical" point of view, the model does not provide much advancement to the machine learning community either. So while I would consider the current work (especially in its revised state) to be an outstanding workshop paper, I cannot recommend it for acceptance at ICLR 2022. One piece of advice I would give the authors (as someone who publishes at ML conferences and sys-neuro/bio venues) is that for these ML conferences, it might be easier to make the narrative of the work narrower and more well defined. If the method is supposed to demonstrate significant advantages of biologically inspired network architectures over current RL methods, the paper should clearly present convincing experimental results that can persuade the (non-neuro) RL community to take interest in the method. If the method does not achieve SOTA results, then try to present the method as capable of something really useful that existing RL methods simply fail at (and emphasize that as a core contribution). Conversely, if the narrative is to use a bio-inspired network to emulate biological behaviors, the method must have something important to offer the community of people working on simulating biological neural systems. I look forward to seeing this work improved and eventually published at a journal or presented at a conference in the future, good luck!
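As an aside, the excitation/inhibition constraint highlighted in the paper's ablations can be illustrated with a toy projection that fixes each connection's sign, zeroing sign-violating weights. This is only a sketch of the general idea; NCAP's actual mechanism may differ.

```python
import numpy as np

def enforce_ei_signs(W, signs):
    """Clamp each weight to its prescribed excitatory (+1) or inhibitory
    (-1) sign; entries violating their sign are set to zero.

    W:     weight matrix.
    signs: matrix of +1 (excitatory) / -1 (inhibitory) entries, fixed
           at initialization, Dale's-law style.
    """
    return signs * np.maximum(signs * W, 0.0)

W = np.array([[0.5, -0.3],
              [-0.2, 0.4]])
signs = np.array([[1.0, 1.0],
                  [-1.0, -1.0]])
constrained = enforce_ei_signs(W, signs)
```

Applying such a projection after each gradient step is one simple way to keep a learned network within a biologically motivated sign structure.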
train
[ "qjOZ4LkKmqL", "xeunCEOI1zP", "bVusXf8fvxh", "0UEJwljNia_", "4v3E46vEGvF", "_eys6RyLGl", "CWIdEMnc3JI", "2dwaVU6bGRW", "lQ8wZsR_Sjl", "v5BC8JCh3GB" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for their response and for the efforts made to improve their manuscript. However, after reading the answer to my concerns and from the other reviewers, I am still convinced that this paper doesn't propose enough novelty compared to existing pieces of work. I am also not convinced by the result...
[ -1, -1, -1, -1, -1, -1, 5, 3, 3, 3 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "xeunCEOI1zP", "v5BC8JCh3GB", "lQ8wZsR_Sjl", "2dwaVU6bGRW", "CWIdEMnc3JI", "iclr_2022_XSwpJ2bonX", "iclr_2022_XSwpJ2bonX", "iclr_2022_XSwpJ2bonX", "iclr_2022_XSwpJ2bonX", "iclr_2022_XSwpJ2bonX" ]
iclr_2022_N4KRX61-_1d
A Hierarchical Bayesian Approach to Inverse Reinforcement Learning with Symbolic Reward Machines
A misspecified reward can degrade sample efficiency and induce undesired behaviors in reinforcement learning (RL) problems. We propose symbolic reward machines for incorporating high-level task knowledge when specifying the reward signals. Symbolic reward machines augment existing reward machine formalism by allowing transitions to carry predicates and symbolic reward outputs. This formalism lends itself well to inverse reinforcement learning, whereby the key challenge is determining appropriate assignments to the symbolic values from a few expert demonstrations. We propose a hierarchical Bayesian approach for inferring the most likely assignments such that the concretized reward machine can discriminate expert demonstrated trajectories from other trajectories with high accuracy. Experimental results show that learned reward machines can significantly improve training efficiency for complex RL tasks and generalize well across different task environment configurations.
Reject
The paper proposes a framework of Symbolic Reward Machine (SRM), an extension of Reward Machine (RM), for specifying interpretable and explainable reward functions. Then, a Bayesian IRL method is proposed to concretize an SRM using expert demonstrations. Experimental results demonstrate the effectiveness of SRMs in terms of training efficiency and generalization. The reviewers acknowledged that the problem of inferring symbolic rewards is important and that the proposed SRM framework is an important step in this direction. However, the reviewers pointed out several weaknesses in the paper and shared concerns, including (a) limited comparison with alternate approaches to tackle the problem and positioning w.r.t. the existing literature; (b) the domains seem rather simple since they require only a few demonstrations (implying that the holes being inferred might be quite small); (c) the novelty and theory around the SRM representation are not fully clear. I want to thank the authors for their detailed responses. Based on the reviewers’ concerns and follow-up discussions, there was a consensus that the work is not ready for publication. The reviewers have provided detailed feedback to the authors. We hope that the authors can incorporate this feedback when preparing future revisions of the paper.
train
[ "hwf7ncjcnR", "IkTxUuaZuJ1", "WpAIATOaWn_", "uJ5RsrdOSLo", "GGI3SRwY-Xc", "fdm6a7Gz9UJ", "TyE-w_DN_WW", "-Xle-hQiYLL", "ITvuDFI6kmS", "Rp5YCNev-h3", "AKwFBtiNDC", "QE_KvxhC7x" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Our experimental results show that by using only 10 or even fewer expert-demonstrated trajectories the agent can attain high performance. Note that those trajectories are expert trajectories, not the training data for the agent policy networks. Similar results have been seen in other IRL works. For instance, in t...
[ -1, -1, -1, 6, 5, -1, -1, -1, -1, -1, 6, 5 ]
[ -1, -1, -1, 4, 4, -1, -1, -1, -1, -1, 3, 2 ]
[ "QE_KvxhC7x", "-Xle-hQiYLL", "uJ5RsrdOSLo", "iclr_2022_N4KRX61-_1d", "iclr_2022_N4KRX61-_1d", "iclr_2022_N4KRX61-_1d", "uJ5RsrdOSLo", "GGI3SRwY-Xc", "QE_KvxhC7x", "AKwFBtiNDC", "iclr_2022_N4KRX61-_1d", "iclr_2022_N4KRX61-_1d" ]
iclr_2022_rxF4IN3R2ml
MQTransformer: Multi-Horizon Forecasts with Context Dependent and Feedback-Aware Attention
Recent advances in neural forecasting have produced major improvements in accuracy for probabilistic demand prediction. In this work, we propose novel improvements to the current state of the art by incorporating changes inspired by recent advances in Transformer architectures for Natural Language Processing. We develop a novel decoder-encoder attention for context-alignment, improving forecasting accuracy by allowing the network to study its own history based on the context for which it is producing a forecast. We also present a novel positional encoding that allows the neural network to learn context-dependent seasonality functions as well as arbitrary holiday distances. Finally we show that the current state of the art MQ-Forecaster (Wen et al., 2017) models display excess variability by failing to leverage previous errors in the forecast to improve accuracy. We propose a novel decoder-self attention scheme for forecasting that produces significant improvements in the excess variation of the forecast.
Reject
This paper proposes a number of improvements to the previously published transformer-based MQ-forecaster model for multi-horizon forecasts on time-series data. The authors show strong empirical improvements in terms of accuracy and excess forecast variability on a large proprietary dataset, as well as four public datasets. Concerns were raised about the relatively incremental changes to the MQ-forecaster model this work is based on, the lack of ablations on public data and, relatedly, the inability to reproduce results on the proprietary data.
val
[ "8iY5Oqop9nc", "H-80rpoUxmc", "scF2IlJYVb8", "zowJj5iyhM", "A2N1G_R4vZP", "KwxZjSbX-NE", "eV0e069J1q1", "F9RCZoi_f0" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I'd like to thank the authors for their response. Having read the response and other discussions on this paper I'd like to keep my rating unchanged, I think the paper still remains below the threshold of acceptability.", " Thanks for the thoughtful review and response! Reviewer 3wZQ, any further thoughts after...
[ -1, -1, -1, -1, -1, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "H-80rpoUxmc", "A2N1G_R4vZP", "F9RCZoi_f0", "eV0e069J1q1", "KwxZjSbX-NE", "iclr_2022_rxF4IN3R2ml", "iclr_2022_rxF4IN3R2ml", "iclr_2022_rxF4IN3R2ml" ]
iclr_2022_M6jm8fRG5eq
Decentralized Cooperative Multi-Agent Reinforcement Learning with Exploration
Many real-world applications of multi-agent reinforcement learning (RL), such as multi-robot navigation and decentralized control of cyber-physical systems, involve the cooperation of agents as a team with aligned objectives. We study multi-agent RL in the most basic cooperative setting --- Markov teams --- a class of Markov games where the cooperating agents share a common reward. We propose an algorithm in which each agent independently runs stage-based V-learning (a Q-learning style algorithm) to efficiently explore the unknown environment, while using a stochastic gradient descent (SGD) subroutine for policy updates. We show that the agents can learn an $\epsilon$-approximate Nash equilibrium policy in at most $\propto\widetilde{O}(1/\epsilon^4)$ episodes. Our results advocate the use of a novel \emph{stage-based} V-learning approach to create a stage-wise stationary environment. We also show that under certain smoothness assumptions of the team, our algorithm can achieve a nearly \emph{team-optimal} Nash equilibrium. Simulation results corroborate our theoretical findings. One key feature of our algorithm is being \emph{decentralized}, in the sense that each agent has access to only the state and its local actions, and is even \emph{oblivious} to the presence of the other agents. Neither communication among teammates nor coordination by a central controller is required during learning. Hence, our algorithm can readily generalize to an arbitrary number of agents, without suffering from the exponential dependence on the number of agents.
Reject
This paper presents a decentralized cooperative multi-agent approach based on Markov game theory. After reviewing the paper and reading the comments from the reviewers, here are my comments: - The paper is well written; it is quite difficult to follow in places, but very informative. - The contribution is clearly stated and the results support it. - The theoretical results are interesting for the RL community. - The main concern is about learning the epsilon-approximate Nash equilibrium policy, which is a fundamental part of the paper.
train
[ "VJoljsT2lCt", "z4ien9pDXb", "q5RWQ4vz2lr", "y-tc89oQVr-", "2onQm5r9Py", "dnjxx0KouN2", "4we2HDBm5my", "D2Darpxh_HY", "yPrW6YKt6IU", "j2yez0uOFwG", "VDQe0WcuWmh", "x6ZYReoPYlV", "sAv-vVVA2w", "EBGbzYFEgBq", "0UZp-zXP6Y" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I understand the \"derandomized\" policy, and understand that this is a product policy. But why this policy is a Nash? Could the authors write down a theorem *giving a high probability statement on the value function of the \"derandomized\" policy and show that this statement guarantees that the \"derandomized\" ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 2, 4 ]
[ "z4ien9pDXb", "q5RWQ4vz2lr", "2onQm5r9Py", "dnjxx0KouN2", "4we2HDBm5my", "yPrW6YKt6IU", "D2Darpxh_HY", "0UZp-zXP6Y", "EBGbzYFEgBq", "sAv-vVVA2w", "x6ZYReoPYlV", "iclr_2022_M6jm8fRG5eq", "iclr_2022_M6jm8fRG5eq", "iclr_2022_M6jm8fRG5eq", "iclr_2022_M6jm8fRG5eq" ]
iclr_2022_xiXOrugVHs
Long Document Summarization with Top-Down and Bottom-Up Representation Inference
Text summarization aims to condense long documents and retain key information. Critical to the success of a summarization model is the faithful inference of latent representations of words or tokens in the source documents. Most recent models infer the latent representations with a transformer encoder, which is purely bottom-up. Also, self-attention-based inference models face the challenge of quadratic complexity with respect to sequence length. We propose a principled inference framework to improve summarization models on these two aspects. Our framework assumes a hierarchical latent structure of a document where the top-level captures the long range dependency at a coarser time scale and the bottom token level preserves the details. Critically, this hierarchical structure enables token representations to be updated in both a bottom-up and top-down manner. In the bottom-up pass, token representations are inferred with local self-attention to leverage its efficiency. Top-down correction is then applied to allow tokens to capture long-range dependency. We demonstrate the effectiveness of the proposed framework on a diverse set of summarization datasets, including narrative, conversational, scientific documents and news. Our model achieves (1) competitive or better performance on short documents with higher memory and compute efficiency, compared to full attention transformers, and (2) state-of-the-art performance on a wide range of long document summarization benchmarks, compared to recent efficient transformers. We also show that our model can summarize an entire book and achieve competitive performance using $0.27\%$ parameters (464M vs. 175B) and much less training data, compared to a recent GPT-3-based model. These results indicate the general applicability and benefits of the proposed framework.
Reject
This paper deals with the task of long text summarization. Inspired by earlier work on top-down and bottom-up architectures, this work focuses on improving the traditional bottom-up transformer encoder structure and its fine-resolution representations. Pros: - Their model can handle longer documents at both coarse and fine granularity levels. - The performance on benchmark datasets looks quite good compared to strong baselines. - Computationally efficient. Cons: The reviewers have raised several concerns, including: - the experimental verification of the model's computational efficiency and memory usage is not sufficient; - the novelty of this design is somewhat limited, since the bottom-up and top-down idea is not new; - several details about the figures and especially the experiments were missing. The authors have addressed several of the suggestions and added new experimental results addressing the issues raised by the reviewers. During the rebuttal period, the authors further conducted empirical investigations showing that the top-down update for token representations, especially with good top-level representations, leads to good summarization because the top-down pass enriches the token-level representations. Despite the positive results, some reviewers found it surprising that, using only BART as a backbone, the top-down/bottom-up models achieve such a large performance boost on long document summarization compared to state-of-the-art transformer models (BigBird, Longformer, and T5) that have been shown to encode longer sequences and beat several summarization models.
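The bottom-up/top-down inference scheme described in the abstract can be sketched in a few lines. This toy version uses plain dot-product attention with no learned projections or multi-head structure, so it only illustrates the information flow: local attention within windows, pooling to coarse top-level units, then a top-down correction in which every token attends over all segments.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def top_down_bottom_up(tokens, window=4):
    """Toy two-pass inference over token representations.

    tokens: (n, d) array of token vectors.
    """
    n, d = tokens.shape
    # (1) Bottom-up: self-attention restricted to non-overlapping windows,
    #     so cost is linear in n for fixed window size.
    out = np.empty_like(tokens)
    for s in range(0, n, window):
        blk = tokens[s:s + window]
        att = softmax(blk @ blk.T / np.sqrt(d))
        out[s:s + window] = att @ blk
    # (2) Coarse top-level units via window pooling.
    segs = np.stack([out[s:s + window].mean(0) for s in range(0, n, window)])
    # (3) Top-down correction: every token cross-attends to all segments,
    #     recovering long-range context at the coarse scale.
    att = softmax(out @ segs.T / np.sqrt(d))
    return out + att @ segs
```

In the real model each attention step would of course use learned query/key/value projections; the sketch only shows why tokens end up with both local detail and document-level context.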
train
[ "RCiCny8Q_5h", "iYwZDKhB0qX", "tGDYCzs4Z22", "OyXJqbsTxOl", "pTld9zx3GoB", "px2fwK0r6Ko", "rqOTtfYMqZA", "5s43467tDhe" ]
[ "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your interest!\n\nWe updated the paper to include all the details you mentioned. They're included in the Supplementary B. ", " Dear Reviewer:\n\nWe appreciate your positive feedback on the empirical performance and your valuable reviews. Your comments are addressed as follows.\n\n1. *Regarding abl...
[ -1, -1, -1, -1, -1, 6, 5, 5 ]
[ -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "pTld9zx3GoB", "px2fwK0r6Ko", "5s43467tDhe", "rqOTtfYMqZA", "iclr_2022_xiXOrugVHs", "iclr_2022_xiXOrugVHs", "iclr_2022_xiXOrugVHs", "iclr_2022_xiXOrugVHs" ]
iclr_2022_Xx4MNjSmQQ9
Robust Generalization of Quadratic Neural Networks via Function Identification
A key challenge facing deep learning is that neural networks are often not robust to shifts in the underlying data distribution. We study this problem from the perspective of the statistical concept of parameter identification. Generalization bounds from learning theory often assume that the test distribution is close to the training distribution. In contrast, if we can identify the ``true'' parameters, then the model generalizes to arbitrary distribution shifts. However, neural networks are typically overparameterized, making parameter identification impossible. We show that for quadratic neural networks, we can identify the function represented by the model even though we cannot identify its parameters. Thus, we can obtain robust generalization bounds even in the overparameterized setting. We leverage this result to obtain new bounds for contextual bandits and transfer learning with quadratic neural networks. Overall, our results suggest that we can improve robustness of neural networks by designing models that can represent the true data generating process. In practice, the true data generating process is often very complex; thus, we study how our framework might connect to neural module networks, which are designed to break down complex tasks into compositions of simpler ones. We prove robust generalization bounds when individual neural modules are identifiable.
Reject
This paper tackles an interesting problem: distribution shift generalization often requires parameter identification, but this is not possible for over-parameterized neural networks. This paper shows that for quadratic neural networks, it is possible to identify the function without identifying the parameters. This is an interesting result. However, reviewers raised concerns about the assumptions and technical details, and the meta-reviewer agrees with these concerns.
train
[ "7_A6pm2h_u0", "E26OxJPQuEG", "W3TUk08oI9B", "yViVPIFejBz", "oHHIFsE41v", "cHHtmad-tP" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your helpful comments and feedback! \n\n**Technical details.**\nThank you for pointing out these issues; we have fixed them in our paper. We note that in Section 6, we assume the components $f_j$ are quadratic neural networks. However, this assumption is only needed to prove Lemma 6.1; in general, o...
[ -1, -1, -1, 6, 5, 5 ]
[ -1, -1, -1, 4, 4, 3 ]
[ "cHHtmad-tP", "oHHIFsE41v", "yViVPIFejBz", "iclr_2022_Xx4MNjSmQQ9", "iclr_2022_Xx4MNjSmQQ9", "iclr_2022_Xx4MNjSmQQ9" ]
iclr_2022_kOtkgUGAVTX
CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery
We introduce Contrastive Intrinsic Control (CIC) - an algorithm for unsupervised skill discovery that maximizes the mutual information between skills and state transitions. In contrast to most prior approaches, CIC uses a decomposition of the mutual information that explicitly incentivizes diverse behaviors by maximizing state entropy. We derive a novel lower bound estimate for the mutual information which combines a particle estimator for state entropy to generate diverse behaviors and contrastive learning to distill these behaviors into distinct skills. We evaluate our algorithm on the Unsupervised Reinforcement Learning Benchmark, which consists of a long reward-free pre-training phase followed by a short adaptation phase to downstream tasks with extrinsic rewards. We find that CIC improves on prior unsupervised skill discovery methods by $91\%$ and the next-leading overall exploration algorithm by $26\%$ in terms of downstream task performance.
Reject
The paper addresses the question of skill discovery in reinforcement learning: can we (without supervision) discover behaviors so that later (when supervision is available via a reward signal) we can learn faster? The paper proposes a new contrastive loss that an agent can optimize for this purpose, based on a decomposition of mutual information between skills and transitions. The reviewers praised the extensive experimental evaluation and good empirical results, as well as the analysis of failure modes of related algorithms. Unfortunately, there appeared to be errors in the derivation and implementation. (These include typos in derivations that made them difficult to follow, as well as uploaded code that didn't match the experimental results.) While the authors claim to have fixed all of them, the reviewers were not all completely convinced by the end of the discussion period. In any case, these errors caused confusion during review; so, whether the errors are fixed or not, it seems clear that there hasn't been time for a full evaluation of the corrected derivations and code. For this reason, it seems wise to ask that this paper be reviewed again from scratch before being published.
train
[ "fQhXfRfi8rc", "6y0c4slnknA", "h8fMD0_fRij", "hNjHttz7-h", "cd2QxHHI6Ck", "M1kba7lP5Iw", "3HVzMTzc60W", "DxWHLh6TM2L", "cVcVd8VSbdv", "MJ9wF74Q5dL", "zSQYgpdJ1w6", "yx6gsPzgL4F", "a9gqWEPJtv", "m33u-aY-R0V", "8qwu1NKYxRf", "Grspvahjoz9", "eHLnxmOOVKG", "d96-ifjTkIw" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I'd like to thank the authors for their detailed response to my questions. Their responses helped clarify most of my concerns. ", " Thank you for your response! We have updated the paper with clarifications (for Q1 see Appendix M; for Q10 see end of page 8). We clarify your follow-up questions below:\n\n**Q1:*...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 8, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 4 ]
[ "zSQYgpdJ1w6", "M1kba7lP5Iw", "hNjHttz7-h", "cd2QxHHI6Ck", "a9gqWEPJtv", "3HVzMTzc60W", "DxWHLh6TM2L", "cVcVd8VSbdv", "d96-ifjTkIw", "eHLnxmOOVKG", "MJ9wF74Q5dL", "Grspvahjoz9", "m33u-aY-R0V", "8qwu1NKYxRf", "iclr_2022_kOtkgUGAVTX", "iclr_2022_kOtkgUGAVTX", "iclr_2022_kOtkgUGAVTX", ...
iclr_2022_ExJ4lMbZcqa
Learning Audio-Visual Dereverberation
Reverberation from audio reflecting off surfaces and objects in the environment not only degrades the quality of speech for human perception, but also severely impacts the accuracy of automatic speech recognition. Prior work attempts to remove reverberation based on the audio modality only. Our idea is to learn to dereverberate speech from audio-visual observations. The visual environment surrounding a human speaker reveals important cues about the room geometry, materials, and speaker location, all of which influence the precise reverberation effects in the audio stream. We introduce Visually-Informed Dereverberation of Audio (VIDA), an end-to-end approach that learns to remove reverberation based on both the observed sounds and visual scene. In support of this new task, we develop a large-scale dataset that uses realistic acoustic renderings of speech in real-world 3D scans of homes offering a variety of room acoustics. Demonstrating our approach on both simulated and real imagery for speech enhancement, speech recognition, and speaker identification, we show it achieves state-of-the-art performance and substantially improves over traditional audio-only methods.
Reject
This paper investigates the dereverberation problem from an audio-visual perspective. The geometry of the environment is represented by RGB and depth images. The authors propose a so-called visually-informed dereverberation of audio (VIDA) model and also create a dataset consisting of both synthetic and real data to verify the effectiveness of the model. Experiments are conducted on speech enhancement, speech recognition and speaker identification tasks. The authors compare VIDA with audio-only dereverberation as well as various established baseline systems in the community. The audio-visual approach to dereverberation, using a visual representation of the acoustic environment, seems interesting. The authors' rebuttal cleared most of the concerns raised by the reviewers, but several lingering concerns affect its acceptance. First, most of the reviewers consider the novelty not overwhelmingly significant. Second, the contribution of the visual input seems only marginal compared to audio-only dereverberation. Results on real data are also mixed. Some of the reported p-values are extremely small, which raises the question of whether this is due to the size of the test set. Third, there are noticeable artifacts in some of the samples in the demo. Fourth, there are several issues in the paper that warrant further in-depth investigation. For instance, it would be helpful to show in exactly which way the RGB and depth images help.
train
[ "g6Fm3L8ircw", "1lrrCn4pSl3", "qvtBdKm0rg7", "mj2qPNC6LX", "eVVyNQ3illS", "9cWDMgfYI8B", "FHKu0q5fIz", "JXtZHN-IHJF", "9g1AtrTbWfg", "fZx5zXxIvl", "mMWXE_dlaSM", "xq0qZGU0fWv" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper presents a deep neural net-based dereverberation algorithm that uses both audio and video modalities. Based on the observation that a visual scene captured by a camera conveys information that is related to room characteristics, the authors propose a visually informed audio dereverberation method that a...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 8 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "iclr_2022_ExJ4lMbZcqa", "qvtBdKm0rg7", "FHKu0q5fIz", "xq0qZGU0fWv", "fZx5zXxIvl", "mMWXE_dlaSM", "g6Fm3L8ircw", "iclr_2022_ExJ4lMbZcqa", "eVVyNQ3illS", "iclr_2022_ExJ4lMbZcqa", "iclr_2022_ExJ4lMbZcqa", "iclr_2022_ExJ4lMbZcqa" ]
iclr_2022_NZQ8aTScT1-
Eigenspace Restructuring: a Principle of Space and Frequency in Neural Networks
Understanding the fundamental principles behind the massive success of neural networks is one of the most important open questions in deep learning. However, due to the highly complex nature of the problem, progress has been relatively slow. In this note, through the lens of infinite-width networks, a.k.a. neural kernels, we present one such principle resulting from hierarchical locality. It is well-known that the eigenstructure of infinite-width multilayer perceptrons (MLPs) depends solely on the concept frequency, which measures the order of interactions. We show that the topologies from convolutional networks (CNNs) restructure the associated eigenspaces into finer subspaces. In addition to frequency, the new structure also depends on the concept space: the distance among interaction terms, defined via the length of a minimum spanning tree containing them. The resulting fine-grained eigenstructure dramatically improves the network's learnability, empowering them to simultaneously model a much richer class of interactions, including long-range-low-frequency interactions, short-range-high-frequency interactions, and various interpolations and extrapolations in-between. Finally, we show that increasing the depth of a CNN can improve the inter/extrapolation resolution and, therefore, the network's learnability.
Reject
The paper shows that deep convolutional neural networks in the kernel regime restructure the eigenspaces of the inducing kernels, which leads to some insights regarding the range of space-frequency combinations learned by such networks. The reviewers identified a number of problems with the current submission. For instance, they found that the paper is hard to follow, it lacks clarity, and the theorem statements are hard to understand. The authors also use a somewhat non-standard experimental setup. Despite an extensive discussion with the authors, which cleared up a few minor problems, the bulk of the reviewers' concerns was not successfully addressed. I am therefore not able to recommend acceptance. The authors need to improve the clarity of the paper and provide more discussion of the theorems in a resubmission, as well as potentially reconsider their experimental setup.
train
[ "ByYX-qk3BPa", "XQxfSID1Ef", "7qW7jDp_Z1M", "S-3iwOzGDi", "9IIISSbFlcO", "j4gZwRquvlv", "OrklXKVsGj0", "3FLj4yXCOEJ", "XpLcGQwBdQq", "J7f20_cd2FM", "LNFkjWKyx8L", "NdkhvtwaXsc", "1Jp1ERGU2U", "cFaLpHuKXO", "n7A08sd4dKO", "u37nI8RjL3" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors show that deep convolutional networks restructure the eigenspaces of the inducing kernels,\nwhich empowers them to learn a dramatically broader class of functions, covering a wide range\nof space-frequency combinations. Strengths:\n\nThis paper evaluates the success of deep neural networks by running e...
[ 3, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5 ]
[ 3, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2 ]
[ "iclr_2022_NZQ8aTScT1-", "9IIISSbFlcO", "j4gZwRquvlv", "iclr_2022_NZQ8aTScT1-", "cFaLpHuKXO", "NdkhvtwaXsc", "iclr_2022_NZQ8aTScT1-", "XpLcGQwBdQq", "ByYX-qk3BPa", "u37nI8RjL3", "n7A08sd4dKO", "LNFkjWKyx8L", "S-3iwOzGDi", "J7f20_cd2FM", "iclr_2022_NZQ8aTScT1-", "iclr_2022_NZQ8aTScT1-" ...
iclr_2022_czmQDWhGwd9
Representations of Computer Programs in the Human Brain
We present the first study relating representations of computer programs generated by unsupervised machine learning (ML) models and representations of computer programs in the human brain. We analyze recordings---brain representations---from functional magnetic resonance imaging (fMRI) studies of people comprehending Python code. We discover brain representations, in different and specific regions of the brain, that encode static and dynamic properties of code such as abstract syntax tree (AST)-related information and runtime information. We also map brain representations to representations of a suite of ML models that vary in their complexity. We find that the Multiple Demand system, a system of brain regions previously shown to respond to code, contains information about multiple specific code properties, as well as machine learned representations of code. We make all the corresponding code, data, and analysis publicly available.
Reject
This paper aims to relate brain activity (of people reading computer code) to properties of the computer code. They relate the found representations to those obtained from ML computational language models applied to the same programs. The paper is clearly written and an interesting idea. There was extensive discussion and the authors updated their paper considerably. Program length as a potential confound was raised and successfully rebutted. The extent of novelty over Ivanova et al. 2020 was also discussed and successfully rebutted. In the end, the main issues the reviewers had were 1) that the paper had been updated substantially since submission (and would therefore benefit from a thorough re-review) and 2) whether the results provide enough new insights about the brain or about ML language models. To summarize, the authors spent considerable time addressing issues in the rebuttal phase and the paper improved substantially with the reviewers' suggestions, but reviewers agreed it would benefit from more work and further review before acceptance. I agree with this assessment.
train
[ "m2Iiq27VNRk", "7Kg4nGzf7Kf", "vkArmCEm_gL", "dyD1ATKCpQ", "PGRBWBjK6Hz", "zqDFtZuSY60", "jVfO656OSSv", "5QADrSlcJGv", "N5QEuv23uN4", "Ixrh6oFZfkZ", "S4YLVU8qc_r", "225e_d13z5b", "5Crq1eBDdMp", "HQSb_NI8SKo", "l68wtbc8tlo", "RaMdTjy9pXc", "G9uTah92vCA", "sOl57QfBMrq", "oCtEa_eLCU...
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", ...
[ " I appreciate the authors for their well-written responses regarding my questions and concerns. They provide convincing arguments about the differences between this paper and previous work. Moreover, they discussed further merit points to interpret some of the results in both Experiments 1 and 2, \nAcknowledging ...
[ -1, 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5 ]
[ -1, 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "zGiCdcjw9Wq", "iclr_2022_czmQDWhGwd9", "iclr_2022_czmQDWhGwd9", "vkArmCEm_gL", "5r0QFF7mCzh", "wZIcMCqBf-M", "5r0QFF7mCzh", "wLoIPzCNaf9", "sOl57QfBMrq", "5Crq1eBDdMp", "225e_d13z5b", "sOl57QfBMrq", "oCtEa_eLCUp", "7Kg4nGzf7Kf", "7Kg4nGzf7Kf", "zGiCdcjw9Wq", "zGiCdcjw9Wq", "7Kg4nG...
iclr_2022_nuWpS9FNSKn
One Objective for All Models --- Self-supervised Learning for Topic Models
Self-supervised learning has significantly improved the performance of many NLP tasks. In this paper, we highlight a key advantage of self-supervised learning - when applied to data generated by topic models, self-supervised learning can be oblivious to the specific model, and hence is less susceptible to model mis-specification. In particular, we prove that commonly used self-supervised objectives based on reconstruction or contrastive samples can both recover useful posterior information for general topic models. Empirically, we show that the same objectives can perform competitively against posterior inference using the correct model, while outperforming posterior inference using mis-specified model.
Reject
This paper generated a large amount of discussion. Three reviewers were marginally above the acceptance threshold and one marginally below. The paper presents an intriguing relationship between self-supervised learning and topic model inference that extends earlier work of Tosh. The result seems to be subtle, as there was considerable discussion with the authors, wrapping up with a reminder of what the main goal is: SSL can achieve state-of-the-art performance for the topic inference problem, and moreover (the main goal) SSL can be oblivious to the specific topic model. This is indeed intriguing. But with all the discussion, and one persistent negative reviewer, I feel the paper needs to be polished. Given that the theorem gives a testable statement, I don't see why experiments cannot be run on four different real datasets, to give us more confidence.
train
[ "30yZUzyRXVf", "98mduoFf_h", "VsFYLvAL9PG", "6AhyzvrkN8H", "Vv3PQ3a5FJG", "cG2uQtcRU_l", "PPsJ2S0llCn", "xr3fBFj_b0V", "hJM6s2kA9aJ", "wl9TiDC78X", "GSsuBCZeQlr", "jDQLRLTrDO", "BXLavNq9Idx", "cLzoXotNeCk", "PcB2lR04o1", "vmwwh4rBGbu", "59VQVclKoS5", "INgiwS4_Wg1", "LyOw953938T",...
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_rev...
[ " Just wanted to say thanks to the authors, I learned more than usual from this discussion period and I am glad for your participation.\n\nI wish you the best for this project... I know getting constructive criticism is frustrating sometimes but I do think your paper will be stronger and more widely appreciated if...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "PPsJ2S0llCn", "VsFYLvAL9PG", "6AhyzvrkN8H", "Vv3PQ3a5FJG", "cG2uQtcRU_l", "xr3fBFj_b0V", "hJM6s2kA9aJ", "wl9TiDC78X", "cLzoXotNeCk", "CQ3PaOl6MSE", "iclr_2022_nuWpS9FNSKn", "INgiwS4_Wg1", "vmwwh4rBGbu", "PcB2lR04o1", "Wa9BYjk2Dkr", "fUvCgqv2ZUE", "iclr_2022_nuWpS9FNSKn", "LyOw9539...
iclr_2022_qCBmozgVr9r
Few-Shot Attribute Learning
Semantic concepts are often defined by a combination of attributes. The use of attributes also facilitates learning of new concepts with zero or few examples. However, the zero-shot learning paradigm assumes that the set of attributes is known and fixed, which is a limitation if a test-time task depends on a previously irrelevant attribute. In this work we study rapid learning of attributes that are previously not labeled in the dataset. Compared to standard few-shot learning of semantic classes, learning new attributes imposes a stiffer challenge. We found that directly supervising the model with a set of training attributes does not generalize well to the test attributes, whereas self-supervised pre-training brings significant improvement. We further experimented with random splits of the attribute space and found that the predictability of attributes provides an informative estimate of a model's ability to generalize.
Reject
This paper has been reviewed with four expert reviewers. The reviewers have reached the consensus that the paper is not yet ready for publication. The main concerns are related to novelty. All reviewers gave substantial and constructive feedback. Following the recommendation of the reviewers, the meta reviewer recommends rejection.
train
[ "ejLNQtz1Ifq", "SL6dioGltEb", "2dNSK5rTN-", "iqK1Ys0awHC", "ujm8XGD8zNz", "sRR9mAJYxOT", "7tkGNMdUR94", "ic5bAWB0l0p", "InSpioM0GQ", "z331ogKD92", "PGCumOkVmBZ", "g_Weiv27KGB", "39SViZLFLYr", "i-8tUYvbInJ", "VtL3O7ENJdV", "stziXV7q98", "ZqVyEHROqjy", "I4kwduYnaQy", "uiXXda0Cyny",...
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for their response and acknowledge their efforts in responding. After the author's response, most of my initial concerns remain:\n- The machine learning problem considered in the paper is novel by itself but is addressed by a direct combination of pre-existing methods. I am a bit concerned abo...
[ -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5 ]
[ -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "39SViZLFLYr", "iclr_2022_qCBmozgVr9r", "hzqw-ZzBNwN", "7tkGNMdUR94", "iclr_2022_qCBmozgVr9r", "iclr_2022_qCBmozgVr9r", "ic5bAWB0l0p", "InSpioM0GQ", "z331ogKD92", "SL6dioGltEb", "g_Weiv27KGB", "hzqw-ZzBNwN", "uiXXda0Cyny", "VtL3O7ENJdV", "I4kwduYnaQy", "ZqVyEHROqjy", "iclr_2022_qCBmo...
iclr_2022_0q0REJNgtg
Retrieval-Augmented Reinforcement Learning
Most deep reinforcement learning (RL) algorithms distill experience into parametric behavior policies or value functions via gradient updates. While effective, this approach has several disadvantages: (1) it is computationally expensive, (2) it can take many updates to integrate experiences into the parametric model, (3) experiences that are not fully integrated do not appropriately influence the agent's behavior, and (4) behavior is limited by the capacity of the model. In this paper we explore an alternative paradigm in which we train a network to map a dataset of past experiences to optimal behavior. Specifically, we augment an RL agent with a retrieval process (parameterized as a neural network) that has direct access to a dataset of experiences. This dataset can come from the agent's past experiences, expert demonstrations, or any other relevant source. The retrieval process is trained to retrieve information from the dataset that may be useful in the current context, to help the agent achieve its goal faster and more efficiently. We integrate our method into two different RL agents: an offline DQN agent and an online R2D2 agent. In offline multi-task problems, we show that the retrieval-augmented DQN agent avoids task interference and learns faster than the baseline DQN agent. On Atari, we show that retrieval-augmented R2D2 learns significantly faster than the baseline R2D2 agent and achieves higher scores. We run extensive ablations to measure the contributions of the components of our proposed method.
Reject
One of the four reviewers failed to engage in discussion, two acknowledged the authors' response and paper revision without changing their scores, and one reviewer engaged in considerable discussion, resulting in a score increase to a weak accept. No reviewer gave the paper a strong endorsement. I do appreciate the large effort that the authors put into revising their paper and addressing reviewers' concerns. However, major post-submission revision puts an inappropriate burden on reviewers. In any case, there is not strong support for this paper, even from the one heavily engaged reviewer.
train
[ "1_hBOg0x4jW", "BscNFQ8ByW5", "EtxowO_nRaw", "qX-iJSgiKc", "V_ar_lo3rZE", "qmGYw2Dg4JJ", "s8EAGyBdnbR", "9iJqQemRXVY", "Gimg0meTwWt", "NvDWt90qe6", "NguMQmc6wc", "mb9UBm0K5s4", "6VX2LMUikk0", "z-aKHpoKkd5", "iQoy28oacMf", "0YMtVT_rKtV", "eX15-G9hAul", "lBymXfEktpX", "5IYdeXC9XqB"...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_...
[ " \"also did not see training curves for BabyAI\"\n\nAs mentioned here (https://openreview.net/forum?id=0q0REJNgtg&noteId=NguMQmc6wc), BabyAI takes more than a week (to be precise ~ 10 days) to train and we don't have the saved logs as of now, and hence we were not able to update the paper.\n\nThanks for your help ...
[ -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "BscNFQ8ByW5", "V_ar_lo3rZE", "qX-iJSgiKc", "57RHu7istey", "mb9UBm0K5s4", "9iJqQemRXVY", "iclr_2022_0q0REJNgtg", "s8EAGyBdnbR", "5oUv19O6xew", "s8EAGyBdnbR", "for_JfNFwX0", "0YMtVT_rKtV", "5oUv19O6xew", "5IYdeXC9XqB", "s8EAGyBdnbR", "lBymXfEktpX", "iclr_2022_0q0REJNgtg", "NguMQmc6w...
iclr_2022_Fh_NyEuejsZ
ZenDet: Revisiting Efficient Object Detection Backbones from Zero-Shot Neural Architecture Search
In object detection models, the detection backbone consumes more than half of the overall inference cost. Recent research attempts to reduce this cost by optimizing the backbone architecture with the help of Neural Architecture Search (NAS). However, existing NAS methods for object detection require hundreds to thousands of GPU hours of searching, making them impractical in fast-paced research and development. In this work, we propose a novel zero-shot NAS method to address this issue. The proposed method, named ZenDet, automatically designs efficient detection backbones without training network parameters, reducing the architecture design cost to nearly zero yet delivering state-of-the-art (SOTA) performance. Under the hood, ZenDet maximizes the differential entropy of detection backbones, leading to a better feature extractor for object detection under the same computational budgets. After merely one GPU day of fully automatic design, ZenDet produces SOTA detection backbones on multiple detection benchmark datasets with little human intervention. Compared to the ResNet-50 backbone, ZenDet is $+2.0\%$ better in mAP when using the same amount of FLOPs/parameters and is $1.54$ times faster on NVIDIA V100 at the same mAP. Code and pre-trained models will be released after publication.
Reject
This paper received scores of 6,6,6 after the authors succeeded in convincing two reviewers to raise their scores from 5 to 6. However, even after this, none of the reviewers actively argued for the paper. The only positive point raised in the private discussion was that the results are strong. (However, there is still the question of how much of this was due to the different design space used.) Negative points raised in the private discussion included:
- despite the authors' clarification of the differences from Zen-NAS, the difference is perceived not to be large;
- there is no theoretical foundation behind the selection of a critical parameter, and this directly limits the applicability of ZenDet in searching for FPN connections;
- as a paper focused on detection NAS, a limitation to only searching for the backbone may not be novel enough for publication at ICLR.

Overall, I agree with this criticism and weakly recommend rejection.
train
[ "JkYLSoT7OTK", "Gfq_WKjSxWN", "TPIoP6phJal", "Lm-9UyVxeBd", "lmof7UrbfFB", "ILDvBGY9DPC", "lxC0gJnHVza", "PC1wKnF4ri", "VXRXiViu_e5", "M6DwB6yaPs", "xRvOfvLHj2v", "IWE5oqCevL", "cy11586w2jR", "iKEVBTCYeoZ", "PDnkUU7lqQV" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ " Thanks again for your latest response. Since the discussion period is coming to the end, there is not enough time to do this experiment. We will follow your suggestion to create more results with different heads in the future.", "In this paper, the authors propose a zero-shot NAS to search backbone for detectio...
[ -1, 6, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ -1, 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "TPIoP6phJal", "iclr_2022_Fh_NyEuejsZ", "lmof7UrbfFB", "iclr_2022_Fh_NyEuejsZ", "ILDvBGY9DPC", "cy11586w2jR", "PC1wKnF4ri", "xRvOfvLHj2v", "M6DwB6yaPs", "Lm-9UyVxeBd", "Lm-9UyVxeBd", "PDnkUU7lqQV", "Gfq_WKjSxWN", "iclr_2022_Fh_NyEuejsZ", "iclr_2022_Fh_NyEuejsZ" ]
iclr_2022_kHNKTO2sYH
Repairing Systematic Outliers by Learning Clean Subspaces in VAEs
Data cleaning often comprises outlier detection and data repair. Systematic errors result from nearly deterministic transformations that occur repeatedly in the data, e.g. specific image pixels being set to default values or watermarks. Consequently, models with enough capacity easily overfit to these errors, making detection and repair difficult. Seeing as a systematic outlier is a combination of patterns of a clean instance and systematic error patterns, our main insight is that inliers can be modelled by a smaller representation (subspace) in a model than outliers. By exploiting this, we propose \emph{Clean Subspace Variational Autoencoder (CLSVAE)}, a novel semi-supervised model for detection and automated repair of systematic errors. The main idea is to partition the latent space and model inlier and outlier patterns separately. CLSVAE is effective with much less labelled data compared to previous related models, often with less than 2\% of the data. We provide experiments using three image datasets in scenarios with different levels of corruption and labelled set sizes, comparing to relevant baselines. CLSVAE provides superior repairs without human intervention, e.g. with just 0.25\% of labelled data we see a relative error decrease of 58\% compared to the closest baseline.
Reject
This is a borderline paper, with two marginally-above and one marginally-below acceptance recommendations. While the authors provided valid responses to some of the criticism, I still find some of the motivation and assumptions insufficiently clear, theoretical and practical issues are mixed, and the validation on only synthetic data raises practical questions.
test
[ "Jdwfmrpje6", "3lfvSsPItO", "7nSuadSfLS", "Zahoy24dG2r", "3HMB8u6iDmu", "PkNV1IZvYEM", "PYTbBm3GiEO", "r1yR0hZs48u", "FGlWSiI_Ylo", "trw1laDkpVv", "p971XlEzvwM" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper presents a neural network model based on Variational Autoencoders (VAE) that learn an implicit representation separating outliers with systematic \"errors\" from inliers using a small labelled subset of the training data set (trusted set).\nMore specifically, clean data and recurring systematic errors a...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2 ]
[ "iclr_2022_kHNKTO2sYH", "Zahoy24dG2r", "iclr_2022_kHNKTO2sYH", "trw1laDkpVv", "Jdwfmrpje6", "Jdwfmrpje6", "Jdwfmrpje6", "p971XlEzvwM", "iclr_2022_kHNKTO2sYH", "iclr_2022_kHNKTO2sYH", "iclr_2022_kHNKTO2sYH" ]
iclr_2022_R-I5CUDOAp7
STORM: Sketch Toward Online Risk Minimization
Empirical risk minimization is perhaps the most influential idea in statistical learning, with applications to nearly all scientific and technical domains in the form of regression and classification models. The growing concerns about the high energy cost of training and the increased prevalence of massive streaming datasets have led many ML practitioners to look for approximate ERM models that can achieve low memory and latency costs for training. To this end, we propose STORM, an online sketching-based method for empirical risk minimization. STORM compresses a data stream into a tiny array of integer counters. This sketch is sufficient to estimate a variety of surrogate losses over the original dataset. We provide rigorous theoretical analysis and show that STORM can estimate a carefully chosen surrogate loss for regularized least-squares regression and a margin loss for classification. We perform an exhaustive experimental comparison for regression and classification training on real-world datasets, achieving an approximate solution whose size is even smaller than a single data sample.
Reject
The paper proposed a sketching algorithm for empirical risk minimization (ERM) for linear regression and classification. The technique is based on LSH with non-standard hash functions. The reviews indicate that the paper is well written and easy to follow. However, there are several concerns raised regarding its quality. A major one regards the novelty of the paper. MTPW: “The technique novelty is limited since previous work (Coleman & Shrivastava, 2020) has used LSH to approximate kernel density estimation on streaming setting.”, 7LQM: “My review of the theoretical results and data structure design is that the results are believable and seem correct, but lack technical novelty.” “Other than using non-standard hashing functions, what distinguishes the STORM sketch from the RACE sketch?” An additional concern is the weak experimental evidence. There seems to be a need for more thorough experiments isolating different components rather than the system as a whole, and in addition the bottom line results provide only a slight lift over a naive baseline (7LQM: “The experiments suggest that using the STORM estimator is only slightly better than returning the mean of your data.”). Whether it is the case that the techniques should be improved or that these concerns could be addressed by improving the presentation of the paper, the conclusion is that the paper in its current form is not ready to be published.
val
[ "Za4f5e7IXnZ", "YASGf3hS72y", "DUB_YlVVOe", "5RFAl_IXNdh", "NjrzizrW1VQ", "oD1tTbdJ6VU", "AByNuVWmyaI", "zAQGcSBJEfM", "lwI3mSWVyf1", "TC45aTgqfIO" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The discussion period is ending today and the authors have not responded to my many questions about the experiments, so my score remains unchanged. It's been 6 days since I followed up, so they had time to respond.\n\nThis paper should clearly be rejected. My score is unchanged.", " Thank you for the detailed a...
[ -1, -1, -1, -1, -1, -1, 5, 5, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 3, 4 ]
[ "DUB_YlVVOe", "NjrzizrW1VQ", "5RFAl_IXNdh", "lwI3mSWVyf1", "AByNuVWmyaI", "zAQGcSBJEfM", "iclr_2022_R-I5CUDOAp7", "iclr_2022_R-I5CUDOAp7", "iclr_2022_R-I5CUDOAp7", "iclr_2022_R-I5CUDOAp7" ]
iclr_2022_xtZXWpXVbiK
Flow-based Recurrent Belief State Learning for POMDPs
Partially Observable Markov Decision Process (POMDP) provides a principled and generic framework to model real-world sequential decision making processes, yet remains unsolved, especially for high-dimensional continuous spaces and unknown models. The main challenge lies in how to accurately obtain the belief state, which is the probability distribution over the unobservable environment states given historical information. Accurately calculating this belief state is a precondition for obtaining an optimal policy of POMDPs. Recent advances in deep learning techniques show great potential to learn good belief states, but they assume the belief states follow certain types of simple distributions such as diagonal Gaussians, which severely restricts their ability to precisely capture the real belief states. In this paper, we introduce the \textbf{F}l\textbf{O}w-based \textbf{R}ecurrent \textbf{BE}lief \textbf{S}tate model (FORBES), which incorporates normalizing flows into the variational inference to learn general continuous belief states for POMDPs. Furthermore, we show that the learned belief states can be plugged into downstream RL algorithms to improve performance. In experiments, we show that our method successfully captures the complex belief states that enable multi-modal predictions as well as high quality reconstructions, and results on challenging visual-motor control tasks show that our method achieves superior performance and sample efficiency.
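The normalizing-flow idea at the core of this abstract can be illustrated with a single planar-flow step (a generic, hypothetical sketch, not the FORBES architecture itself): the flow transforms a sample from a simple base distribution and tracks the log-determinant correction that variational inference adds to the base log-density.

```python
import math

def planar_flow_step(z, u, w, b):
    # f(z) = z + u * tanh(w . z + b); returns the transformed point and
    # log|det Jacobian|, the correction a flow-based variational
    # posterior adds to the base distribution's log-density.
    # (Invertibility requires u . w >= -1; assumed here.)
    a = math.tanh(sum(wi * zi for wi, zi in zip(w, z)) + b)
    f_z = [zi + ui * a for zi, ui in zip(z, u)]
    psi = [(1.0 - a * a) * wi for wi in w]  # gradient of tanh(w.z+b) wrt z
    log_det = math.log(abs(1.0 + sum(ui * pi for ui, pi in zip(u, psi))))
    return f_z, log_det
```

Stacking several such steps can turn a diagonal-Gaussian base into a multi-modal posterior, which is the flexibility the abstract argues diagonal Gaussians alone lack.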
Reject
Summary: this is a difficult paper to meta-review, since it contains some insightful ideas and interesting experiments, while it also unfortunately contains omissions, confusions, and places where clarity is lacking (see below). One consistent theme is that the paper is too dismissive of prior work; the exposition is not as clear as it should be about what aspects of FORBES are present in previous papers, it uses too broad a brush to describe prior methods (resulting in too-general statements about what these methods can't do), and it skips important chunks of the extensive literature on POMDP belief representation and tracking. As a result, the paper doesn’t do a good job of concisely and accurately stating its contribution; there is still reasonable concern about how significant this contribution is. On the other hand, the experimental results for FORBES are interesting; the new method seems to represent a better combination of techniques than at least many existing works, at least to the resolution of the experiments’ statistical power. So the end question is whether interesting experimental results and a new combination of techniques are enough to outweigh the problems outlined above. In the end we believe that the correct outcome is rejection; but we have every expectation that a future version of the paper will resolve the difficulties outlined here and will appear in a future conference.

A brief note about the discussion: the original scores for this paper were lower. While some reviewers raised their score later in the discussion, a thorough reading of the discussion and the revised paper indicates that a substantial fraction of the issues leading to the lower scores still remain.

More details: There is a lot of prior work on tracking belief states, which should be cited more thoroughly. The paper's intro makes it sound like diagonal Gaussians were the only previous alternative. 
At least, the intro should cite older work on MCMC methods like particle filters (e.g., Thrun’s book Probabilistic Robotics, or Arnaud Doucet’s work), and prior deep-net papers that attempt non-Gaussian representations, even if these don’t perform as well as hoped (see below for examples). It is also important to compare to RKHS representations of beliefs, such as Nishiyama, Boularias, Gretton, Fukumizu 2012; these handle multimodality, and can behave similarly to deep nets if they use the neural tangent kernel. Accurately comparing to prior work is one of the most important functions of a paper, so it doesn’t make sense to be unfairly critical of prior work or to skip it. The paper is also unclear about the effects of Gaussian distributions at different places in a variational approximation. Because of this lack of clarity, the criticisms it levels at previous variational methods seem to be true only of some of them. In particular, the introduction should distinguish between two uses of Gaussian approximations: first for the belief itself, and second for the distribution of observations given a belief. Some prior works make only one of these approximations. For example, a non-Gaussian distribution used as a belief state can predict multi-modal future behaviors, even if we approximate observations under a given belief as Gaussian. The introduction should also distinguish between two common places that a Gaussian could enter into a variational approximation: at the input or at the output of a network. A Gaussian latent at the input of a variational network (even if it has diagonal covariance) can result in a highly non-Gaussian output distribution, while Gaussian noise added at the end will (if it is the only noise) lead to a Gaussian output. Again, some of the statements in the intro apply only to the latter use of a Gaussian, while some prior work focuses on the former use. 
There is an important conceptual confusion in the paper about what it means to have a multimodal belief state: the paper presents the true belief as an inherent property of an environment, while in fact it is a property of an environment *model*. So, there can be two different equally accurate models of the same environment which differ in the belief representation; a simple example would be to use either a continuous state whose components are joint angles, or a discrete state obtained by finely discretizing this continuous one. In the first case the belief would be a distribution over the continuous space, while in the second it would be a categorical distribution (a point in a simplex). A consequence of such a difference is that beliefs can be multimodal in one representation and not another. The importance of this confusion is that, since we are asking our network to learn a belief state, the learning process could potentially favor representations that lead to unimodal beliefs — so it’s not clear theoretically that forcing a unimodal belief representation is necessarily a disadvantage. The paper presents the situation as if the disadvantage is forced by theory, while instead the argument should be based on experiments: e.g., one could try to show that unimodal representations, even if given a higher latent dimension to work with, aren’t empirically able to capture the same information.

Some interesting prior deep-net POMDP papers that might need better discussion:
* Han, Doya, and Tani ICLR 2020 (which isn’t cited here) puts the Gaussian latent variable as an input to the network for predicting beliefs (eq 2), resulting in a possibly highly non-Gaussian output representing the belief.
* Tschiatschek et al., 2018 (also not cited) uses a Gaussian *mixture* as the variational distribution to approximate beliefs, again allowing multimodality.
* Igl et al. 2018 (which is cited only late in the current paper, and basically dismissed) uses a deep version of particle filters to allow non-Gaussian distributions for both beliefs and observations.
* More work that is potentially relevant but not adequately compared (even if briefly cited): Gregor et al. (2019), DreamerV2, Ha & Schmidhuber’s World Models. Each of these makes at least some choices to try to handle at least some kinds of multimodality, so a clear explanation of differences that avoids the confusions mentioned above would be very helpful.
* In general, the results of the search “variational encoder POMDP” seem to include a number of papers not cited in the current paper; another useful search is “normalizing flow POMDP”.

Finally, in the experiments section, the paper needs to correctly report the reliability of its conclusions. In some places (e.g., Fig. 5) there’s no mention of reliability or repeatability of conclusions; the paper just says that its evidence “support[s] the claim that FORBES can better capture the complex belief states”. In other places (e.g., Fig. 6, 7), the paper displays uncertainty representations based on only a few replications of an experiment (e.g., 3 seeds for Fig. 6, or 5 seeds when a reviewer requested extra experiments). The corresponding uncertainty estimates almost certainly are strongly biased too low (too certain); e.g., three runs would have less than a 50% chance of even seeing failure modes that happen with probability as high as 20% (0.8^3 = 0.512 > 0.5), meaning that the estimated standard deviation could be almost arbitrarily badly biased downward. To be clear, experiments with few replicates can still be highly useful and informative, and it’s true that some experiments are too expensive to run many times; but in such cases the paper should add appropriate caveats to its conclusions. 
For example, instead of reporting the sample standard deviation based on a normal model, the paper could report a confidence interval based on a more robust model or test, such as a Wilcoxon test. (To illustrate the difference, confidence intervals at typical significance levels like p=0.05 would be vacuous (infinitely wide) under Wilcoxon with 3 seeds, but much-weaker p-values would still yield non-vacuous intervals.)

A few smaller questions: The authors added a nice ablation study to compare to Dreamer; this is great to see. It would be good to discuss the connection to earlier methods such as PlaNet and Dreamer at places where the current method is similar or different (e.g., different from Dreamer in the belief state representation in sec 2.2, but similar in the RL framework in section 3.2). These comparisons would aid in the reader’s understanding of what is new in FORBES. An unusual feature of FORBES is that the variational approximation to the belief at time t+1 is not a function of the belief at time t. Instead the belief inference network q_{\psi,\theta} takes as input the entire past trajectory, uses convolution and recurrence to reduce the variable-length input to fixed dimension, and passes this fixed-dimension representation through a normalizing flow mapping. It would be interesting to discuss the reason for this design decision. In particular, it seems like it would inhibit tracking — i.e., it could be hard to propagate information from one belief distribution to the immediate next one, particularly if there are a few unlikely observations scattered through a trajectory. A minor point for clarity: in Fig 1 it's unclear what distributions the white and gray triangles refer to. They don't seem to correspond to a natural belief state: instead maybe they incorporate three simultaneous observations from the same starting belief? 
Correctly intersecting beliefs is an important issue though, so at a high level the point that the figure is trying to make fits well. Another point for clarity: “there always exists a diffeomorphism that can turn one well-behaved distribution into another”: this is true for some definition of “well-behaved”, but it’s misleading to say it this way. E.g., it is not true if the distributions in question can have atoms, or differ in dimension or topology; these exceptions are unfortunately important cases that do come up in practice.
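The two statistical points in the meta-review above — the low chance of observing a rare failure mode with only 3 seeds, and the vacuousness of a p=0.05 Wilcoxon interval at n=3 — can be checked directly. The numbers below follow the review's own example, not the paper's data:

```python
# Probability of seeing, at least once across 3 seeds, a failure mode
# that occurs in 20% of runs: 1 - 0.8^3, which is below 50%.
p_fail, n_seeds = 0.20, 3
p_seen = 1.0 - (1.0 - p_fail) ** n_seeds  # ~0.488 < 0.5

# Smallest achievable two-sided p-value of a Wilcoxon signed-rank test
# with n pairs: the most extreme rank assignment has probability 1/2^n
# per tail, so with n = 3 no outcome can reach p = 0.05, and a 95%
# confidence interval obtained by inverting the test is vacuous.
min_two_sided_p = 2.0 * (0.5 ** n_seeds)  # 0.25 > 0.05
```

This is exactly why the review suggests that, with 3 seeds, robust confidence intervals at p=0.05 would be infinitely wide while weaker p-values would still yield non-vacuous intervals.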
train
[ "JwAr2g9If0I", "x0knH3CtBOA", "IRDHje6i9ln", "6Puer7jl_kX" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes using normalizing flow based belief approximation for continous-state POMDPs and learning the belief approximation via variational inference. In addition, the paper proposes learning an actor-critic reinformcement learning algorithm based on the proposed belief representation.\n Accurate belief...
[ 6, 6, 8, 8 ]
[ 4, 4, 4, 2 ]
[ "iclr_2022_xtZXWpXVbiK", "iclr_2022_xtZXWpXVbiK", "iclr_2022_xtZXWpXVbiK", "iclr_2022_xtZXWpXVbiK" ]
iclr_2022_DBOibe1ISzB
SiT: Simulation Transformer for Particle-based Physics Simulation
Most existing particle-based simulators adopt graph convolutional networks (GCNs) to model the underlying physics of particles. However, they force particles to interact with all neighbors without selection, and they fall short in capturing material semantics for different particles, leading to unsatisfactory performance, especially in generalization. This paper proposes Simulation Transformer (SiT) to simulate particle dynamics with more careful modeling of particle states, interactions, and their intrinsic properties. Specifically, besides the particle tokens, SiT generates interaction tokens and selectively focuses on essential interactions by allowing both tokens to attend to each other. In addition, SiT learns material-aware representations via learnable abstract tokens, which participate in the attention mechanism and further boost the generalization capability. We evaluate our model on diverse environments, including fluid, rigid, and deformable objects, which cover systems of different complexity and materials. Without bells and whistles, SiT shows strong abilities to simulate particles of different materials and achieves superior performance and generalization across these environments with fewer parameters than existing methods. Code and models will be released.
Reject
The submission received split reviews: two reviewers recommended weak accepts, and the other two weak rejects. The AC went through the reviews, responses, and discussions carefully. The AC agrees that this paper is well-written and has demonstrated the possibility of using transformers for particle-based physical simulation. The AC also believes that the authors have addressed the concerns of reviewer dYcg, even though the reviewer didn't engage in the discussions. The contributions, however, are not the most exciting, and none of the reviewers would like to champion the submission. Further, the AC agrees with the knowledgeable and responsible reviewer ZsKn that the presentation and experiments can be better positioned to highlight the key contribution. As reviewer ZsKn has summarized, it's recommended that "the authors took the approach of integrating the different parts of the newly proposed layer into existing architectures (possibly including non-simulation settings), and try to understand better that way how the new layer may help in a more apples-to-apples comparison." The recommendation is reject, and the authors are encouraged to revise the paper for the next venue.
train
[ "oHx_TUSmJz", "shxwcYAnsfM", "SzA_cAhlOiF", "s6rnNSK6xal", "JwVwbQgZzyw", "K_46m4BsYXM", "suizjIyYLlF", "EJAngcGkbf5", "ZUWOeBzN4SN", "Af1-J21oaOh", "y9I43AcaoGv", "9ZjPvccltig", "trznJEIdEOB", "Z2YU6GURHfC", "0F30vrle1Ph", "qWwXamdX5zM", "eeE8o79Cux-", "xlDmwsXmPc", "ZCbRr-Skwz"...
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_...
[ " ## Q1: SiT with dummy nodes performs worse on BoxBath\nWe list the MSEs for rigid and fluid separately on BoxBath.\n\n| Methods\\BoxBath | Rigid MSEs | Fluid MSEs |\n| --------------------- | ---------------- | ------------------- |\n| SiT | $1.06\\pm0.56$ | $1.79\\pm0.07...
[ -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 3 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "JwVwbQgZzyw", "s6rnNSK6xal", "iclr_2022_DBOibe1ISzB", "EJAngcGkbf5", "K_46m4BsYXM", "ZUWOeBzN4SN", "Af1-J21oaOh", "y9I43AcaoGv", "Z2YU6GURHfC", "trznJEIdEOB", "9ZjPvccltig", "SzA_cAhlOiF", "SzA_cAhlOiF", "SzA_cAhlOiF", "SzA_cAhlOiF", "r9DBSyCsVn", "r9DBSyCsVn", "x7Rm3cDfiT_", "o...
iclr_2022_HRL6el2SBQ
Intra-class Mixup for Out-of-Distribution Detection
Deep neural networks have found widespread adoption in solving image recognition and natural language processing tasks. However, they make confident mispredictions when presented with data that does not belong to the training distribution, i.e. out-of-distribution (OoD) samples. Inter-class mixup has been shown to improve model calibration, aiding OoD detection. However, we show that both empirical risk minimization and inter-class mixup create a large angular spread in the latent representation. This reduces the separability of in-distribution data from OoD data. In this paper we propose intra-class mixup supplemented with angular margin to improve OoD detection. Angular margin is the angle between the decision boundary normal and the sample representation. We show that intra-class mixup forces the network to learn representations with low angular spread in the latent space. This improves the separability of OoD from in-distribution examples. Our approach, when applied to various existing OoD detection techniques, shows an improvement of 4.68% and 6.38% in AUROC performance over empirical risk minimization and inter-class mixup, respectively. Further, our approach, aided with angular margin, improves AUROC performance by 7.36% and 9.10% over empirical risk minimization and inter-class mixup, respectively.
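A minimal sketch of what intra-class mixup could look like, based only on the abstract's description (hypothetical code, not the authors' implementation): each sample is interpolated with a random partner from the same class, so labels stay hard and the mixed inputs stay within each class's convex hull, pulling the class's spread inward.

```python
import random

def intra_class_mixup(xs, ys, alpha=0.2, rng=random):
    # Draw one mixing coefficient, then mix every sample with a random
    # partner that shares its label; labels are returned unchanged.
    lam = rng.betavariate(alpha, alpha)
    mixed = []
    for x, y in zip(xs, ys):
        partner = rng.choice([xp for xp, yp in zip(xs, ys) if yp == y])
        mixed.append([lam * a + (1.0 - lam) * b for a, b in zip(x, partner)])
    return mixed, list(ys)
```

Unlike inter-class mixup, no label interpolation is needed here, since both endpoints of each interpolation share the same class.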
Reject
The paper proposes to use intra-class mixup supplemented with angular margin to improve OOD detection.

Strengths:
+ Simple idea
+ Experiments on multiple datasets (although mostly focused on image benchmarks)

Weaknesses:
- Justification for the idea could be improved. It'd be nice to understand when we expect this to (not) work.
- Differences from prior work "Angle-based outlier detection in high-dimensional data" could be better explained.

While the paper has some interesting contributions, the reviewers and I feel that the current version falls short of the acceptance threshold. I encourage the authors to revise and resubmit to a different venue.
train
[ "Ly5Iiv94nRm", "pHbdQ7sY93F", "BKgcocl7sTC", "4ez3KxM3EtF", "cKTA9yHLQlP", "yX4W9ZHlB2", "1ZbkwVX-aUj", "_OBvZT5AVw7", "yGTVoJKkaEG", "egeltn0cMO", "RFnsgvJNs74", "jOC-AjjE8rY" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I still consider that the comparisons to ERM and Intra-class mixup are not necessarily fair. Unfortunately, the authors did not comment much on this point in the rebuttal.\n\nDespite this, most of the questions that I had raised have been addressed. Thanks to the authors for the detailed response. I will upgrade ...
[ -1, 5, -1, -1, -1, -1, -1, -1, -1, 6, 8, 3 ]
[ -1, 5, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "1ZbkwVX-aUj", "iclr_2022_HRL6el2SBQ", "cKTA9yHLQlP", "yGTVoJKkaEG", "yX4W9ZHlB2", "jOC-AjjE8rY", "pHbdQ7sY93F", "RFnsgvJNs74", "egeltn0cMO", "iclr_2022_HRL6el2SBQ", "iclr_2022_HRL6el2SBQ", "iclr_2022_HRL6el2SBQ" ]
iclr_2022_lf0W6tcWmh-
Towards understanding how momentum improves generalization in deep learning
Stochastic gradient descent (SGD) with momentum is widely used for training modern deep learning architectures. While it is well understood that using momentum can lead to a faster convergence rate in various settings, it has also been observed that momentum yields higher generalization. Prior work argues that momentum stabilizes the SGD noise during training and that this leads to higher generalization. In this paper, we take the opposite view to this result and first empirically show that gradient descent with momentum (GD+M) significantly improves generalization compared to gradient descent (GD) in many deep learning tasks. From this observation, we formally study how momentum improves generalization in deep learning. We devise a binary classification setting where a two-layer (over-parameterized) convolutional neural network trained with GD+M provably generalizes better than the same network trained with vanilla GD, when both algorithms start from the same random initialization. The key insight in our analysis is that momentum is beneficial in datasets where the examples share some features but differ in their margin. Contrary to the GD model, which memorizes the small-margin data, GD+M can still learn the features in these data thanks to its historical gradients. We also empirically verify this learning process of momentum in real-world settings.
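For concreteness, the GD+M update the abstract refers to is classical heavy-ball momentum, which can be sketched in a few lines (a generic illustration; the paper's two-layer CNN setting is not reproduced here):

```python
def gd_momentum_step(w, v, grad, lr=0.1, beta=0.9):
    # Heavy-ball GD+M: the velocity v accumulates historical gradients,
    # which is the mechanism the abstract credits for continuing to
    # learn features of small-margin examples instead of memorizing them.
    v = beta * v + grad(w)
    w = w - lr * v
    return w, v
```

With beta = 0 this reduces to vanilla GD, the baseline the paper compares against; the only difference between the two algorithms is the accumulated velocity term.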
Reject
All but one of the reviewers recommended rejecting this submission. The reviewer recommending acceptance (PBhC) was not confident in their assessment and was unwilling to champion the paper during the discussion phase, making it very difficult for me to unilaterally overrule the de facto reviewer consensus and recommend accepting the submission. Although some of the reviewers recommending rejecting the submission made relatively weak arguments, others raised more compelling points in favor of rejecting the paper. The discussion and reviews convinced me that the preponderance of the evidence indicated that I should recommend rejecting on the merits of the case anyway. Ultimately, I am recommending rejecting this submission, primarily because I do not believe the empirical contributions are strong enough, nor are they polished enough. Holistically, it is hard to see what impact this work can have without improved empirical evidence, given how little guidance the theoretical results give to practitioners. That said, I hope the authors iterate some more on the experiments and refocus the narrative a bit in that direction. The paper exhibits a problem where gradient descent with momentum provably generalizes better than gradient descent without momentum. Given that momentum does not universally improve the out of sample error of neural networks trained with gradient descent, we should strongly suspect that there also exist problems where adding momentum to gradient descent degrades out of sample performance. Therefore, what actionable insights do we have? The paper suggests that perhaps the details of the problem (constructed in the submission) where momentum helps give us the ability to predict when momentum will be helpful in practice, but we would need to see several more successful predictions of this form on typical datasets from the literature or other real (non-synthetic) datasets. 
Furthermore, have the literature and this submission even demonstrated convincingly enough that momentum improving out of sample error for the same training loss is a common occurrence? And has this submission even made a convincing empirical case on CIFAR10, let alone a larger selection of problems? The latter question would be sufficient to reject the submission, but resolving it favorably would not, in my view, be sufficient to accept the submission without also more evidence for the prevalence of this momentum generalization phenomenon or without demonstrating successful predictions about relative generalization performance on more problems. Has the literature established that gradient descent or minibatch stochastic gradient descent often generalizes better when using momentum? The paper says "While these works shed light on how momentum acts on neural network training, they fail to capture the generalization improvement induced by momentum (Sutskever et al., 2013)." but Sutskever et al. to my recollection only measures training set loss and never properly considers questions of generalization. Certainly, in many places in the literature we see momentum get better validation error, but rarely do we get information on whether it does so for the same training loss, and a priori we should suspect optimization speed is the primary effect at play. The paper also claims "Although it is well accepted that Momentum improve generalization in deep learning...", but the submission does not provide enough evidence that this is well accepted. The results of Leclerc & Madry (2020) are equivocal and may well be confounded by batch norm, but would need to be investigated further. So no, at least with the citations in this submission, it is far from well-established that momentum often improves generalization performance, i.e. that momentum results in better validation loss for the same training loss. 
Of course it won't always do this, but we should observe it regularly in the wild (the more dramatically the better) for this to be interesting. Ok, but what about the experiments on CIFAR10? These experiments are hard to interpret because they seem to compare misclassification error (zero one loss) with the actual optimization objective of cross entropy error. These issues may be resolvable, but in their current form leave open too many loose ends. Just because two training runs both get zero classification errors on the training set does not mean that they do not differ in the log loss and even a small difference in log loss might explain a large difference in out of sample classification error. Although we often use these quantities as proxies for each other, that isn't quite safe and a better way to conduct this measurement would be to select an iterate of GD without momentum that has an almost identical (but slightly better) training cross entropy loss than a specific iterate of GD with momentum and then compare the cross entropy loss on the validation set, repeating for many different runs and iterates. In the final analysis, stochastic gradient descent without momentum rarely gets used in practice and full gradient descent even more rarely, so this submission needs to do a better job of making a case for the impact it will have on researchers in this field. Perhaps a stronger case can be made, but I do not quite find the current version sufficiently compelling.
train
[ "pYwKtag5Xyq", "Rz7oUEAazgs", "gi5jwn8xydj", "Uqg8grAH-o", "nxMDBmuoMfA", "g4clcNeNcOR", "3NUmEGWNDZ", "CkBvRayo6h", "eDihE5QsVfv" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like thank the reviewer 6raf for all the comments and pointers to the typos in the proof. We acknowledge that during the submission there were some not-so-well-stated statements in the technical lemmas. During the revision period, we revised parts of the proof and believe that the proof is now in good s...
[ -1, -1, -1, -1, 3, 5, 6, 5, 5 ]
[ -1, -1, -1, -1, 4, 4, 2, 4, 4 ]
[ "eDihE5QsVfv", "CkBvRayo6h", "g4clcNeNcOR", "nxMDBmuoMfA", "iclr_2022_lf0W6tcWmh-", "iclr_2022_lf0W6tcWmh-", "iclr_2022_lf0W6tcWmh-", "iclr_2022_lf0W6tcWmh-", "iclr_2022_lf0W6tcWmh-" ]
iclr_2022_0Mo_5PkLpwc
Robust Cross-Modal Semi-supervised Few Shot Learning
Semi-supervised learning has been successfully applied to few-shot learning (FSL) due to its capability of leveraging the information of limited labeled data and massive unlabeled data. However, in many realistic applications, the query and support sets provided for FSL are potentially noisy or unreadable, where the noise exists in both corrupted labels and outliers. Motivated by that, we propose robust cross-modal semi-supervised few-shot learning (RCFSL) based on Bayesian deep learning. By placing an uncertainty prior on top of the parameters of an infinite Gaussian mixture model for noisy input, multi-modality information from image and text data is integrated into a robust heterogeneous variational autoencoder. Subsequently, a robust divergence measure is employed to further enhance the robustness, where a novel variational lower bound is derived and optimized to infer the network parameters. Finally, a robust semi-supervised generative adversarial network is employed to generate robust features to compensate for data sparsity in few-shot learning, and a joint optimization is applied for training and inference. Our approach is more parameter-efficient, scalable and adaptable compared to previous approaches. Superior performance over the state of the art is demonstrated on multiple benchmark multi-modal datasets given the complicated noise in semi-supervised few-shot learning.
Reject
This paper aims to address the problem of cross-modal semi-supervised few-shot learning with noisy data and proposes a robust cross-modal semi-supervised few-shot learning (RCFSL) method based on Bayesian deep learning. The approach combines several existing techniques in a non-trivial way to tackle a new problem. Empirical results demonstrate the effectiveness of the proposed method to some extent. While the proposed integrated, complex approach seems novel in this unique setting, the reviewers raised some major concerns. One concern is the lack of clear justification of the technical contributions of the proposed methodology in this complex setting. In particular, the paper lacks comprehensive ablation studies for analyzing and understanding the source of gains of the proposed complex method, and the baselines in the experiments also do not look strong enough. In addition, many aspects of the paper's writing and presentation are not satisfactory (e.g., the math formulation in Section 2 is densely presented, making it difficult to follow). Overall, this is a borderline case: the paper does contribute a new method for the interesting cross-modal semi-supervised few-shot learning task, but some major concerns about the weaknesses remain in its current form. Therefore, it cannot be recommended for acceptance. Nonetheless, I hope the authors can improve the paper by fully addressing these issues and hope to see it accepted in the near future.
train
[ "Vtb52jMG3uU", "010stAA5bQ-", "kdocI9HFd2w", "Xt263puBf0H", "QzwnfuuyGqF", "v2xYyJ9D82D", "hWqBOjZsZr", "5ANJQJmILty", "Sobh_iAPMeT", "wN0y-TF11SD", "s_g2PBdysGd", "HAvWeUu1zZY", "oRmoJAIn43y", "zwjdf-RHib3", "eh4PMQYWHvs" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " \"The detailed response is very much appreciated, which clears up some of my doubts. However, I still believe it would be better if the proposed method can be evaluated on each individual low data problem, where it can be compared to the sota in each field. Therefore, I would remain my original score.\"\n\nWe tha...
[ -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6 ]
[ -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 2 ]
[ "010stAA5bQ-", "Sobh_iAPMeT", "5ANJQJmILty", "iclr_2022_0Mo_5PkLpwc", "iclr_2022_0Mo_5PkLpwc", "zwjdf-RHib3", "eh4PMQYWHvs", "oRmoJAIn43y", "v2xYyJ9D82D", "hWqBOjZsZr", "HAvWeUu1zZY", "Xt263puBf0H", "HAvWeUu1zZY", "iclr_2022_0Mo_5PkLpwc", "iclr_2022_0Mo_5PkLpwc" ]
iclr_2022_Jvoe8JCGvy
Online MAP Inference and Learning for Nonsymmetric Determinantal Point Processes
In this paper, we introduce the online and streaming MAP inference and learning problems for Non-symmetric Determinantal Point Processes (NDPPs), where data points arrive in an arbitrary order and the algorithms are constrained to use a single pass over the data as well as sub-linear memory. The online setting has an additional requirement of maintaining a valid solution at any point in time. For solving these new problems, we propose algorithms with theoretical guarantees, evaluate them on several real-world datasets, and show that they give comparable performance to state-of-the-art offline algorithms that store the entire data in memory and take multiple passes over it.
Reject
This paper studies online MAP inference and learning for nonsymmetric determinantal point processes (NDPPs). The main contribution is an online greedy algorithm. Surprisingly, they show that their algorithm outperforms various offline algorithms on real-world datasets. That said, the main concern was the novelty with respect to the prior work of Bhaskara et al., who gave an online approximation algorithm for MAP inference in DPPs. To compare the two works: (1) Bhaskara et al. give an algorithm for DPPs, and NDPPs are more complex; (2) Bhaskara et al. give provable guarantees on the approximation ratio, but no such guarantees are known for NDPPs; (3) finally, some of the key ingredients in the online algorithm for NDPPs, like the stash, were already present in the work of Bhaskara et al. Overall, the reviewers felt that this submission would be improved by a clearer discussion of the contributions over prior work.
train
[ "FCEukvCYRT2", "zmwbuOlwn-T", "LZXh-VE3Lsd", "jwDpbmAtvPf", "kM4RTgdRP1H", "DqGeilOEveY", "mJEE1mB3xUA", "2nL-DUN1TxN", "-EmM2xm8cfz", "sKfvH-miYj3", "tRjOnHfVDHz", "fevVCOB6bEr" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the additional comments. To clarify what I mean regarding an approximation guarantee for the online learning algorithm: here I am *not* referring to a guarantee that optimization of the non-convex NDPP objective function converges to a global maximum, which is a much more difficult problem; such guara...
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 3, 4 ]
[ "zmwbuOlwn-T", "LZXh-VE3Lsd", "jwDpbmAtvPf", "fevVCOB6bEr", "sKfvH-miYj3", "tRjOnHfVDHz", "-EmM2xm8cfz", "iclr_2022_Jvoe8JCGvy", "iclr_2022_Jvoe8JCGvy", "iclr_2022_Jvoe8JCGvy", "iclr_2022_Jvoe8JCGvy", "iclr_2022_Jvoe8JCGvy" ]
iclr_2022_z7DAilcTx7
A Distributional Robustness Perspective on Adversarial Training with the $\infty$-Wasserstein Distance
While ML tools are becoming increasingly used in industrial applications, adversarial examples remain a critical flaw of neural networks. These imperceptible perturbations of natural inputs are, on average, misclassified by most of the state-of-the-art classifiers. By slightly modifying each data point, the attacker creates a new distribution of inputs for the classifier. In this work, we consider the adversarial examples distribution as a tiny shift of the original distribution. We thus propose to address the problem of adversarial training (AT) within the framework of distributional robustness optimization (DRO). We show a formal connection between our formulation and optimal transport by relaxing AT into a DRO problem with an $\infty$-Wasserstein constraint. This connection motivates using an entropic regularizer, a standard tool in optimal transport, for our problem. We then prove the existence and uniqueness of an optimal regularized distribution of adversarial examples against a class of classifiers (e.g., a given architecture), which we eventually use to robustly train a classifier. Using these theoretical insights, we propose to use Langevin Monte Carlo to sample from this optimal distribution of adversarial examples and train robust classifiers, outperforming the standard baseline and providing speed-ups of $\times 200$ for MNIST and $\times 8$ for CIFAR-10, respectively.
Reject
In this paper, the authors study adversarial examples from a distributional robustness point of view. Reviewers had several concerns about the work and all agreed that the paper is below the acceptance threshold. In particular, they mentioned that the presentation and writing of the paper need to be improved and that the results (especially the ones presented in Section 2) are neither significant nor novel contributions. Given all this, I think the paper needs more work before being accepted.
train
[ "QR8bEUCqndg", "Fq1F1ok-Lb", "okKK5e5_YTR", "YBlLYFVFvk", "8OUga38GQ8f", "GvgeMT05Fd", "54Z-Semd3gu", "1VilRYQfnfq", "ELspPfjEu6L", "V412FjqO5b", "xbuas34ku1t", "C4TgBxoTl8G", "ZR8tRnHeVFH", "NTRQRqwIn6" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your comments. We will take them into account.", " Thanks for your answer.\n1) $\\Pi(p_{data}, p_{adv})$ is the space of transport plans or couplings, but you denoted as $\\Pi(p_{data} \\mid p_{adv})$, so it is not consistent. In addition, $P_{conv}$ was not used anywhere.\n2) I cannot see $|z - z...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "Fq1F1ok-Lb", "YBlLYFVFvk", "xbuas34ku1t", "8OUga38GQ8f", "V412FjqO5b", "1VilRYQfnfq", "ZR8tRnHeVFH", "ELspPfjEu6L", "C4TgBxoTl8G", "ZR8tRnHeVFH", "NTRQRqwIn6", "iclr_2022_z7DAilcTx7", "iclr_2022_z7DAilcTx7", "iclr_2022_z7DAilcTx7" ]
iclr_2022_lvM693mon8q
Compressed-VFL: Communication-Efficient Learning with Vertically Partitioned Data
We propose Compressed Vertical Federated Learning (C-VFL) for communication-efficient training on vertically partitioned data. In C-VFL, a server and multiple parties collaboratively train a model on their respective features utilizing several local iterations and sharing compressed intermediate results periodically. Our work provides the first theoretical analysis of the effect message compression has on distributed training over vertically partitioned data. We prove convergence of non-convex objectives to a fixed point at a rate of $O(\frac{1}{\sqrt{T}})$ when the compression error is bounded over the course of training. We provide specific requirements for convergence with common compression techniques, such as quantization and top-$k$ sparsification. Finally, we experimentally show compression can reduce communication by over $90\%$ without a significant decrease in accuracy over VFL without compression.
Reject
Reviewers have all agreed that this paper studied an important problem and made valuable contributions. The goal is to reduce the communication costs of federated learning where the data are stored in different parties based on subsets of features. The paper developed theory showing guaranteed convergence and provided empirical evaluations to validate it. On the other hand, compared with the existing literature, reviewers feel that the novelty of this submission appears limited and the improvements seem incremental. Reviewers appreciate the authors' efforts in conducting detailed rebuttals and providing an improved manuscript. We hope the authors will continue to improve the paper based on the reviews when they prepare their future submission.
val
[ "mm59SZ3IhuM", "vIfh9-X8lsw", "Vo18uc4AQbl", "_FqMLOw_NaQ", "5gNI7Fsa3j", "B0HdieB_eU0", "RFdmgSdZ8o7", "N8b6LR8O4wK", "wknoXSBmRV0", "pmQgKzb8m9", "GuYhBKu4NA" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank all the reviewers again for taking time to review our paper. We have responded to each of your concerns in the comments below and revised our paper accordingly. As the discussion period ends today, could you kindly let us know whether there are further questions after reading our responses and revision? ...
[ -1, -1, -1, -1, -1, -1, -1, 5, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 3, 5, 3 ]
[ "iclr_2022_lvM693mon8q", "GuYhBKu4NA", "pmQgKzb8m9", "5gNI7Fsa3j", "wknoXSBmRV0", "RFdmgSdZ8o7", "N8b6LR8O4wK", "iclr_2022_lvM693mon8q", "iclr_2022_lvM693mon8q", "iclr_2022_lvM693mon8q", "iclr_2022_lvM693mon8q" ]
iclr_2022_t2LJBsPxQM
Scaling-up Diverse Orthogonal Convolutional Networks by a Paraunitary Framework
Enforcing orthogonality in neural networks is an antidote for gradient vanishing/exploding problems, sensitivity to adversarial perturbation, and bounding generalization errors. However, many previous approaches are heuristic, and the orthogonality of convolutional layers is not systematically studied. Some of these designs are not exactly orthogonal, while others only consider standard convolutional layers and propose specific classes of their realizations. We propose a theoretical framework for orthogonal convolutional layers to address this problem, establishing the equivalence between diverse orthogonal convolutional layers in the spatial domain and the paraunitary systems in the spectral domain. Since a complete factorization exists for paraunitary systems, any orthogonal convolution layer can be parameterized as convolutions of spatial filters. Our framework endows high expressive power to various convolutional layers while maintaining their exact orthogonality. Furthermore, our layers are memory and computationally efficient for deep networks compared to previous designs. Our versatile framework, for the first time, enables the study of architecture designs for deep orthogonal networks, such as choices of skip connection, initialization, stride, and dilation. Consequently, we scale up orthogonal networks to deep architectures, including ResNet and ShuffleNet, substantially increasing the performance over their shallower counterparts. Finally, we show how to construct residual flows, a flow-based generative model that requires strict Lipschitzness, using our orthogonal networks.
Reject
This paper proposes a method for parameterizing orthogonal convolutional layers that derives from paraunitary systems in the spectral domain and performs a comparison with other state-of-the-art orthogonalization methods. The paper argues that the approach is more computationally efficient than most previous methods and that the exact orthogonality is important to ensure robustness in some applications. The reviewers had diverging opinions about the paper, with most reviewers appreciating the theoretical grounding and empirical analysis, but with some reviewers finding weaknesses in the clarity, reproducibility, and discussion of prior work. The revisions addressed many, but not all, of the reviewers' criticisms. One point that was highlighted in the discussion is that the method is restricted to separable convolutions. The authors acknowledged this limitation, justifying the expressivity of the method with a comparison to CayleyConv (Trockman & Kolter) and a suggestion that more expressive parameterizations are not necessarily available in 2D. I am not sure this is entirely accurate. In the discussion of related work, the paper briefly mentions dynamical isometry and the prior work of Xiao et al. 2018, who develop a method for initializing orthogonal convolutional layers. What the current paper fails to recognize is that Algorithm 1 of Xiao et al. 2018 actually provides a method for parameterizing non-separable 2D convolutions: simply represent every orthogonal matrix in that algorithm in a standard way, e.g. via the exponential map. While I think there is certainly value in the connection to paraunitary systems, it seems to me that the above approach would yield a simpler and more expressive representation, and is at minimum worth discussing.
Overall, given the mixed reviewer opinions, their lingering concerns, and the existence of relevant prior art that was not discussed in sufficient depth, I believe this paper is not quite suitable for publication at this time.
train
[ "hM45ctXWJ1", "K3RRs-bs31f", "sGvj2YWyPwM", "TylAGARw4yd", "_p4TOQIrTVb", "wFzjSGlzKT", "lraMzmty9A", "7QZNCOzdHy", "qmNtHFSQYzU", "TIS9iLKoxm7", "e3CsD-KgCZ", "Yz02xApXdZt", "gzN42TL3qEj", "SDVMoFvNMi", "jkwyCo729VW", "H5jCao8vU0R", "fRk4DR1jbt-", "_SNJXild1-", "4ewcjfMJQVP", ...
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_...
[ " **Experimental setup.** Following other state-of-the-art approaches, we searched for hyperparameters, such as the loss margin $\\epsilon_0$ and kernel size, and reported the best models for all. **1) The loss margin $\\epsilon_0$** with ResNet9 is $\\epsilon_0 = 0.2$ for our model and $\\epsilon_0 = 0.1$ for Cay...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "K3RRs-bs31f", "TylAGARw4yd", "Yz02xApXdZt", "wFzjSGlzKT", "Mb5k3rEcIKH", "7QZNCOzdHy", "iclr_2022_t2LJBsPxQM", "fRk4DR1jbt-", "e3CsD-KgCZ", "iclr_2022_t2LJBsPxQM", "SDVMoFvNMi", "H5jCao8vU0R", "Mb5k3rEcIKH", "jkwyCo729VW", "4ewcjfMJQVP", "_-w04mPP5iX", "_SNJXild1-", "VyXBv5jiGp", ...
iclr_2022_gdegUuC_fxR
Hessian-Free High-Resolution Nesterov Acceleration for Sampling
It is known (Shi et al., 2021) that Nesterov's Accelerated Gradient (NAG) for optimization starts to differ from its continuous time limit (noiseless kinetic Langevin) when its stepsize becomes finite. This work explores the sampling counterpart of this phenomenon and proposes an accelerated-gradient-based MCMC method, based on the optimizer of NAG for strongly convex functions (NAG-SC): we reformulate NAG-SC as a Hessian-Free High-Resolution ODE, change its high-resolution coefficient to a hyperparameter, inject appropriate noise, and discretize the resulting diffusion process. Accelerated sampling enabled by the new hyperparameter is quantified, and it is not a false acceleration created by time-rescaling. At the continuous-time level, additional acceleration over underdamped Langevin in $W_2$ distance is proved. At the discrete algorithm level, a dedicated discretization is proposed to simulate the Hessian-Free High-Resolution SDE in a cost-efficient manner. For log-strong-concave-and-smooth target measures, the proposed algorithm achieves $\tilde{\mathcal{O}}(\sqrt{d}/\epsilon)$ iteration complexity in $W_2$ distance, same as underdamped Langevin dynamics, but with a reduced constant. Empirical experiments are conducted to numerically verify our theoretical results.
Reject
The paper considers the high resolution continuous limit of Nesterov's Accelerated Gradient (NAG) algorithm and its connections to sampling (MCMC methods). The paper develops a Hessian-Free High Resolution (HFHR) ODE and injects noise into it to obtain an accelerated sampling algorithm. Further, the paper provides a discrete-time variant of the algorithm by appropriately discretizing HFHR using simple discretization schemes. For strongly log-concave potential functions (log-densities), the paper proves convergence of the order $\tilde{O}(\sqrt{d}/\epsilon)$ in Wasserstein-2 distance. In the asymptotic sense, the result matches the convergence of the underdamped Langevin algorithm; however, the paper argues that the constants in the proposed algorithm are smaller and empirically shows that the proposed algorithm is faster in practice. The main contributions of the paper are theoretical; however, the theoretical results are supplemented by numerical experiments. Overall, the reviewers found the contributions interesting and the theoretical contributions of the paper technically sound. The main concerns that were not completely addressed were related to the presentation of the results and reproducibility of some of the numerical experiments. While both seem minor and possible to address, ultimately there was not enough support to recommend acceptance. However, the paper is solid and merits acceptance after suitable revisions. Thus, the authors are encouraged to revise the paper and resubmit it to one of the conferences in the equivalence class of ICLR.
train
[ "KOCHHhLXZZi", "zWUj_NnFwcg", "lFY9SgYDf5", "Nk0kP0dv6QG", "zJlRM6FshSA", "CSajrw3UitW", "OF1drgYaSoe", "POHgKOH3DVl", "jUhkZz0RdKo", "K4mug5nvLyT", "odli16u59Pt", "SroV_Z0-RvX", "78D0SdMtQCp", "hoC9jO_GEPa", "y1_0GRqyIs8", "0vYUs3w1-a", "RCM_Kqwfs88", "MQsoCSqVyP1", "HMtVcS7kixj...
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "a...
[ " I have not been able to reproduce Figure 1a. I would be very grateful if someone can confirm my result or correct my mistake.\n\n---\n\n### My attempt\n\n1. Download code from Supplementary Material.\n2. Go to folder `code/verifyDependenceOnStepSize`.\n3. Run `mkdir results`.\n4. Run `for j in {1..100}; do printf...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 6, 3, 6, 1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4, 3, 1 ]
[ "iclr_2022_gdegUuC_fxR", "zJlRM6FshSA", "CSajrw3UitW", "CSajrw3UitW", "K4mug5nvLyT", "jUhkZz0RdKo", "Pjwy4-erZSK", "0vYUs3w1-a", "0vYUs3w1-a", "RCM_Kqwfs88", "MQsoCSqVyP1", "MQsoCSqVyP1", "0vYUs3w1-a", "0vYUs3w1-a", "0vYUs3w1-a", "SroV_Z0-RvX", "odli16u59Pt", "HMtVcS7kixj", "4MXi...
iclr_2022_JedTK_aOaRa
Private Multi-Winner Voting For Machine Learning
Private multi-winner voting is the task of revealing k-hot binary vectors that satisfy a bounded differential privacy guarantee. This task has been understudied in the machine learning literature despite its prevalence in many domains such as healthcare. We propose three new privacy-preserving multi-label mechanisms: Binary, $\tau$, and Powerset voting. Binary voting operates independently per label through composition. $\tau$ voting bounds votes optimally in their $\ell_2$ norm. Powerset voting operates over the entire binary vector by viewing the possible outcomes as a power set. We theoretically analyze tradeoffs showing that Powerset voting requires strong correlations between labels to outperform Binary voting. We use these mechanisms to enable privacy-preserving multi-label learning by extending the canonical single-label technique: PATE. We empirically compare our techniques with DPSGD on large real-world healthcare data and standard multi-label benchmarks. We find that our techniques outperform all others in the centralized setting. We enable multi-label CaPC and show that our mechanisms can be used to collaboratively improve models in a multi-site (distributed) setting.
Reject
This is an interesting paper discussing differential privacy for multi-label classification. The initial reviews rated the paper with rather extreme scores; therefore, I invited an additional reviewer. This review did not clarify the issues raised by the most critical reviewer, but it pointed out that the goal of showing how DP can be enforced in MLC is not fully achieved, as there is no discussion of the MLC performance. This is also a problem raised in my comments. Taking this into account, I must state that the paper is not ready for publication.
train
[ "tQHpf2q8sEm", "h0d8Xs8uiew", "cJdYz_ltou", "43-E1AwWuH7", "vOZzAW3Ris", "srV2CAbgb9", "MgPNQdVmQTC", "ImSkkeoIXaI", "eKs_XPorAAF", "td5LFA5_HTJ", "fhlCSJOkPyU", "bJMkiyFOom", "zCQS0aFJ-52", "61-S5BxenZJ", "RUPxH_5HNgb", "RlEhpRD4bg", "x5BLgzpfzj", "n1liAZiQXIk", "YrE1JJX1WMO", ...
[ "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author...
[ " Thank you for the review. We appreciate the feedback.\n\n>**First of all, the canonical MLC metrics are not calculated in the experiments. (In fact, they are not even discussed in the paper.) This is important though because this is one of the crucial, distinctive aspects of MLC. Therefore, it is not clear, how t...
[ -1, -1, 5, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 5, 5 ]
[ -1, -1, 3, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 2 ]
[ "cJdYz_ltou", "cJdYz_ltou", "iclr_2022_JedTK_aOaRa", "srV2CAbgb9", "MgPNQdVmQTC", "iclr_2022_JedTK_aOaRa", "iclr_2022_JedTK_aOaRa", "srV2CAbgb9", "iclr_2022_JedTK_aOaRa", "JX0OVX-gZwQ", "ImSkkeoIXaI", "srV2CAbgb9", "61-S5BxenZJ", "bJMkiyFOom", "88glTCoYVc8", "srV2CAbgb9", "vBuOXuHRb2...
iclr_2022_xD3RiCCfsY
On Learning to Solve Cardinality Constrained Combinatorial Optimization in One-Shot: A Re-parameterization Approach via Gumbel-Sinkhorn-TopK
Cardinality constrained combinatorial optimization requires selecting an optimal subset of $k$ elements, and it is appealing to design data-driven algorithms that perform TopK selection over a probability distribution predicted by a neural network. However, the existing differentiable TopK operator suffers from an unbounded gap between the soft prediction and the discrete solution, leading to inaccurate estimation of the combinatorial objective score. In this paper, we present a self-supervised learning pipeline for cardinality constrained combinatorial optimization, which incorporates Gumbel-Sinkhorn-TopK (GS-TopK) for near-discrete TopK predictions and the re-parameterization trick to resolve the non-differentiability challenge. Theoretically, we characterize a bounded gap between Maximum-A-Posteriori (MAP) inference and our proposed method, resolving the divergence issue in the previous differentiable TopK operator and also providing a more accurate estimation of the objective score given a provably tightened bound on the discrete decision variables. Experiments on max covering and discrete clustering problems show that our method outperforms the state-of-the-art Gurobi solver and the novel one-shot learning method Erdos Goes Neural.
Reject
The authors propose a random perturbation on top of a soft top-k operator that builds upon entropic regularized optimal transport (when applied to a 1D problem). The motivation of the paper is built around an approximation bound (proposed in the Xie et al '20 paper) that compares the true OT matrix with the regularized OT matrix in the case where some of the 1D entries from which one wishes to extract top-k values are very close (e.g. x_{t} ~ x_{t+1}). The authors argue that this bound, with inverse dependencies on the closest elements in the list, diverges. The authors state that this possible divergence is an issue, because values to be sorted/top-ked can be very close in practice. To solve this issue, the authors introduce instead a Gumbel noise mechanism that no longer makes the bound diverge, through a fairly long theoretical analysis. The approach now requires the recomputation, for several noisy inputs, of the same regularized OT estimator. The authors propose then to use these soft top-k approaches to solve a combinatorial problem using gradient descent, namely a capacity constrained problem and clustering, including some tricks on controlling both the entropy regularization and the Gumbel noise magnitude. The paper has generated a long discussion among the AC and reviewers. While the paper has a few strong points that were appreciated (the interest of the empirical validation, which seems to suggest some improvements over commercial solvers on the considered setups), there remain a few issues. The theoretical side of the paper is a bit blurry. The idea of introducing Gumbel noise on top of an already soft operator is not completely clear, since these perturbations are there to add differentiability to something (reg-OT) that was itself introduced to be differentiable. The theoretical motivation is unclear: the noise is introduced because the _upper bound_ diverges (and not the gap between the "true" OT and entropic OT, since it is always bounded). 
The perturbation mechanism is only motivated to improve the limitations of an upper bound, not of the original algorithm itself. What's more, it's not entirely clear why that gap should be decreased (between true and regularized OT) since it has to exist to obtain some differentiability. While the study of the gap itself was added during the discussion phase in Fig. 1"A toy example to explain Lemma 2", one would expect better foundations for this idea. With a somewhat unclear theoretical motivation, the experiments should be very convincing. Reviewers have noted some issues related to comparing CPU/GPU times. While I am sympathetic to the problems encountered by the authors when running such comparisons, these issues should be properly reflected in their initial claims, and not appear in the rebuttal only. I also think experiments are still lacking in diversity. For instance, the k-means problem is studied in 2D (begging the natural question of whether such an improvement would remain in higher dimensions). I could not find a clear statement on the number of repeats carried out to obtain error bars. Since I don't envision either of the max-covering problem nor k-means to become the "killer app" of this paper, I would encourage the authors to consider problems that are less synthetic.
train
[ "99N1Ngrac-S", "k_GNA88kiLV", "7VNJd5lrRsR", "CbW_ScPSCM", "zWpGNvKQ9B_", "o5JoDPZozed", "fTQeH9dVlC3", "tSgsv5DHTLG", "woxR-8rTEc", "wmyVUfvIY7w", "gPjOc_779IG", "43KnM78gymg", "HjAHPVvPgNZ", "exiYdtzJvoR", "d1NAsyiXTTo", "4A-dlLlB9st", "ZgpUGyYAK09", "iLO51SyPrxv", "KHdZ-XSeuMB...
[ "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", ...
[ " **Update:** We fix some typos below and add discussions for your easy reference.\n\nMany thanks for your timely update, and we agree that current machine learning methods, including our GS-TopK, rely on GPU for their high efficiency. Here we provide the timing of machine learning methods on CPU on the first test ...
[ -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "7VNJd5lrRsR", "iclr_2022_xD3RiCCfsY", "CbW_ScPSCM", "zWpGNvKQ9B_", "o5JoDPZozed", "iclr_2022_xD3RiCCfsY", "tSgsv5DHTLG", "KHdZ-XSeuMB", "ZgpUGyYAK09", "o5JoDPZozed", "o5JoDPZozed", "o5JoDPZozed", "748XNpwcLmk", "748XNpwcLmk", "utnGL02nCRH", "SC3k60uzxqI", "SC3k60uzxqI", "iclr_2022...
iclr_2022_JvPopr9skL0
Efficient Out-of-Distribution Detection via CVAE data Generation
Recently, contrastive loss with data augmentation and pseudo class creation has been shown to produce markedly better results for out-of-distribution (OOD) detection than previous methods. However, a major shortcoming of this approach is that it is extremely slow due to a significant increase in the data size and the number of classes and the quadratic complexity of pairwise similarity computation. This paper proposes a novel and simple method that builds an effective data generator using a Conditional Variational Auto-Encoder (CVAE) to generate pseudo OOD samples. Based on the generated pseudo OOD data, a flexible and efficient OOD detection method is proposed through fine-tuning, which achieves results comparable to the state-of-the-art OOD detection techniques, but the execution speed is at least 10 times faster. Also importantly, the proposed approach is in fact a general framework that can be applied to many existing OOD methods and improve them via the proposed fine-tuning. We have combined it with the best baseline OOD models in our experiments to produce new state-of-the-art results.
Reject
The paper adopts a CVAE to generate OOD samples for training an outlier detector. It consists of two phases that train an OOD detector by leveraging the generated OOD data and shows that it outperforms other methods. From the reviewers' discussion, there is a concern: why does CVAE work when other variants or cGAN do not? The paper needs more motivation, evidence, or ablations to support the generality of the work.
test
[ "9lNcluZqBp", "ld9PqRSdCDs", "bXaLL-a8jPI", "o3GgGCR6OcM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors proposed to use a Conditional VAE in order to generate pseudo OOD data, which can be used to improve OOD detection.\nThe authors demonstrated the method in several benchmarks, with good results. The paper is well written, covers the related works, and importantly, is well motivated.\nThe usage of 'kno...
[ 6, 5, 6, 5 ]
[ 4, 4, 4, 3 ]
[ "iclr_2022_JvPopr9skL0", "iclr_2022_JvPopr9skL0", "iclr_2022_JvPopr9skL0", "iclr_2022_JvPopr9skL0" ]
iclr_2022_V70cjLuGACn
Closed-loop Control for Online Continual Learning
Online class-incremental continual learning (CL) deals with the sequential task learning problem in a realistic non-stationary setting with a single pass through the data. Replay-based CL methods have shown promising results in several online class-incremental continual learning benchmarks. However, these replay methods typically assume pre-defined and fixed replay dynamics, which is suboptimal. This paper introduces a closed-loop continual learning framework, which obtains a real-time feedback learning signal via an additional test memory and then adapts the replay dynamics accordingly. More specifically, we propose a reinforcement learning-based method to dynamically adjust replay hyperparameters online to balance the stability and plasticity trade-off in continual learning. To address the non-stationarity in the continual learning environment, we employ a Q function with task-specific and task-shared components to support fast adaptation. The proposed method is applied to improve state-of-the-art replay-based methods and achieves superior performance on popular benchmarks.
Reject
This manuscript studies the problem of continual learning and introduces a reinforcement learning agent to select hyperparameters for replay/training. Ordinarily, replay-based mechanisms for continual learning use settings and hyperparameters that are chosen and fixed through training. If it were possible to adjust replay dynamics online (in this case by looking at performance on a held-aside test set), performance might be improved. This is the approach taken by this manuscript. Reviewers were generally happy with the writing of the paper and presentation of the material. At the same time, more than one reviewer worried about the novelty of the approach. In essence, the proposal amounts to using a black-box optimizer (in this case RL) to adjust online the hyperparameters (e.g. the replay ratio) for continual learning (off-the-shelf ER and SCR). Viewed through this lens, and given that the optimizer in this case was a straightforward application of DQN, this concern is potentially well founded. The primary novelty then is the construction of the reward function to be optimized: in this case defined as the decrease of the CL loss measured on a held-aside test set that is constructed online. Nevertheless, novelty is only part of the equation and strong empirical results can easily be a deciding factor in readiness for publication. On this front, reviewer GhFg points out that the empirical results and comparisons with baseline methods are not as clear as they need to be. Several issues were raised in discussion: the primary one is around the question of how the authors have allowed task-specific information into the Q functions used by RL, and what the implications of this might be. The baselines compared against do not use any task-specific information, which muddies the waters when trying to understand the comparisons.
I agree with the reviewer that the manuscript needs to do a better job of making the empirical setting and comparisons as transparent and fair as possible. Given this, and the fact (raised by several reviewers) that some empirical evidence presented in the manuscript actually points to RL selecting near-static parameters over time, I recommend that the manuscript be rejected. At the same time, I want to encourage the authors to focus on a streamlined version of the manuscript that addresses the issues raised by GhFg, as I believe that if the concerns can be addressed the work is close to making a compelling contribution for the field.
train
[ "abG8Ym0LXaq", "TOrn_TqXyU", "Yoot9MTk0K", "mqNGmPo-R-I", "iLevFo9cTMa", "dOVoxUla4tE", "PmyHf-guIYY", "exaMaH_-BYA", "df8Hx3uk-O", "GU3lu7sFHg" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for their response.\nUnfortunately my concerns are not addressed in the author response.\nI am looking for something a lot more quantitative and significant on the cost-performance tradeoff, since I think that's a fairly important issue for this paper.\nTherefore, my score will remain the same...
[ -1, -1, -1, -1, -1, -1, 5, 5, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "dOVoxUla4tE", "mqNGmPo-R-I", "GU3lu7sFHg", "df8Hx3uk-O", "PmyHf-guIYY", "exaMaH_-BYA", "iclr_2022_V70cjLuGACn", "iclr_2022_V70cjLuGACn", "iclr_2022_V70cjLuGACn", "iclr_2022_V70cjLuGACn" ]
iclr_2022_yrD7B9N_54F
Few-shot graph link prediction with domain adaptation
Real-world link prediction problems often deal with data coming from multiple imbalanced domains. Similar problems in computer vision are often referred to as Few-Shot Learning (FSL) problems. However, for graph link prediction, this problem has rarely been addressed and explored. In this work, we propose an adversarial-training-based modification to the current state-of-the-art link prediction method to solve this problem. We introduce a domain discriminator on pairs of graph-level embeddings. We then use the discriminator to improve the model in an adversarial way, such that the graph embeddings generated by the model are domain agnostic. We test our proposal on 3 benchmark datasets. Our results demonstrate that, when domain differences exist, our method creates better graph embeddings that are more evenly distributed across domains and generates better prediction outcomes.
Reject
This paper has been reviewed by four expert reviewers who gave diverging scores. The three negative reviewers have provided significant constructive feedback. The main criticism is the lack of novelty and clarity in the paper. The authors have submitted their rebuttal which did not improve the scores of these reviewers. After the discussion phase, the paper did not obtain any support for acceptance and stayed under the acceptance threshold. Following the reviewers' recommendation, the meta reviewer recommends rejection.
train
[ "k87RkseD5f", "8SCJE6IbKrl", "BteganisrAn", "w74OM3sBnIm", "JFcCKxOljA2", "n9h7o4VB1L5", "4qwP6Dl5fi2", "QAvD6Gx1Aw", "hQbXsGpOwrk", "nMhi_-KgOT" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I appreciate that the authors took time to answer the questions and made corresponding revisions to the paper. I still believe that the claim of novelty of using enclosing subgraph is minor and there is no additional experiments on more realistic datasets. I have decided to keep my initial rating.", " I appreci...
[ -1, -1, -1, -1, -1, -1, 5, 8, 5, 3 ]
[ -1, -1, -1, -1, -1, -1, 3, 5, 4, 5 ]
[ "w74OM3sBnIm", "BteganisrAn", "nMhi_-KgOT", "hQbXsGpOwrk", "QAvD6Gx1Aw", "4qwP6Dl5fi2", "iclr_2022_yrD7B9N_54F", "iclr_2022_yrD7B9N_54F", "iclr_2022_yrD7B9N_54F", "iclr_2022_yrD7B9N_54F" ]
iclr_2022_lu_DAxnWsh
Guiding Transformers to Process in Steps
Neural networks have matched or surpassed human abilities in many tasks that humans solve quickly and unconsciously, i.e., via Kahneman's “System 1”, but have not been as successful when applied to “System 2” tasks that involve conscious multi-step reasoning. In this work, we argue that the kind of training that works for System 1 tasks is not sufficient for System 2 tasks, propose an alternative, and empirically demonstrate its effectiveness. Specifically, while learning a direct mapping from inputs to outputs is feasible for System 1 tasks, we argue that algorithmic System 2 tasks can only be solved by learning a mapping from inputs to outputs through a series of intermediate steps. We first show that by using enough intermediate steps a 1-layer 1-head Transformer can in principle compute any finite function, proving the generality of the approach. We then show empirically that a 1-layer 1-head Transformer cannot learn to compute the sum of binary numbers directly from the inputs, but is able to compute the sum when trained to first generate a series of intermediate results. This demonstrates, at a small scale, how a fixed-size neural network can lack the expressivity to encode the direct input-output mapping for an algorithmic task and yet be fully capable of computing the outputs through intermediate steps. Finally, we show that a Frozen Pretrained Transformer is able to learn binary addition when trained to compute the carry bits before the sum, while it fails to learn the task without using intermediates. These results indicate that explicitly guiding the neural networks through the intermediate computations can be an effective approach for tackling algorithmic tasks.
Reject
This paper offers new ideas about the key question of how to extend modern Transformer architectures to solve problems that require more reasoning steps than the model can implement in a single-step forward pass. Reviewers were unanimous that the problem is important, and that the paper is a step in a promising direction. However, reviewers were also unanimous that the proposed experiments are too narrow to be the basis for any confident new claims in this area and that, in addition, the experimental design has a confound that makes it difficult to interpret, even after the addition of a new condition during discussion.
train
[ "Xrwblt-z6Nx", "-tduyN9XM3Y", "bsm47wVGJnN", "RPabicom01j", "8nOehP-tTut", "tHqO8jqsdA_", "lXfEVl9ALr", "D5_EDrMEnL", "i6Or9ZG5zQ", "l_KcOLaaoW", "5oIPmaAKmLH" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper’s main claim is that sequential reasoning is necessary for System 2 tasks, and it explores addition as an example of this, showing that a 1-layer 1-head transformer fails to compute the sum of numbers directly (although theoretically shown to be able to compute any finite function) but does work when bei...
[ 3, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 5, 4, 3 ]
[ "iclr_2022_lu_DAxnWsh", "bsm47wVGJnN", "iclr_2022_lu_DAxnWsh", "D5_EDrMEnL", "tHqO8jqsdA_", "D5_EDrMEnL", "D5_EDrMEnL", "iclr_2022_lu_DAxnWsh", "iclr_2022_lu_DAxnWsh", "iclr_2022_lu_DAxnWsh", "iclr_2022_lu_DAxnWsh" ]
iclr_2022__K6rwRjW9WO
RieszNet and ForestRiesz: Automatic Debiased Machine Learning with Neural Nets and Random Forests
Many causal and policy effects of interest are defined by linear functionals of high-dimensional or non-parametric regression functions. $\sqrt{n}$-consistent and asymptotically normal estimation of the object of interest requires debiasing to reduce the effects of regularization and/or model selection on the object of interest. Debiasing is typically achieved by adding a correction term to the plug-in estimator of the functional, that is derived based on a functional-specific theoretical derivation of what is known as the influence function and which leads to properties such as double robustness and Neyman orthogonality. We instead implement an automatic debiasing procedure based on automatically learning the Riesz representation of the linear functional using Neural Nets and Random Forests. Our method solely requires value query oracle access to the linear functional. We propose a multi-tasking Neural Net debiasing method with stochastic gradient descent minimization of a combined Riesz representer and regression loss, while sharing representation layers for the two functions. We also propose a random forest method which learns a locally linear representation of the Riesz function. Even though our methodology applies to arbitrary functionals, we experimentally find that it beats state of the art performance of the prior neural net based estimator of Shi et al. (2019) for the case of the average treatment effect functional. We also evaluate our method on the more challenging problem of estimating average marginal effects with continuous treatments, using semi-synthetic data of gasoline price changes on gasoline demand.
Reject
In this paper, the problem of estimating the average of a moment function that depends on an unknown regression function is considered. It heavily relies on prior papers by e.g. Chernozhukov et al., and the actual novel material consists of making these theoretical results more practical. Experiments for two practical approaches, based on neural networks and random forests respectively, are also reported. Initially, the presentation of the paper was heavily criticized by the reviewers, but during the rebuttal phase at least some of the issues were removed. Together with some other improvements this led to an increased average score. However, it seems fair to say that reading the other papers first is still kind of necessary. Despite the still unclear novelty, the paper has some merits, which in principle make it acceptable. Compared to the other good papers in my batch, however, it is more incremental and the overall contribution is not as strong. For this reason I vote for rejection, but a comparison to other papers outside my batch is probably a good idea.
train
[ "yHjm4Ci5mRs", "hXZpEdFDGDb", "gkqT58CLHcU", "dQXSjyH0qNQ", "dC8weCz-Tn0", "9K3NZxlN-Tm", "XSFyNgl8p4J", "7NFUPSYZNi", "4JucnrFtUsi", "DTuvwaLNze4" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thanks for answering most of my points. I read the updated version of the paper. I still have some points and questions, and although the author response period is over, maybe you could try to address some or all of the questions in the final version of the paper.\n\n**Some points regarding the updated paper:**\n...
[ -1, 8, 6, -1, -1, -1, -1, -1, 5, 6 ]
[ -1, 2, 3, -1, -1, -1, -1, -1, 4, 4 ]
[ "dC8weCz-Tn0", "iclr_2022__K6rwRjW9WO", "iclr_2022__K6rwRjW9WO", "dC8weCz-Tn0", "hXZpEdFDGDb", "4JucnrFtUsi", "DTuvwaLNze4", "gkqT58CLHcU", "iclr_2022__K6rwRjW9WO", "iclr_2022__K6rwRjW9WO" ]
iclr_2022_HRF6T1SsyDn
On the Expressiveness and Learning of Relational Neural Networks on Hypergraphs
This paper presents a framework for analyzing the expressiveness and learning of relational models applied to hypergraph reasoning tasks. We start with a general framework that unifies several relational neural network architectures: graph neural networks, neural logical machines, and transformers. Our first contribution is a fine-grained analysis of the expressiveness of these neural networks, that is, the set of functions that they can realize and the set of problems that they can solve. Our result is a hierarchy of problems they can solve, defined in terms of various hyperparameters such as depth and width. Next, we analyze the learning properties of these neural networks, especially focusing on how they can be trained on small graphs and generalize to larger graphs. Our theoretical results are further supported by empirical results illustrating the optimization and generalization of these models based on gradient-descent training.
Reject
The paper aims to improve our understanding of GNNs for relational reasoning. In this regard, authors develop a conceptual framework unifying popular models (GNNs, Transformers, etc.) for analyzing their expressiveness and learning capacity. We thank the reviewers and authors for engaging in an active discussion. Based on author comments, the goal of the paper was more of a conceptual exposition, however this did not come across to the reviewers from the manuscript at first. Thus, a better presentation would definitely make the paper much more accessible and useful to the community. Moreover, there were some concerns about the significance of the exposition and better positioning would help (e.g., how the results help improve our understanding of GNNs). Thus, unfortunately I cannot recommend an acceptance of the paper in its current form.
train
[ "FEk_PCYotc_", "9JkUFGVVe8", "sTAjeeiuurw", "u1vwLllBO8u", "7fkOPQcCdcM", "CHRrrnPoWVm", "nPGx0LLWQJE", "dDeneSQR6in", "fg-t5gVeJKA", "pwPuK6CoqPW", "Xjp1OlzCTDj", "E4qzVhuHJ5", "qF7V9BD-dvx" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Authors,\nI have read your response, and am not convinced by your arguments or about the significance of the contributions. I understand that the technicalities in producing the proofs could be non-trivial, but I must stress that the results themselves remain incremental and add little to our understanding o...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 5, 5, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 2, 4, 5 ]
[ "9JkUFGVVe8", "qF7V9BD-dvx", "E4qzVhuHJ5", "Xjp1OlzCTDj", "fg-t5gVeJKA", "pwPuK6CoqPW", "dDeneSQR6in", "iclr_2022_HRF6T1SsyDn", "iclr_2022_HRF6T1SsyDn", "iclr_2022_HRF6T1SsyDn", "iclr_2022_HRF6T1SsyDn", "iclr_2022_HRF6T1SsyDn", "iclr_2022_HRF6T1SsyDn" ]
iclr_2022_H4EXaI6HR2
Representing value functions in power systems using parametric network series
We describe a novel architecture for modeling the cost-to-go function in approximate dynamic programming problems involving country-scale, real-life electrical power generation systems. Our particular scenario features a heterogeneous power grid including dozens of renewable energy plants as well as traditional ones; the corresponding state space is in the order of thousands of variables of different types and ranges. While Artificial Neural Networks are a natural choice for modeling such complex cost functions, their effective use hinges on exploiting the particular structure of the problem which, in this case, involves seasonal patterns at many different levels (day, week, year). Our proposed model consists of a series of neural networks whose parameters are themselves parametric functions of a time variable. The parameters of such functions are learned during training along with the network parameters themselves. The new method is shown to outperform the standard backward dynamic programming algorithm currently in use, both in terms of the objective function (total cost of operation over a period) and computational cost. Last, but not least, the resulting model is readily interpretable in terms of the parameters of the learned functions, which capture general trends of the problem, providing useful insight for future improvements.
Reject
The reviews are of good quality. The responses by the authors are commendable, but ICLR is selective and reviewers continue to feel that important choices in the research are not sufficiently clear and fully justified.
train
[ "_sx3ydFOJPG", "4jedTzZzWo", "LumSsbbwGDJ", "xNQNkyTrg6v", "K8v72XWl8Xy", "dPBn5eCzX9H", "cGxC2o-Qh9p", "ULpNmAgF5ez", "s8ns9IpXHN-", "ywmbUXzPh-g", "ypVg6jV3tOA", "hEN4DNe_kRJ" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " > Overall algorithm We have now added a diagram in Appendix A which we hope will serve to describe the algorithm as a whole.\n\nThank you for adding this. I do think this could be made more concrete, however (e.g., references to specific sections or equations, and avoidance of jargon like \"chronicles\" but inste...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 1, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "4jedTzZzWo", "s8ns9IpXHN-", "xNQNkyTrg6v", "K8v72XWl8Xy", "dPBn5eCzX9H", "cGxC2o-Qh9p", "hEN4DNe_kRJ", "ypVg6jV3tOA", "ywmbUXzPh-g", "iclr_2022_H4EXaI6HR2", "iclr_2022_H4EXaI6HR2", "iclr_2022_H4EXaI6HR2" ]
iclr_2022_8uqOMUHgW4M
Learning shared neural manifolds from multi-subject FMRI data
Functional magnetic resonance imaging (fMRI) is a notoriously noisy measurement of brain activity because of the large variations between individuals, signals marred by environmental differences during collection, and spatiotemporal averaging required by the measurement resolution. In addition, the data is extremely high dimensional, with the space of the activity typically having much lower intrinsic dimension. In order to understand the connection between stimuli of interest and brain activity, and analyze differences and commonalities between subjects, it becomes important to learn a meaningful embedding of the data that denoises, and reveals its intrinsic structure. Specifically, we assume that while noise varies significantly between individuals, true responses to stimuli will share common, low-dimensional features between subjects which are jointly discoverable. Similar approaches have been exploited previously but they have mainly used linear methods such as PCA and shared response modeling (SRM). In contrast, we propose a neural network called MRMD-AE (manifold-regularized multiple-decoder, autoencoder), that learns a common embedding from multiple subjects in an experiment while retaining the ability to decode to individual raw fMRI signals. We show that our learned common space represents an extensible manifold (where new points not seen during training can be mapped), improves the classification accuracy of stimulus features of unseen timepoints, as well as improves cross-subject translation of fMRI signals. We believe this framework can be used for many downstream applications such as guided BCI training in the future.
Reject
In this paper, the authors present a method for learning a shared latent space between the fMRI activity of multiple individuals processing the same stimulus. The method consists of an auto-encoder with a single encoder and subject-specific decoders which is specifically regularized to decouple common and shared representations. This paper generated a lot of discussion between the reviewers and the authors, as well as between the reviewers. In light of these discussions, I cannot recommend acceptance at this point, as the paper is not ready. The main concerns were (1) about how the results and improvements are evaluated statistically, (2) that the baselines chosen were not strong enough and did not include existing approaches (neural or non-neural), and relatedly (3) that the paper was not framed correctly within the existing literature on finding shared spaces between participants, which would help with determining and understanding the novelty of the proposed approach. Some other smaller points made by the reviewers could also strengthen the paper for a future submission to a neuroscience or machine learning venue.
val
[ "QxQdHlzdPHl", "6Huw6vc6jla", "4eFzPGzzGx", "lUeJxy43zLB", "gZYhHomyZpI", "Sf-ecvcepqK", "q5W2WS25DBH", "1xSWugKf5VY", "9AWj7Suq5Sx", "yEftyjN-Uc5", "WIgT29E_EQ4", "2wkKkeVjtUo", "3oOZx9XlVie", "jyMD8x0534", "4MhqlUBmG0j", "-lxCUnvX_HV", "lYWmB3z58wk", "1zYH3BLhKfZ", "2xYf144c4zD...
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", ...
[ "The submission proposes a new model for functional alignment of fMRI datasets from multiple subjects. It combines a one-in-many-out autoencoder with two regularization loss terms (one inspired by GRAE) to develop a model that can encode every subject's data to a shared latent space, from which each subject's data ...
[ 6, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, 5 ]
[ 4, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 3 ]
[ "iclr_2022_8uqOMUHgW4M", "gZYhHomyZpI", "9AWj7Suq5Sx", "Sf-ecvcepqK", "Sf-ecvcepqK", "iclr_2022_8uqOMUHgW4M", "1xSWugKf5VY", "Sf-ecvcepqK", "WIgT29E_EQ4", "QxQdHlzdPHl", "dErqMVS_h0f", "QxQdHlzdPHl", "Sf-ecvcepqK", "Sf-ecvcepqK", "Sf-ecvcepqK", "2xYf144c4zD", "1zYH3BLhKfZ", "iclr_2...
iclr_2022_cuGIoqAJf6p
Newer is not always better: Rethinking transferability metrics, their peculiarities, stability and performance
Fine-tuning of large pre-trained image and language models on small customized datasets has become increasingly popular for improved prediction and efficient use of limited resources. Fine-tuning requires identification of the best models to transfer-learn from, and quantifying transferability prevents expensive re-training on all of the candidate model/task pairs. In this paper, we show that the statistical problems with covariance estimation drive the poor performance of H-score (Bao et al., 2019) — a common baseline for newer metrics — and propose a shrinkage-based estimator. This results in up to 80% absolute gain in H-score correlation performance, making it competitive with the state-of-the-art LogME measure by You et al. (2021). Our shrinkage-based H-score is 3-55 times faster to compute compared to LogME. Additionally, we look into a less common setting of target (as opposed to source) task selection. We demonstrate previously overlooked problems in such settings with different numbers of labels, class-imbalance ratios, etc. for some recent metrics, e.g., NCE (Tran et al., 2019) and LEEP (Nguyen et al., 2020), that resulted in them being misrepresented as leading measures. We propose a correction and recommend measuring correlation performance against relative accuracy in such settings. We also outline the difficulties of comparing feature-dependent metrics, both supervised (e.g. H-score) and unsupervised measures (e.g., Maximum Mean Discrepancy (Long et al., 2015) and Central Moment Discrepancy (Zellinger et al., 2019)), across source models/layers with widely varying feature embedding dimension. We show that dimensionality reduction methods allow for meaningful comparison across models, cheaper computation (6x), and improved correlation performance of some of these measures. We investigate the performance of 14 different supervised and unsupervised metrics and demonstrate that even unsupervised metrics can identify the leading models for domain adaptation. 
We support our findings with ~65,000 experiments (fine-tuning trials).
Reject
This paper considers transferability measures both in the supervised and unsupervised domain. It identifies instabilities in the way that H-score is computed and proposes to correct the issue with a shrinkage-based covariance estimation. The proposed fix results in an 80% absolute gain over the original H-score and makes it competitive with the state-of-the-art LogME metric. The new shrinkage-based H-score is much faster to compute. Reviewers agree that the paper makes interesting and important contributions. In particular, the reviewers appreciate that the paper takes a deeper look at existing metrics and proposes valuable fixes instead of proposing yet another new metric. The paper demonstrates depth of statistical knowledge and proposes shrinkage operators to estimate high-dimensional covariance. There are a few shortcomings of the paper, however, that suggest the paper could benefit from another round of improvement. In particular, the paper is very dense with little motivation. Some of the choices in the paper could be motivated better. For instance, the hypothesis of a lack of robustness in estimating the H-score is not demonstrated empirically. The reviewers also felt that the paper should extend experiments to other domains beyond images.
train
[ "uzqNraAXHlU", "Li71XPrL8j3", "PJciEimlrpA" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper focuses on transferability measures both in a supervised and unsupervised context. In particular, the authors propose a shrinkage-based estimation of H-score in order to correct its instability and discuss the limitations of the other approaches on two different scenarios: source model selection or targ...
[ 5, 5, 8 ]
[ 3, 3, 4 ]
[ "iclr_2022_cuGIoqAJf6p", "iclr_2022_cuGIoqAJf6p", "iclr_2022_cuGIoqAJf6p" ]
iclr_2022_QKEkEFpKBBv
DNBP: Differentiable Nonparametric Belief Propagation
We present a differentiable approach to learn the probabilistic factors used for inference by a nonparametric belief propagation algorithm. Existing nonparametric belief propagation methods rely on domain-specific features encoded in the probabilistic factors of a graphical model. In this work, we replace each crafted factor with a differentiable neural network enabling the factors to be learned using an efficient optimization routine from labeled data. By combining differentiable neural networks with an efficient belief propagation algorithm, our method learns to maintain a set of marginal posterior samples using end-to-end training. We evaluate our differentiable nonparametric belief propagation (DNBP) method on a set of articulated pose tracking tasks and compare performance with learned baselines. Results from these experiments demonstrate the effectiveness of using learned factors for tracking and suggest the practical advantage over hand-crafted approaches. The project webpage is available at: https://sites.google.com/view/diff-nbp
Reject
All reviewers concur that the paper has promise, but fails to deliver on that promise. The idea of learning potentials based on DNNs is appreciated, but the evaluation of the contribution is considered lacking by all reviewers. In addition, reviewers note that the training is not differentiable, which the rebuttal acknowledges is future work. I do not reject the paper simply for failing to beat a deep learning baseline, but for having chosen applications which do not even test the paper's hypotheses: reviewers note that the models are tree structured, so loopy BP is not tested, despite the revised paper's claim that "the inference strategy is compatible with graphs containing cycles".
test
[ "dE91BoVH8G", "pBb6KUrotsJ", "Czqo2ecgyH", "febx1ynFMLX", "J8FHJsER9Od", "MXQ0SBKZYR7", "fXnI9p_Q4AN", "JPMtSG4Ik7X", "cYvB9kJAVSM", "vNnfWib0ebH", "Z-5XheerRcN", "cFGYMkpITtg" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response and clarification of some of my questions and issues. Unfortunately, the main discrepancy I see in the paper remains unresolved: Is it a conceptual paper proposing a new differentiable approach for supervised learning and prediction in MRFs with infinite state spaces? Then it would req...
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 3, 4 ]
[ "fXnI9p_Q4AN", "febx1ynFMLX", "JPMtSG4Ik7X", "cFGYMkpITtg", "Z-5XheerRcN", "fXnI9p_Q4AN", "vNnfWib0ebH", "cYvB9kJAVSM", "iclr_2022_QKEkEFpKBBv", "iclr_2022_QKEkEFpKBBv", "iclr_2022_QKEkEFpKBBv", "iclr_2022_QKEkEFpKBBv" ]
iclr_2022_naoQDOYsHnS
Learning Pseudometric-based Action Representations for Offline Reinforcement Learning
Offline reinforcement learning is a promising approach for practical applications since it does not require interactions with real-world environments. However, existing offline RL methods only work well in environments with continuous or small discrete action spaces. In environments with large and discrete action spaces, such as recommender systems and dialogue systems, the performance of existing methods decreases drastically because they suffer from inaccurate value estimation for a large proportion of out-of-distribution (o.o.d.) actions. While recent works have demonstrated that online RL benefits from incorporating semantic information in action representations, unfortunately, they fail to learn reasonable relative distances between action representations, which is key for offline RL to reduce the influence of o.o.d. actions. This paper proposes an action representation learning framework for offline RL based on a pseudometric, which measures both the behavioral relation and the data-distributional relation between actions. We provide theoretical analysis on the continuity and the bounds of the expected Q-values using the learned action representations. Experimental results show that our methods significantly improve the performance of two typical offline RL methods in environments with large and discrete action spaces.
Reject
The paper proposes a new pseudometric for action representations. The reviewers generally liked the work, and the rebuttal helped to clarify many concerns. However, the degree of novelty of the approach remains a concern. In addition, a technical error was discovered by a reviewer in the revised paper. Hence the paper is not ready for publication.
train
[ "SAyJ_tK2m05", "Eg4ITq7YNW7", "mOOyRavqZAJ", "LC1Um4Hhuw-", "JrIigOM5en2", "Nl8SzMtKjrY", "G4aHOdcJGQL", "ADx-CcPGJGt", "eM8VKgf-O6N", "zcle3KXm0t", "FUw4nIgrXDR", "7nGIlVPFtf", "kJ2ugxE1UUV" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thanks for the update. \n\nTo make it more clear, we think that the metric proposed in [1] cares about the distance between state-action pairs, and is thus suitable for constraining the policy to stay close to the dataset. By contrast, BMA cares about the distance between pure actions, and is thus more suitable for learning action rep...
[ -1, -1, -1, 5, 6, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ -1, -1, -1, 4, 2, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "Nl8SzMtKjrY", "mOOyRavqZAJ", "G4aHOdcJGQL", "iclr_2022_naoQDOYsHnS", "iclr_2022_naoQDOYsHnS", "ADx-CcPGJGt", "LC1Um4Hhuw-", "JrIigOM5en2", "kJ2ugxE1UUV", "7nGIlVPFtf", "iclr_2022_naoQDOYsHnS", "iclr_2022_naoQDOYsHnS", "iclr_2022_naoQDOYsHnS" ]
iclr_2022_YRDlrT00BP
On Transportation of Mini-batches: A Hierarchical Approach
Mini-batch optimal transport (m-OT) has been successfully used in practical applications that involve probability measures with a very high number of supports. The m-OT solves several smaller optimal transport problems and then returns the average of their costs and transportation plans. Despite its scalability advantage, the m-OT does not consider the relationship between mini-batches which leads to undesirable estimation. Moreover, the m-OT does not approximate a proper metric between probability measures since the identity property is not satisfied. To address these problems, we propose a novel mini-batching scheme for optimal transport, named Batch of Mini-batches Optimal Transport (BoMb-OT), that finds the optimal coupling between mini-batches and it can be seen as an approximation to a well-defined distance on the space of probability measures. Furthermore, we show that the m-OT is a limit of the entropic regularized version of the BoMb-OT when the regularized parameter goes to infinity. Finally, we present the new algorithms of the BoMb-OT in various applications, such as deep generative models and deep domain adaptation. From extensive experiments, we observe that the BoMb-OT achieves a favorable performance in deep learning models such as deep generative models and deep domain adaptation. In other applications such as approximate Bayesian computation, color transfer, and gradient flow, the BoMb-OT also yields either a lower quantitative result or a better qualitative result than the m-OT.
Reject
This paper proposes a way to make mini-batch optimal transport (m-OT) more efficient by computing an optimal assignment (in the OT sense) and using this assignment to compute a hierarchical OT loss (BoMb-OT) that can be used instead of the m-OT loss. The authors discuss how the equivalent OT plan with BoMb-OT is much more sparse, and how the proposed approach is actually not biased when the number of mini-batches $k\rightarrow \infty$. Numerical experiments show that the proposed method yields a gain in performance in applications such as generative modeling, domain adaptation, color transfer, and approximate Bayesian computation. The paper originally received borderline-negative scores from the reviewers. While the reviewers acknowledged that the idea is interesting, they had some concerns about the strength of the theoretical results and about missing baselines and discussions in the numerical experiments. The authors gave a detailed reply that clarified some problems. The new numerical experiments with m-UOT were also greatly appreciated by the reviewers, but they also raised some questions about the paper. Some concerns about the comparison with m-OT, detailed below, appeared during the reviewer discussion. Despite the new information, the reviewers reached an agreement that this paper is interesting but needs more work and another round of reviews before acceptance. For these reasons the AC recommends rejection for this paper. More details and suggestions below: - While it is clearly not the objective of the paper, a discussion of the proximity of the average plan to the exact OT plan would be interesting. A short numerical experiment showing that the BoMb-OT average plan is closer to the exact plan than the m-OT one would also be a good illustration of the better performance of BoMb-OT. This seems more important for the paper than the color transfer experiment, which is something of a toy problem. 
- After checking the definition in the paper and the discussion between reviewers, it appeared that the comparison with m-OT is a bit unfair due to the reformulation of the problem in (1). Indeed, in the usual formulation, k pairs of independent mini-batches are used and the OT is done on those pairs (a sum of k OT problems), not on all the possible pairwise permutations as in the definition of m-OT in equation (1). In other words, in m-OT the batches are supposed to be independent, which is not the case in the proposed formulation (it is equivalent in the population case though). It means that in practical applications, for the same computational complexity ($k^2$ OT problems computed), m-OT actually uses $k^2m$ independent samples from each distribution, whereas BoMb-OT (and the m-OT defined in equation (1)) use $km$ samples. By implementing m-OT as in (1), the authors actually prevent m-OT from exploring the dataset as its original formulation does. This means that all the experiments should be done either with the original m-OT implementation or with both the original and (1), in addition to BoMb-OT. The proposed method will probably work better, but the current experiments do not allow this fair comparison. - The theoretical results need more discussion and justification. For instance, m-OT converges to its population value in $O(m^{1/2}n^{-1/2}+k^{-1/2})$, which is independent of the dimensionality $d$, but the authors prove the concentration of BoMb-OT in $O(m^{1/2}n^{-1/d})$, which is clearly a problem for large $d$. Also, the dependence of the convergence on $k$ would be important, since the claim that BoMb-OT is well defined is true only in the population case where $k$ is large. Note that the claim that it is well defined and hence better is also a bit dubious, because it is well defined for $k=\infty$, which is also the case for m-OT when $m=\infty$. 
Both $m$ and $k$ large lead to impractical optimization problems, so the two are comparable, except that m-OT converges to the true OT plan when $m\rightarrow \infty$, which is not the case for BoMb-OT. - While the contribution of the paper is indeed methodological and does not need to be state of the art on all applications, the numerical experiments should be improved. First, as discussed above, the comparison with m-OT is actually unfair and does not correspond to what is done in practice (where all mini-batches are independent). m-OT should be implemented with $k^2$ truly independent mini-batches. - Second, the authors use an approximate W2 on two of the GAN datasets and FID on the third. This is a problem because the approximate W2 is not defined in the paper. FID is the standard performance measure and should be used for all datasets. - Third, the new comparison experiments also raise a lot of questions. m-UOT is far better than BoMb-OT, suggesting that unbalanced OT can compensate for the limits of m-OT far better than BoMb-OT itself. Yes, there is a slight increase in performance for ebomb-UOT over m-UOT, but it is so small (0.08%) that it is hard to find it significant, especially since no variance is reported. This result, provided only for the DA application, actually suggests that the competitor of BoMb-OT is m-UOT and not m-OT, so m-UOT should also be part of the comparison in the other experiments. The authors discuss the limits of m-UOT in their reply, but stating that the experiments are not done in the paper is not an excuse for not evaluating this clear competitor on other problems and showing these limits numerically. - Finally, the current version of the paper puts a lot of material in the appendix, which makes the paper clearly not self-contained. Some experiments, for instance the color transfer, could go in the appendix/supplement to make room for more details in the main paper. 
Note that it is not any single one of the comments above that led to the reject decision, but their sum, which clearly shows that the paper needs more work.
val
[ "aFUiJJWiXC", "GUnjjVFi8kk", "xs9USlaNq58", "_esSnQzWYO7", "8tVY8b7vekQ", "iKji3-TtxuR", "bPZfn29bruY", "rjmYZTYQCCN", "TwioRrWuwha", "OdTCNJPVBSD", "bBkFyPXAiv", "jar5OOsMwYd", "4KHqurmw06D", "sRbc0crz5zow", "vxeHhbFtEiG", "uu4aKofb-ei", "FZ20DsDhxX7W", "7zWI-4MmLEs", "Ymck29t6E...
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", ...
[ " 1. We thank you for your responses. Our response also sheds some light on the input marginals. Since we have an additional layer for the optimal transport between mini-batches, obtaining an upper bound for the BoMb's minibatch transport plan and the input marginals requires several new proof techniques as we need...
[ -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6 ]
[ -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "OdTCNJPVBSD", "TNKiQwur6bC", "X4ARTWur5By", "TwioRrWuwha", "rjmYZTYQCCN", "iclr_2022_YRDlrT00BP", "sRbc0crz5zow", "JP09fbWOYxb", "FZ20DsDhxX7W", "uu4aKofb-ei", "X4ARTWur5By", "iclr_2022_YRDlrT00BP", "FZ20DsDhxX7W", "iKji3-TtxuR", "TNKiQwur6bC", "iKji3-TtxuR", "iKji3-TtxuR", "X4ART...
iclr_2022_cggphp7nPuI
Reasoning-Modulated Representations
Neural networks leverage robust internal representations in order to generalise. Learning them is difficult, and often requires a large training set that covers the data distribution densely. We study a common setting where our task is not purely opaque. Indeed, very often we may have access to information about the underlying system (e.g. that observations must obey certain laws of physics) that any "tabula rasa" neural network would need to re-learn from scratch, penalising data efficiency. We incorporate this information into a pre-trained reasoning module, and investigate its role in shaping the discovered representations in diverse self-supervised learning settings from pixels. Our approach paves the way for a new class of data-efficient representation learning.
Reject
The paper proposes Reasoning-Modulated Representations (RMR). That is, it shows how to incorporate structured prior knowledge (such as a law of physics) into a pre-trained reasoning module, and investigates how doing so shapes the discovered representations in a number of self-supervised learning settings from pixels. The reviews and (short) discussion have presented salient arguments about the suitability of the paper for publication at this stage. One review argues that the "methodological contribution is minimal," another one asks for a "systematic evaluation" of the main claims made. Moreover, while we all agree that the direction is interesting, the RMR approach presented is not (yet) shown to "scale well," as pointed out by one review. This, however, is important, since the general idea that prior knowledge shapes the learned representation is common wisdom in the literature. Indeed, one may now argue that the paper is much more about "how best to combine pixel-based deep learning and neural algorithmic reasoning algorithms," as one reviewer puts it. From this perspective the ATARI experiments are more interesting, but here the benefit compared to C-SWM seems to be marginal, and one should compare to other deep baselines conditioned on the RAM; moreover, the significance analysis does not look at the difference in scores and degrees of freedom but just at the number of wins. Additionally, there should be other baselines that directly make use of more structured models (structure = prior knowledge, e.g., HMMs or some other way to have a bit of memory), other datasets (where no access to the RAM exists), as well as a discussion of other approaches that combine (combinatorial) reasoning with pixel-based deep learning. That is, while pushing for a more high-level contribution is fine, this also requires some more illustration and discussion of the broader context. Therefore, my overall recommendation is to reject the paper at this stage.
train
[ "6tt3bpsHU_", "XGCZ00Lyz1q", "76YJiVjE1lR", "HQfSMnpwleF", "NkGsdN9a6QJ", "ZQIy9H5cip8", "_KqG0rd6wTb", "bWWZe6G57vC", "sld_U7tohz4", "yyZkTUchPZ", "B39q5cg0x8k", "CVrz2X_QoqK", "u-CCY8y5fHT", "3N2YHH3jt7b" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks to the authors for this additional discussion. I just want to clarify that I don't find this suggestion (that RMR can be used to connect real-world data to classical, abstract algorithms) entirely implausible. It's just that, given the present experiments, this idea is merely speculative. The present exper...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "XGCZ00Lyz1q", "_KqG0rd6wTb", "iclr_2022_cggphp7nPuI", "bWWZe6G57vC", "sld_U7tohz4", "B39q5cg0x8k", "yyZkTUchPZ", "76YJiVjE1lR", "u-CCY8y5fHT", "3N2YHH3jt7b", "CVrz2X_QoqK", "iclr_2022_cggphp7nPuI", "iclr_2022_cggphp7nPuI", "iclr_2022_cggphp7nPuI" ]
iclr_2022_Y1O-K5itG09
Deep Ensemble as a Gaussian Process Posterior
Deep Ensemble (DE) is a flexible, feasible, and effective alternative to Bayesian neural networks (BNNs) for uncertainty estimation in deep learning. However, DE is broadly criticized for lacking a proper Bayesian justification. Some attempts try to fix this issue, but they are typically coupled with a regression likelihood or rely on restrictive assumptions. In this work, we propose to define a Gaussian process (GP) approximate posterior with the ensemble members, based on which we perform variational inference directly in the function space. We further develop a function-space posterior regularization mechanism to properly incorporate prior knowledge. We demonstrate the algorithmic benefits of variational inference in the GP family, and provide strategies to make the training feasible. As a result, our method incurs only a marginally higher training cost than the standard Deep Ensemble. Empirically, our approach achieves better uncertainty estimation than the existing Deep Ensemble and its variants across diverse scenarios.
Reject
This paper proposes to use deep ensembles to parameterize a variational Gaussian process posterior, and uses an additional L2 penalty on parameters of the neural networks, and an (MC) NN-GP prior (although the prior is a free design choice). Reviewers appreciated aspects of the paper, finding there to be a minor improvement in uncertainty calibration over regularized deep ensembles, and nice results for the contextual bandit experiments. Ultimately, however, after a healthy and active exchange between reviewers and authors, four out of five reviewers are voting to reject the paper. There is a belief that the paper can be substantially improved from its current form, by carefully accommodating reviewer feedback, but it is not currently at a stage ready for publication. There were common themes in the concerns expressed by several reviewers. Many reviewers found the technical contributions incremental. Parametrizing a GP using deep ensembles, or adding L2 regularization, is not itself a major technical contribution, and the variational framework leans heavily on Sun et. al (2019) and work that came before it from Titsias (2009). Similarly, the theoretical contributions were found to be incremental. These concerns about the technical contributions may have been counterbalanced if the experimental results had been outstanding or the framing of the paper perceived to be very clear and well justified. However, the experimental results had a mixed reception, with several reviewers noting accuracy was not in fact much better than the simpler regularized deep ensembles, despite some improvements in uncertainty calibration. One reviewer liked the bandit experiments, but wished there was a deeper exploration of this application domain. The current experimental results do not seem to warrant the relative complexity of the approach over simple regularized deep ensembles. Additionally, several reviewers found the framing and presentation of the paper needing significant work. 
The introduction of the L2 regularization terms, for example, was perceived to be overly complex, involving several steps that were not well-motivated. Several reviewers also found the motivation about making deep ensembles Bayesian unconvincing. A procedure being sensitive to initialization, or unreliable in certain settings, does not mean it does not perform approximate Bayesian inference. For example, variational methods and Laplace approximations can depend on initialization, and could get stuck in poor local optima. Quoting papers referring to deep ensembles as non-Bayesian is also not an argument in itself. The blog post linked by a reviewer is clearly pushing back against these claims, and does address points raised in the discussion, such as unimodal approximations and theoretical guarantees. As reviewers have also noted, several papers have now provided plain deep ensembles with a Bayesian justification, and these papers should be acknowledged. It could be reasonable to argue that your paper makes deep ensembles _more_ Bayesian, and you could potentially try to measure this claim in a concrete way. Or you could simply argue that your approach helps reduce sensitivity to initialization, and represents solutions with lower posterior density, which can be helpful practical contributions and don't need to be tied to claims about the method being Bayesian. Please thoughtfully reflect on the reviewer comments in updated versions of the paper. The reviewers put a lot of effort into providing feedback and engaging during the rebuttal period. While the paper has some nice features, there is significant room for improvement on several fronts: technical innovation, experimental investigation, and framing. Improving the framing will help, but working further to also address other concerns will likely be needed to sway reviewers.
train
[ "mt0lvM_QUqq", "ionHlpSB97l", "XTZPCJqTcC", "woDOz9upnG0", "VYPp89oYJoL", "_fKPqTr31V", "NVUWQd1aBHk", "B1iJ0TWtChI", "BfYI16CUD6", "s8gvsKwZBZG", "LyegeAzCfY", "1D_ow3pgrh", "eH9zUp0cVE", "BAylkpGLD9E", "S6Raupa0dup", "XDDUUlfCHlr", "ML4G6Vc9VW", "Okjddv5dVWj", "R_5BlVd0mUD", ...
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_...
[ " We thank Reviewer uL2a and Reviewer Bycy again for the valuable comments. But we are sorry to still hear that the argument \"make deep ensembles Bayesian\" is not well-motivated.\n\nWe clarify that this argument is widely accepted in the Bayesian deep learning community. In particular, Pearce et al., 2020 stated ...
[ -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 5, 5, 5 ]
[ -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 4 ]
[ "iclr_2022_Y1O-K5itG09", "eH9zUp0cVE", "B1iJ0TWtChI", "VYPp89oYJoL", "s8gvsKwZBZG", "iclr_2022_Y1O-K5itG09", "R_5BlVd0mUD", "Okjddv5dVWj", "iclr_2022_Y1O-K5itG09", "LyegeAzCfY", "XLo2q0MSJTp", "RNknkdX384Q", "HGxrtdG8fM", "HGxrtdG8fM", "_fKPqTr31V", "iclr_2022_Y1O-K5itG09", "HGxrtdG8...
iclr_2022_IY6Zt3Qu0cT
Fragment-Based Sequential Translation for Molecular Optimization
Searching for novel molecular compounds with desired properties is an important problem in drug discovery. Many existing frameworks generate molecules one atom at a time. We instead propose a flexible editing paradigm that generates molecules using learned molecular fragments---meaningful substructures of molecules. To do so, we train a variational autoencoder (VAE) to encode molecular fragments in a coherent latent space, which we then utilize as a vocabulary for editing molecules to explore the complex chemical property space. Equipped with the learned fragment vocabulary, we propose Fragment-based Sequential Translation (FaST), which learns a reinforcement learning (RL) policy to iteratively translate model-discovered molecules into increasingly novel molecules while satisfying desired properties. Empirical evaluation shows that FaST significantly improves over state-of-the-art methods on benchmark single/multi-objective molecular optimization tasks.
Reject
Reviewers agree that the paper is well-motivated and the proposed method is somewhat interesting and well-experimented. However, reviewers feel that the paper relies on many existing methods and does not appear to be novel enough.
train
[ "7rr4Z70HMT", "CTFXD_OEpcw", "-2_xoYOtVzl", "EqKW_m9nAMh", "OgSCS-tMkGQ", "xlZqrLidmtb", "Te0FXFi8JfS", "FYegsyZ0jeD", "R9cD7sVhbj3", "0u9G_ScwnQq", "xL3BsmvGiM5", "XYAm9fd2-Fw", "UQLYfEoFK0v", "fDMZ4gNTvgl" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your reply and for updating the manuscript. The motivation is now much clearer, and there is evidence supporting some statements that were indicated by me and other reviewers. However, I am still not convinced by the experimental section. The additional experiment shown in Figure 2 is the first step...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "EqKW_m9nAMh", "-2_xoYOtVzl", "OgSCS-tMkGQ", "R9cD7sVhbj3", "fDMZ4gNTvgl", "UQLYfEoFK0v", "xlZqrLidmtb", "XYAm9fd2-Fw", "xL3BsmvGiM5", "iclr_2022_IY6Zt3Qu0cT", "iclr_2022_IY6Zt3Qu0cT", "iclr_2022_IY6Zt3Qu0cT", "iclr_2022_IY6Zt3Qu0cT", "iclr_2022_IY6Zt3Qu0cT" ]
iclr_2022_Tu6SpFYWTA
Antonymy-Synonymy Discrimination through the Repelling Parasiamese Neural Network
Antonymic and synonymic pairs may both occur nearby in word embedding spaces because they have similar distributional information. Different methods have been used to distinguish antonyms from synonyms, making antonymy-synonymy discrimination a popular NLP task. In this work, we propose the repelling parasiamese neural network, a model which combines a siamese network for synonymy and a parasiamese network for antonymy, both sharing the same base network. Relying on the antagonism between synonymy and antonymy, the model attempts to repel the siamese and parasiamese outputs, making use of contrastive loss functions. We experimentally show that the repelling parasiamese network achieves state-of-the-art results on this task.
Reject
The paper presents a new approach for distinguishing synonyms and antonyms via an extension of a parasiamese neural network, called "the repelling parasiamese network". The strengths of the paper, as identified by reviewers, are a novel architecture for antonymy detection, a new dataset, and solid empirical results. However, there are major drawbacks identified by reviewers w5dj and hoTU. Specifically, there are clarity issues in writing, lack of a proper justification to the proposed architecture, insufficient details about the quality of datasets, insufficient contextualization in prior work. The scores are borderline, but unfortunately, the authors did not use the rebuttal opportunity to sufficiently address these questions/concerns raised by the reviewers. I thus recommend to reject the paper.
train
[ "Iyt7AhSZx30", "0kz0pXAwBo9", "Lt_e9lKqPqQ", "W3Oewv--uiC", "K-RT1R0nKyY", "zWgk11FTAqN" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " \n> I think there are some nice contributions here. It is always nice to find a new dataset such as Fallows. The triplet suggestion is also nice.\n\nThanks!\n\n>The repelling suggestion is a nice improvement over Etcheverry and Wonsever.\n\nThanks!\n\n> It would help the community to provide a github.\n\nA github...
[ -1, -1, -1, 6, 3, 6 ]
[ -1, -1, -1, 3, 4, 3 ]
[ "zWgk11FTAqN", "K-RT1R0nKyY", "W3Oewv--uiC", "iclr_2022_Tu6SpFYWTA", "iclr_2022_Tu6SpFYWTA", "iclr_2022_Tu6SpFYWTA" ]
iclr_2022_GU11Lbci5J
Understanding AdamW through Proximal Methods and Scale-Freeness
Adam has been widely adopted for training deep neural networks due to less hyperparameter tuning and remarkable performance. To improve generalization, Adam is typically used in tandem with a squared $\ell_2$ regularizer (referred to as Adam-$\ell_2$). However, even better performance can be obtained with AdamW, which decouples the gradient of the regularizer from the update rule of Adam-$\ell_2$. Yet, we are still lacking a complete explanation of the advantages of AdamW. In this paper, we tackle this question from both an optimization and an empirical point of view. First, we show how to re-interpret AdamW as an approximation of a proximal gradient method, which takes advantage of the closed-form proximal mapping of the regularizer instead of only utilizing its gradient information as in Adam-$\ell_2$. Next, we consider the property of ``scale-freeness'' enjoyed by AdamW and by its proximal counterpart: their updates are invariant to component-wise rescaling of the gradients. We provide empirical evidence across a wide range of deep learning experiments showing a correlation between the problems in which AdamW exhibits an advantage over Adam-$\ell_2$ and the degree to which we expect the gradients of the network to exhibit multiple scales, thus motivating the hypothesis that the advantage of AdamW could be due to the scale-free updates.
Reject
I agree that reviewer Vxer was confrontational and abusive, especially in the response to the author's rebuttal, and believe that some form of sanction or reprimand is appropriate. That said, I do think that "performance" should be evaluated on both convergence rate and generalization. Figure 3 does suggest some improvement on generalization for deep versions of resnet without batch normalization. The three less offensive reviewers all indicated weak acceptance. One reviewer pointed out the weakness of only getting positive results on deep versions of resnet with batch normalization removed. Results on transformers, where Adam is typically used, would be more compelling. This is my primary issue with the paper. It has not demonstrated improvement in the standard practice of resnet (with batch normalization) and has not presented experiments on transformers. The theoretical analysis is not aimed at explaining why the improvement is only observed on deep resnets with batch normalization removed or why L2 regularization seems to be of no value when batch normalization is present. I understand that these are very difficult questions. The paper has no champion and I am personally concerned about the significance of the contribution.
val
[ "IfXJDFERzUL", "58obNforPSk", "rlPrCTGdEBp", "PVlYFQiNtzO", "GngG9tChFe", "M-D-EB-N4S", "NZ0iqt3wV2", "CYBildVF5-Q", "ugQtQMSiN5H", "XGUrFvrLKcA", "SBAJBfcTugZ", "w2E61pF8Q5E", "GNiC4ab9s2", "2HtmTrqd-Wj", "3kJeEfaYiK5", "uD2QLshNmq", "NmwAgl8Ofm", "ZI9kVPy-D3G" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In this paper, the authors first interpret AdamW as an approximation of proximal mapping, and then proposed their own algorithm AdamProx. They aim to unravel the connection between AdamW and AdamProx from both theoretical and empirical perspective. They delicately designed some probing experiments to verify their ...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2022_GU11Lbci5J", "CYBildVF5-Q", "PVlYFQiNtzO", "w2E61pF8Q5E", "ugQtQMSiN5H", "w2E61pF8Q5E", "GNiC4ab9s2", "ugQtQMSiN5H", "3kJeEfaYiK5", "iclr_2022_GU11Lbci5J", "ZI9kVPy-D3G", "NmwAgl8Ofm", "uD2QLshNmq", "iclr_2022_GU11Lbci5J", "IfXJDFERzUL", "iclr_2022_GU11Lbci5J", "iclr_2022_...
iclr_2022_oaKw-GmBZZ
Learning Time-dependent PDE Solver using Message Passing Graph Neural Networks
One of the main challenges in solving time-dependent partial differential equations is to develop computationally efficient solvers that are accurate and stable. Here, we introduce a general graph neural network approach that learns efficient PDE solvers using message-passing models. We first introduce domain-invariant features for PDE data inspired by classical PDE solvers for an efficient physical representation. Next, we use graphs to represent PDE data on an unstructured mesh and show that message-passing graph neural networks (MPGNN) can parameterize governing equations and, as a result, efficiently learn accurate solver schemes for linear/nonlinear PDEs. We further show that the solvers are independent of the initial training geometry and can solve the same PDE on more complex domains. Lastly, we show that a recurrent graph neural network approach can find a temporal sequence of solutions to a PDE.
Reject
Thank you for your submission to ICLR. While all reviewers felt that there were some interesting aspects to the proposed work, the consensus was also that the work didn't properly situate itself within the existing literature on related methods. In particular, I agree with Reviewer kLFD that a numerical comparison to Pfaff et al. is notably missing here; while the authors did provide qualitative comparisons in their discussion, it's not clear to me that these differences are ultimately that significant, and the methods need to be compared directly if a case is to be made for the advantages of the proposed approach.
test
[ "kF5NWzSZCUd", "pFAiXqiEw5j", "NmAFznVHc0-", "aYgGN67Twax", "KhAAHk6Dia8", "2a7TTlRz2Me", "fyq0d2T2b2l", "olo73ZYmIF", "zachQcoeq9R", "UXnJfSw8HTl", "s8-bPypWgV", "oIoFuljQpS4", "OcoRXVnui0s" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your detailed explanations and revision. I can see the efforts in the rebuttal and updated paper did improve the paper, and that is also praised by other reviewers. But I am not convinced that the improvement is sufficient for me to adjust the initial score. I really appreciate the efforts authors ...
[ -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, 5, 3, 6 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, 5, 4, 3 ]
[ "UXnJfSw8HTl", "2a7TTlRz2Me", "olo73ZYmIF", "zachQcoeq9R", "iclr_2022_oaKw-GmBZZ", "fyq0d2T2b2l", "KhAAHk6Dia8", "OcoRXVnui0s", "oIoFuljQpS4", "s8-bPypWgV", "iclr_2022_oaKw-GmBZZ", "iclr_2022_oaKw-GmBZZ", "iclr_2022_oaKw-GmBZZ" ]
iclr_2022_MeMMmuWRXsy
Robust Robotic Control from Pixels using Contrastive Recurrent State-Space Models
Modeling the world can benefit robot learning by providing a rich training signal for shaping an agent's latent state space. However, learning world models in unconstrained environments over high-dimensional observation spaces such as images is challenging. One source of difficulty is the presence of irrelevant but hard-to-model background distractions, and unimportant visual details of task-relevant entities. We address this issue by learning a recurrent latent dynamics model which contrastively predicts the next observation. This simple model leads to surprisingly robust robotic control even with simultaneous camera, background, and color distractions. We outperform alternatives such as bisimulation methods which impose state-similarity measures derived from divergence in future reward or future optimal actions. We obtain state-of-the-art results on the Distracting Control Suite, a challenging benchmark for pixel-based robotic control.
Reject
Meta Review of Robust Robotic Control from Pixels using Contrastive Recurrent State-Space Models This work investigates a recurrent latent space planning model for robotic control from pixels, but unlike some previous work such as Dreamer and RNN+VAE-based World Models, they use a simpler contrastive loss for next-observation prediction. They presented results on the DM-control suite (from pixels) with distracting background settings. All reviewers (including myself) agree that this is a well-written paper, with a clear explanation of their approach. The main weaknesses of the approach are on the experimental side (see review responses to the authors' rebuttal by skrV and cjX3). Another recommendation from me is to strengthen the related work section to clearly position the work relative to previous work - there is clear novelty in this work, but this should be done to avoid confusion. The positive sign is that in the discussion phase, even the very critical cjX3 had increased their score and acknowledged the novelty relative to previous related work. In the current state, I cannot recommend acceptance, but I'm confident that with more compelling experiments recommended by the reviewers, and better positioning of the paper relative to previous work, this paper will surely be accepted at a future ML conference or journal. I'm looking forward to seeing a revised version of this paper for publication in the future.
train
[ "bAsyaEtLsO", "0aCABPZdSwp", "SECOp22JoY", "ZRxfU4E9r1l", "PgkMsi7_nnU", "cJ6k2XDYlh", "gTmVgKU_yUh", "27PjNdRODxs", "u2n1VTkOARM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper shows a latent-planning RL model to\nlearn control policies directly from pixels. To learn a better\nrepresentation it uses recurrent model contrastive learning approach\nwhich enhances the representation learning performance of single\nframe based contrastive methods.\nThis was tested robotic control s...
[ 5, -1, 3, -1, -1, -1, -1, -1, 5 ]
[ 3, -1, 5, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2022_MeMMmuWRXsy", "cJ6k2XDYlh", "iclr_2022_MeMMmuWRXsy", "PgkMsi7_nnU", "SECOp22JoY", "u2n1VTkOARM", "u2n1VTkOARM", "bAsyaEtLsO", "iclr_2022_MeMMmuWRXsy" ]
iclr_2022_-uZp67PZ7p
Multi-Agent Reinforcement Learning with Shared Resource in Inventory Management
We consider the inventory management (IM) problem for a single store with a large number of SKUs (stock keeping units) in this paper, where we need to make replenishment decisions for each SKU to balance its supply and demand. The SKUs should cooperate with each other to maximize profits, while also competing for shared resources, e.g., warehouse space and budget. The co-existence of cooperation and competition behaviors makes IM a complicated game, hence IM can be naturally modelled as a multi-agent reinforcement learning (MARL) problem. In the IM problem, we find that agents interact with each other only indirectly through some shared resources, e.g., warehouse space. To formally model MARL problems with the above structure, we propose the shared-resource stochastic game along with an efficient algorithm to learn policies, particularly for a large number of agents. By leveraging the shared-resource structure, our method can greatly reduce model complexity and accelerate the learning procedure compared with standard MARL algorithms, as shown by extensive experiments.
Reject
This is a pretty nice paper, but it suffers a bit from being in an 'uncanny valley' between application and research. The approach has clearly been motivated by, and derives from, the application under consideration. However, the application is not a real application, but rather a simplified simulation. That's okay, but it means that the application here is not the real goal. So, our attention should go to the solution technique. Unfortunately, this seems rather specific, exploiting known structure for the specific problem at hand, and lacking other reasonable baselines one could imagine. So, this is not really an application paper, as the application in the paper is a proxy. But this is also not really an algorithm paper, as the algorithm is not clearly shown to be generalisable to other settings. And this is also not a theory paper that tells us something general and meaningful. These are just observations - this is not criticism per se. But it means I struggle a little to find something meaningful to learn from this paper that could be applied elsewhere. This, in addition to the overall recommendations by the reviewers, unfortunately leads me to reject the paper in its current form. I want to thank the authors for engaging with the discussion, and hope they have found it interesting and rewarding, despite the outcome this time around.
train
[ "dXDWfv9nAhb", "SdttygFxjF9", "y319vhYYBpL", "h8YCci__9G4", "9h3_uNXy5hN", "JhJ10OpBblH", "FcSczyDgWp", "UAS1qcxV7eS", "nD9QuompW1D", "vqXbB9hoVBC", "jMU4UR9JKPn", "wA-K20kRhpR", "ezC_MW_G8AT", "A5jScIYo45D" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors formulate an inventory optimisation problem as a stochastic game, where each SKU is a player. In order to reduce computational complexity, the game is formulated such that the only interactions among players are through the available storage space they have to share. The paper also proposes an algorith...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "iclr_2022_-uZp67PZ7p", "ezC_MW_G8AT", "wA-K20kRhpR", "9h3_uNXy5hN", "A5jScIYo45D", "dXDWfv9nAhb", "dXDWfv9nAhb", "A5jScIYo45D", "ezC_MW_G8AT", "wA-K20kRhpR", "wA-K20kRhpR", "iclr_2022_-uZp67PZ7p", "iclr_2022_-uZp67PZ7p", "iclr_2022_-uZp67PZ7p" ]
iclr_2022_PVJ6j87gOHz
CoMPS: Continual Meta Policy Search
We develop a new continual meta-learning method to address challenges in sequential multi-task learning. In this setting, the agent's goal is to achieve high reward over any sequence of tasks quickly. Prior meta-reinforcement learning algorithms have demonstrated promising results in accelerating the acquisition of new tasks. However, they require access to all tasks during training. Beyond simply transferring past experience to new tasks, our goal is to devise continual reinforcement learning algorithms that learn to learn, using their experience on previous tasks to learn new tasks more quickly. We introduce a new method, continual meta-policy search (CoMPS), that removes this limitation by meta-training in an incremental fashion, over each task in a sequence, without revisiting prior tasks. CoMPS continuously repeats two subroutines: learning a new task using RL and using the experience from RL to perform completely offline meta-learning to prepare for subsequent task learning. We find that CoMPS outperforms prior continual learning and off-policy meta-reinforcement learning methods on several sequences of challenging continuous control tasks.
Accept (Poster)
This paper develops a novel continual meta-reinforcement learning algorithm that focuses on learning sequential tasks without revisiting previous tasks. The setting is compelling, and the method is well-developed with good empirical results. The initial version of the paper had a variety of issues, especially a lack of clarity in some aspects and in the contributions, that were remedied through discussion with the reviewers and subsequent revisions. The discussion among the reviewers seems to have settled on leaning toward a weak accept overall, with one low score claiming lack of novelty that should be dismissed (the claim isn't correct - the paper certainly is sufficiently novel). There do remain some concerns by two reviewers that although "the paper has enough meat to be accepted, ... [it] needs more careful and well thought out ablations and analysis to be truly valuable." Although the authors have revised the paper to address this issue of a precise analysis, adding material into the appendices with some changes to the main text, they are encouraged to make certain that these aspects are integrated and clear throughout.
train
[ "h8arZyO7pLJ", "L4GE3n3GCgo", "u2ZxzImtQ3h", "kkJX_cc3_cJ", "RUGDN8rbMOa", "ONGIDNgMa0Q", "LtKaPzBGNPX", "9KFaBRGjMva", "Je9WgoS1seQ", "7lUgylSeIce", "MirU_kxdVr", "cgZ25He7ONy", "aho5yOT3ElF", "TLGxwGbd5yt", "r_3KK40sKp", "0BbK56ozmht", "WlWv7t3bfQh", "k_wvr3U1J_g", "r13pUIT4vlZ...
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", ...
[ " Hello,\n\nThank you for your additional discussion on the paper.\n\nAs you say tasks new tasks can have different reward functions or transition models. Many meta-learning papers use these differences to construct different tasks for meta-training [MAML, GMPS, PEARL, MetaWorld]. For example, Metaworld includes en...
[ -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 3 ]
[ -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 4 ]
[ "kkJX_cc3_cJ", "u2ZxzImtQ3h", "9KFaBRGjMva", "2rhfDZYu3NI", "iclr_2022_PVJ6j87gOHz", "aho5yOT3ElF", "zy3neGHkGr6", "TLGxwGbd5yt", "r13pUIT4vlZ", "lVyo9jXWTt", "8E4g0Rt0ZM", "zy3neGHkGr6", "r13pUIT4vlZ", "r_3KK40sKp", "WlWv7t3bfQh", "iclr_2022_PVJ6j87gOHz", "_gJ9CklkLKw", "lVyo9jXWT...
iclr_2022_2DJwuD-elOt
Hybrid Cloud-Edge Networks for Efficient Inference
Although deep neural networks (DNNs) achieve state-of-the-art accuracy on large-scale and fine-grained prediction tasks, they are high-capacity models and often cannot be deployed on edge devices. As such, two distinct paradigms have emerged in parallel: 1) edge-device inference for low-level tasks, 2) cloud-based inference for large-scale tasks. We propose a novel hybrid option, which marries these extremes and seeks to bring the latency and computational cost benefits of edge-device inference to tasks currently deployed in the cloud. Our proposed method is an end-to-end approach, and involves architecting and training two networks in tandem. The first network is a low-capacity network that can be deployed on an edge device, whereas the second is a high-capacity network deployed in the cloud. When the edge device encounters challenging inputs, these inputs are transmitted to and processed in the cloud. Empirically, on the ImageNet classification dataset, our proposed method leads to a substantial decrease in the number of floating point operations (FLOPs) used compared to a well-designed high-capacity network, while suffering no excess classification loss. A novel aspect of our method is that, by allowing abstentions on a small fraction of examples ($<20\%$), we can increase accuracy without substantially increasing the edge device memory and FLOPs (up to $7$\% higher accuracy and $3$X fewer FLOPs on ImageNet with $80$\% coverage), relative to MobileNetV3 architectures.
Reject
This paper aims to improve performance on edge devices by utilizing a large-capacity network in the cloud. To this end, the authors suggest using a routing network that decides whether to use the base model (on the edge device) or the global model (in the cloud). They also propose an overall training scheme for learning not only model parameters, but also network architectures. After the discussion period, 3 reviewers are on the negative side, and 1 reviewer is positive. AC thinks that the authors' response was not enough to convince the negative reviewers. In particular, AC agrees with the negative comments of reviewers on limited novelty, unclear motivation for the proposed method, and unclear presentation. Overall, AC recommends rejection.
train
[ "8aQR8MDpj7m", "BJ0PY4JTWw9", "cJ1bdcSG0mK", "nGouEMiHdl6", "3OZs-zEz1V", "g8_dK8QVb9M", "LHkEsb26YV", "SnZs5golw9E", "Rl55uZo0Tf5", "8QO0Q_iHGuE", "yJmyHIySQNL", "NfKoDDz4E7a", "efwNnzX8MZ_", "p1FgeAnMW3", "7AhA0Dzdyi", "9eK1cRPZtt" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposes to combine the inference on edge and cloud together, taking advantage of the communication-free inference on edge and the high-accuracy model on cloud. It utilizes the OFA to obtain models with different resource requirement at low cost. It achieves higher accuracy on the ImageNet dataset compar...
[ 5, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ 4, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2022_2DJwuD-elOt", "NfKoDDz4E7a", "3OZs-zEz1V", "yJmyHIySQNL", "LHkEsb26YV", "iclr_2022_2DJwuD-elOt", "8QO0Q_iHGuE", "Rl55uZo0Tf5", "g8_dK8QVb9M", "7AhA0Dzdyi", "9eK1cRPZtt", "8aQR8MDpj7m", "p1FgeAnMW3", "iclr_2022_2DJwuD-elOt", "iclr_2022_2DJwuD-elOt", "iclr_2022_2DJwuD-elOt" ]
iclr_2022_CES-KyrKcTM
The weighted mean trick – optimization strategies for robustness
We prove that minimizing a weighted mean results in optimizing the higher-order moments of the loss distribution, such as the variance, skewness, and kurtosis. By optimizing the higher-order moments, one can tighten the upper bound on the loss mean's deviation from the true expectation and improve the robustness against outliers. Such optimization problems often lead to non-convex objectives; therefore, we explore the extent to which the proposed weighted mean trick preserves convexity, albeit at times at a decrease in efficiency. Experimental results show that the weighted mean trick exhibits performance similar to other specialized robust loss functions when training on noisy datasets, while providing a stronger theoretical background. The proposed weighted mean trick is a simple yet powerful optimization framework that is easy to integrate into existing works.
Reject
The paper investigates weighted empirical risk minimization where the weight on an example in the training set is given by a polynomial function evaluated on the loss on that example. The authors show that the choice of the weighting function induces a data-dependent variance penalization in the training objective. The authors present an algorithm for weighted ERM and empirical results to support their claims. While the problem setting is broadly relevant and the approach the authors take in this paper is interesting, several questions remain unanswered. First, the authors argue that variance penalization helps but do not compare with other regularized ERM approaches. Second, it is not clear if the proposed algorithm is indeed gradient descent on the weighted ERM objective, as pointed out by one of the reviewers. Finally, the writing can be improved with more emphasis on the novelty and significance of the contributions. I believe the initial comments from the reviewers have already helped improve the quality of the paper. I encourage the authors to further incorporate the feedback and work towards a stronger submission.
test
[ "nc1bInZpUwv", "qEX5pY9hC-f", "eBj8hnKylBW", "0dZqrtlIf0", "h-Nt1dMUfDt", "cfkVO9ERi80", "NBydEudeTWI", "A-mJMq6bzTs", "eKpJvv1MDzF", "oHy1uFcDrs1", "ndIhudB5gp", "t1I5OUDdZqZ", "IbX6A5OBItF", "DzkBXh-vZm9" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for their extensive response. The revisions provided improve the paper in my opinion.\n\nStill, in absence of any theoretical hint on how to choose the $\\lambda_i$ to improve ERM, I really feel that the empirical improvement is only due to the tuning (the best values found will always be bett...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 1, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 4 ]
[ "cfkVO9ERi80", "eBj8hnKylBW", "eKpJvv1MDzF", "DzkBXh-vZm9", "IbX6A5OBItF", "NBydEudeTWI", "A-mJMq6bzTs", "ndIhudB5gp", "oHy1uFcDrs1", "t1I5OUDdZqZ", "iclr_2022_CES-KyrKcTM", "iclr_2022_CES-KyrKcTM", "iclr_2022_CES-KyrKcTM", "iclr_2022_CES-KyrKcTM" ]
iclr_2022_BsDYmsrCjr
Scalable Robust Federated Learning with Provable Security Guarantees
Federated averaging, the most popular aggregation approach in federated learning, is known to be vulnerable to failures and adversarial updates from clients that wish to disrupt training. While median aggregation remains one of the most popular alternatives to improve training robustness, the naive combination of median and secure multi-party computation (MPC) is unscalable. To this end, we propose an efficient approximate median aggregation with MPC privacy guarantees on the multi-silo setting, e.g., across hospitals, with two semi-honest non-colluding servers. The proposed method protects the confidentiality of client gradient updates against both semi-honest clients and servers. Asymptotically, the cost of our approach scales only linearly with the number of clients, whereas the naive MPC median scales quadratically. Moreover, we prove that the convergence of the proposed federated learning method is robust to a wide range of failures and attacks. Empirically, we show that our method inherits the robustness properties of the median while converging faster than the naive MPC median for even a small number of clients.
Reject
There were concerns that the paper has fairly limited novelty, being based on the combination of two known ideas: bucketing and two-party secure median for distributed learning. Also, the scale of the experiments is quite limited. Other issues include the lack of comparison to relevant related work, some doubts about correctness, and issues with independence and scalability that weren't fully resolved. Overall, the reviewers felt that the paper should not be accepted in its current form.
train
[ "D67HX7sdJuu", "a1Ox3UIssm0", "Pjb9IbL8RDI", "L9gptgcwY9", "aryR-9dIr2M", "DYVQW2MDHtU", "KXFTBTu3r0D", "XYPZ-O8rxcD", "p5-LoBXuPSL", "tldh2bWBZgf" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We would like to thank the reviewer for their timely response and reconsideration of our paper. Below are our responses to the suggestions the reviewer brings up.\n\n**[Threat Model and Justification]**\n\nThreat Model: We achieve security according to Appendix B.1 Def. 1 against static, semi-honest corruption of...
[ -1, -1, 5, 5, -1, -1, -1, -1, 5, 3 ]
[ -1, -1, 3, 3, -1, -1, -1, -1, 4, 3 ]
[ "a1Ox3UIssm0", "aryR-9dIr2M", "iclr_2022_BsDYmsrCjr", "iclr_2022_BsDYmsrCjr", "Pjb9IbL8RDI", "L9gptgcwY9", "p5-LoBXuPSL", "tldh2bWBZgf", "iclr_2022_BsDYmsrCjr", "iclr_2022_BsDYmsrCjr" ]
iclr_2022_ErX-xMSek2
A Study on Representation Transfer for Few-Shot Learning
Few-shot classification aims to learn to classify new object categories well using only a few labeled examples. Transferring feature representations from other models is a popular approach for solving few-shot classification problems. In this work we perform a systematic study of various feature representations for few-shot classification, including representations learned from MAML, supervised classification, and several common self-supervised tasks. We find that learning from more complex tasks tends to give better representations for few-shot classification, and thus we propose the use of representations learned from multiple tasks for few-shot classification. Coupled with new tricks on feature selection and voting to handle the issue of small sample size, our direct transfer learning method offers performance comparable to the state of the art on several benchmark datasets.
Reject
This work studies a number of feature representations for few-shot classification, including representations learned from MAML, supervised classification, and some self-supervised tasks. The main conclusion of the study is that learning from more complex tasks results in better representations for few-shot classification. As a practical solution, then, the authors suggest using representations learned from multiple tasks for few-shot classification. The paper studies an important problem in machine learning, and the reviewers all appreciate that. However, they raised concerns about the draft in its current state. The authors replied to these comments, and while reviewers acknowledged and appreciated the responses and the revision of the draft, unfortunately that did not convince them. Several major concerns remained unresolved at the end. Specifically, EEwV believes that the paper is a case study, which, though useful, does not bring deep new insights or findings. 63j7 believes that even after the revisions made to the paper, additional experiments are required to understand and examine the claims. Tk8a finds the submission unready for publication due to weak experimental analysis and suggests running additional experiments to examine the hypotheses made by the authors (e.g., relating to spurious features, the need for input harmonization, the benefit of voting) and to better tie the findings of this work to related work. Tk8a provides a list of concrete suggestions along these lines. Similar to EEwV, 28ox also thinks that the paper lacks novelty and does not really bring new insights on how to train the backbone. Based on these comments, and the ratings, I encourage the authors to address these issues and resubmit.
train
[ "wNedRwIqxp", "JEMz-45aOvw", "VdUtPlkb9gh", "yZ0ccqkU0eP", "L_ml8_nEnCj" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper studies few-shot learning from the viewpoint of transferring feature representations. The authors investigate few-shot performance with varying complexities of source tasks together with a few empirical tricks for improvements. The main finding in the paper is that transferring from more complex source ...
[ 3, 6, 3, 5, 5 ]
[ 4, 4, 5, 4, 5 ]
[ "iclr_2022_ErX-xMSek2", "iclr_2022_ErX-xMSek2", "iclr_2022_ErX-xMSek2", "iclr_2022_ErX-xMSek2", "iclr_2022_ErX-xMSek2" ]
iclr_2022_d5IQ3k7ed__
Finding General Equilibria in Many-Agent Economic Simulations using Deep Reinforcement Learning
Real economies can be seen as a sequential imperfect-information game with many heterogeneous, interacting strategic agents of various agent types, such as consumers, firms, and governments. Dynamic general equilibrium (DGE) models are common economic tools to model the economic activity, interactions, and outcomes in such systems. However, existing analytical and computational methods struggle to find explicit equilibria when all agents are strategic and interact, while joint learning is unstable and challenging. Amongst others, a key reason is that the actions of one economic agent may change the reward function of another agent, e.g., a consumer's expendable income changes when firms change prices or governments change taxes. We show that multi-agent deep reinforcement learning (RL) can discover stable solutions that are $\epsilon$-Nash equilibria for a meta-game over agent types, in economic simulations with many agents, through the use of structured learning curricula and efficient GPU-only simulation and training. Conceptually, our approach is more flexible and does not need unrealistic assumptions, e.g., market clearing, that are commonly used for analytical tractability. Our GPU implementation enables training and analyzing economies with a large number of agents within reasonable time frames, e.g., training completes within a day. We demonstrate our approach in real-business-cycle models, a representative family of DGE models, with 100 worker-consumers, 10 firms, and a government that taxes and redistributes. We validate the learned meta-game $\epsilon$-Nash equilibria through approximate best-response analyses, show that RL policies align with economic intuitions, and that our approach is constructive, e.g., by explicitly learning a spectrum of meta-game $\epsilon$-Nash equilibria in open economic models.
Reject
The authors propose a deep multi-agent RL framework to compute equilibria in an economics problem. Several reviewers raised issues with the presentation, as well as issues with evaluating the impact of the work, partly because the novelty of the approach is made insufficiently clear. While the authors have resolved some of the confusion arising from the presentation in their rebuttal, leading 2 out of the 4 reviewers to increase their scores, the concerns regarding novelty mostly remain. For these reasons, I don't think this work is ready for publication at ICLR at the moment and recommend rejection.
train
[ "_ABFdrOArJy", "bPtidYciUM", "1uNQ0ghPci9", "YxbiOKU01YA", "dPCwQKQ0X18", "Eu6iW9bn7r7", "RvHx8E0OXf", "xUzfVm7FAt7", "fY87aIE_LBl", "hskB1HjuIm8", "g_Z64Jj2IL_", "gZJMHUt_2wJ", "SS8OLJCSAnt", "Mr1uR3fiqQ", "YDznI3WcBHw", "tGbhVjqFJnP", "I4u6njghPWN", "DDUhZrklnEa", "R6b2jH3Ol_S"...
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for their response. I have increased my score, although I still believe that the presentation of the paper should be improved before publication if accepted. ", "This paper proposes a multi-agent deep reinforcement learning (DRL) method to compute general equilibria in economic...
[ -1, 6, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3 ]
[ -1, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "SS8OLJCSAnt", "iclr_2022_d5IQ3k7ed__", "iclr_2022_d5IQ3k7ed__", "Eu6iW9bn7r7", "iclr_2022_d5IQ3k7ed__", "g_Z64Jj2IL_", "xUzfVm7FAt7", "fY87aIE_LBl", "hskB1HjuIm8", "g_Z64Jj2IL_", "YDznI3WcBHw", "Mr1uR3fiqQ", "bPtidYciUM", "YDznI3WcBHw", "1uNQ0ghPci9", "R6b2jH3Ol_S", "DDUhZrklnEa", ...