paper_id: string (length 19–21)
paper_title: string (length 8–170)
paper_abstract: string (length 8–5.01k)
paper_acceptance: string (18 classes)
meta_review: string (length 29–10k)
label: string (3 classes)
review_ids: list
review_writers: list
review_contents: list
review_ratings: list
review_confidences: list
review_reply_tos: list
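Below is a minimal sketch of how records with this schema could be loaded and inspected using the Hugging Face `datasets` library; the dataset path `your-org/iclr-meta-reviews` is a hypothetical placeholder, not the actual repository name, and the `label` column is assumed to store the train/val/test split of each record.

```python
# Minimal sketch, assuming the records below are packaged as a Hugging Face
# dataset; "your-org/iclr-meta-reviews" is a hypothetical placeholder name.
from datasets import load_dataset

ds = load_dataset("your-org/iclr-meta-reviews", split="train")

# The `label` column appears to hold the train/val/test assignment of each record.
train_records = ds.filter(lambda ex: ex["label"] == "train")

example = train_records[0]
print(example["paper_id"])          # e.g. "iclr_2022_G9JXCpShpni"
print(example["paper_acceptance"])  # e.g. "Reject"
print(len(example["review_ids"]))   # number of forum comments for the paper
```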
iclr_2022_G9JXCpShpni
The guide and the explorer: smart agents for resource-limited iterated batch reinforcement learning
Iterated batch reinforcement learning (RL) is a growing subfield fueled by the demand from systems engineers for intelligent control solutions that they can apply within their technical and organizational constraints. Model-based RL (MBRL) suits this scenario well for its sample efficiency and modularity. Recent MBRL techniques combine efficient neural system models with classical planning (like model predictive control; MPC). In this paper we add two components to this classical setup. The first is a Dyna-style policy learned on the system model using model-free techniques. We call it the guide since it guides the planner. The second component is the explorer, a strategy to expand the limited knowledge of the guide during planning. Through a rigorous ablation study we show that exploration is crucial for optimal performance. We apply this approach with a DQN guide and a heating explorer to improve the state of the art of the resource-limited Acrobot benchmark system by about 10%.
Reject
The paper studies Dyna-style MBRL in a resource-limited setting. It is evaluated on an Acrobot task where it shows very promising results. The reviewers appreciated the extensive replies, but they did not fundamentally change their opinion. In particular: (1) lack of a formal problem statement and definitions; (2) the experiment on a single task (and that being a non-standard version) isn't sufficient to demonstrate the general merits of the method. While the ideas are very promising, the paper cannot be published in its current form. We'd hence like to strongly encourage the authors to revise the paper and to re-submit at a different venue.
train
[ "rrbTmmHIDjK", "YnA5KgZnwA6", "i7W8ZysbXvV", "Spe06-bx_38", "-KDz4np0Kfp", "d_3gaZqnBHQ", "OcYoO8sPDWk", "b7SF4KqN2b8", "yzqYWfJib04", "0nWdLHQTBsk" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your reply and taking time to revise the paper with additional clarifications. At this point I will keep my score, as the paper needs major revision, especially taking into account suggestions that were common for the other reviews. I mean in particular\n1. The crucial statements, like the iterated ba...
[ -1, -1, -1, -1, -1, -1, -1, 3, 5, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, 2, 4, 4 ]
[ "-KDz4np0Kfp", "d_3gaZqnBHQ", "OcYoO8sPDWk", "0nWdLHQTBsk", "yzqYWfJib04", "b7SF4KqN2b8", "iclr_2022_G9JXCpShpni", "iclr_2022_G9JXCpShpni", "iclr_2022_G9JXCpShpni", "iclr_2022_G9JXCpShpni" ]
iclr_2022_EgkZwzEwciE
Adversarial Collaborative Learning on Non-IID Features
Federated learning has been a popular approach to enable collaborative learning on multiple parties without exchanging raw data. However, the model performance of federated learning may degrade a lot due to non-IID data. While most existing studies focus on non-IID labels, federated learning on non-IID features has largely been overlooked. Different from typical federated learning approaches, the paper proposes a new learning concept called ADCOL (Adversarial Collaborative Learning) for non-IID features. Instead of adopting the widely used model-averaging scheme, ADCOL conducts training in an adversarial way: the server aims to train a discriminator to distinguish the representations of the parties, while the parties aim to generate a common representation distribution. Our experiments on three real-world datasets show that ADCOL achieves better accuracy and is much more communication-efficient than state-of-the-art federated learning algorithms on non-IID features. More importantly, ADCOL points out a promising research direction for collaborative learning.
Reject
This manuscript proposes an adversarial method to address non-IID heterogeneity in federated learning. Distinct from existing methods, the mitigation is implemented by training and locally communicating learned representations, so the metric of success is indistinguishability of representations across devices. There are four reviewers, all of whom agree that the method addresses an interesting and timely issue. However, reviewers are mixed on the paper score, and many raised concerns about communication overhead, apparent privacy costs, and the convergence of the adversarial methods. There is also some limited concern about novelty compared to existing methods. The authors provide a good rebuttal addressing these issues, based on experimental evidence (adding differential privacy), a comparison of communication overhead tradeoffs as they vary with model and sample size, and some existing convergence analysis. However, after reviews and discussion, the reviewers are unconvinced that the method is sufficiently robust to the concerns outlined. Authors are encouraged to address the highlighted technical concerns in any future submission of this work.
train
[ "Av3yKRRH0V", "xQ2m8XkMTso", "iH7sqsnxZ4M", "u1Rx5Ox-gJX", "ywdd98B8JXG", "HDohywAM47T", "jvCbicaJ9QE", "yD6U3LUd_Rh", "E5kWkGjXfPG", "P5hYHKg6nLq", "uVnDlZVPf8g", "jdPmPz7U12X" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your comments. We think you may overlook that the raw data are not allowed to exchange in the federated setting.\n* In FADA or DANN, one can simply let the target domain be the mixed sources.\n\n> We cannot let the target domain be the mixed sources. Note that the target generator in FADA needs to be u...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 8, 3, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5, 4 ]
[ "iH7sqsnxZ4M", "iclr_2022_EgkZwzEwciE", "ywdd98B8JXG", "jdPmPz7U12X", "uVnDlZVPf8g", "P5hYHKg6nLq", "E5kWkGjXfPG", "iclr_2022_EgkZwzEwciE", "iclr_2022_EgkZwzEwciE", "iclr_2022_EgkZwzEwciE", "iclr_2022_EgkZwzEwciE", "iclr_2022_EgkZwzEwciE" ]
iclr_2022_gggnCQBT_iE
Connecting Data to Mechanisms with Meta Structual Causal Model
Recent years have seen impressive progress in theoretical and algorithmic developments of causal inference across various disciplines in science and engineering. However, there are still some unresolved theoretical problems, especially for cyclic causal relationships. In this article, we propose a meta structural causal model (Meta-SCM) framework inspired by understanding causality as information transfer. A key feature of our framework is the introduction of the concept of \emph{active mechanisms} to connect data and the collection of underlying causal mechanisms. We show that the Meta-SCM provides a novel approach to address the theoretical complications of modeling cyclic causal relations. In addition, we propose a \emph{sufficient activated mechanisms} assumption, and explain its relationship with existing assumptions in causal representation learning. Finally, we summarize the main idea of the Meta-SCM framework with an emphasis on its theoretical and conceptual novelty.
Reject
This paper proposes a meta-structural causal model framework, to increase the representation capability of structural equation models. It also considers how to connect data to mechanisms. The paper is conceptually interesting. However, on the technical side, reviewers feel that without supporting proofs or empirical experiments, it is hard to justify the correctness of the proposal and judge its applicability to real-world problems. As authors claimed in their response, "it is our future work of interest to code our proposed framework into a working system and validate it in a proper setting given its early stage status on research in modeling causal cycles." I think some future version of the paper might be a great contribution to the field if a working system were included.
train
[ "wi0D1ipYfqr", "9_6hV2Kri_", "zvWNk1eKl0J", "wtSGcudVMiY", "VlUyrOaXH8_", "0aZgTNhM73", "0ubBIm1AJOE", "VFJ_hN9cX8B", "keB8K5CaLqs", "qvxxEaGeSDh", "cM0T3pyPnSm", "f1PzpVy6_Ky", "27Ack46--_3", "GgMsn8KAte", "2uwimh_reoY", "F3kaintDm2e" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer unQi, \n\n**Sincerely thanks** for you time and review, though which is not constructive enough. We have to **clarify our statement.**\n\nWe **only** evidently say \"you are a non-expert **in a particular research problem**\", which is true for everyone on earth, but still you might be able to be v...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "9_6hV2Kri_", "zvWNk1eKl0J", "iclr_2022_gggnCQBT_iE", "VlUyrOaXH8_", "VFJ_hN9cX8B", "0ubBIm1AJOE", "qvxxEaGeSDh", "keB8K5CaLqs", "cM0T3pyPnSm", "2uwimh_reoY", "27Ack46--_3", "F3kaintDm2e", "GgMsn8KAte", "iclr_2022_gggnCQBT_iE", "iclr_2022_gggnCQBT_iE", "iclr_2022_gggnCQBT_iE" ]
iclr_2022_i8d2kdxii1L
$p$-Laplacian Based Graph Neural Networks
Graph neural networks (GNNs) have demonstrated superior performance for semi-supervised node classification on graphs, as a result of their ability to exploit node features and topological information simultaneously. However, most GNNs implicitly assume that the labels of nodes and their neighbors in a graph are the same or consistent, which does not hold in heterophilic graphs, where the labels of linked nodes are likely to differ. Hence, when the topology is non-informative for label prediction, ordinary GNNs may work significantly worse than simply applying multi-layer perceptrons (MLPs) on each node. To tackle the above problem, we propose a new $p$-Laplacian based GNN model, termed as $^p$GNN, whose message passing mechanism is derived from a discrete regularization framework and could be theoretically explained as an approximation of a polynomial graph filter defined on the spectral domain of $p$-Laplacians. The spectral analysis shows that the new message passing mechanism works simultaneously as low-pass and high-pass filters, thus making $^p$GNNs effective on both homophilic and heterophilic graphs. Empirical studies on real-world and synthetic datasets validate our findings and demonstrate that $^p$GNNs significantly outperform several state-of-the-art GNN architectures on heterophilic benchmarks while achieving competitive performance on homophilic benchmarks. Moreover, $^p$GNNs can adaptively learn aggregation weights and are robust to noisy edges.
Reject
This work studies a variant of a message-passing scheme, aiming to improve the effectiveness of GNNs on heterophilic graphs, as well as improving their stability to noise. The authors provide a new architecture, called $p$-Laplacian message passing, as well as some theoretical analysis and empirical evaluation. Reviewers highlighted several positive aspects of this work, such as the general idea of considering $p$-Laplacians, as well as the extensive empirical evaluation. However, during the review discussions, several important issues arose, namely important concerns regarding the theoretical contributions, as well as concerns about calibrating the baselines in some empirical evaluations. Overall, the AC is of the opinion that this paper requires a further iteration before it can be considered for publication, and encourages the authors to take the time to address the comments raised by the reviewers.
train
[ "LKPNjQxKLiT", "Ow6-RuEpM9", "9zFoAVOf64", "RYwFAPNFvob", "ONhEmAVZSzv", "gc38hVg3aGm", "j_jb6dg7GX7", "zvbh5P8Mid6", "0ZaxSrrhJpw", "erRpy4wee-B", "Gr9lWJ_XL2B", "hd8T2QfIbYB", "PA6ScHfe_Md", "GVG57wEjFhj", "dILSEKY2nOH", "G6W9yF0mm11", "BTe-jup53Hb", "NwmwZhHIKAz", "GfeDzmSaAx"...
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author...
[ " Dear Reviewer:\n\nThank you again for your valuable comments! We are wondering if your concerns have been addressed properly. Please let us know if you have any further questions.\n\nBest,\n\nThe authors.", " Dear Reviewer:\n\nThanks again for your valuable comments! It has been very helpful for improving the...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 8, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "dILSEKY2nOH", "zvbh5P8Mid6", "cgYHYECo-9I", "CT7OVmNS2EG", "0ZaxSrrhJpw", "k2qU-Vb8dkU", "zvbh5P8Mid6", "ONhEmAVZSzv", "erRpy4wee-B", "N7Sxf5P5_7R", "cgYHYECo-9I", "CT7OVmNS2EG", "iclr_2022_i8d2kdxii1L", "iclr_2022_i8d2kdxii1L", "G6W9yF0mm11", "BTe-jup53Hb", "NwmwZhHIKAz", "0OOuZu...
iclr_2022_2g9m74He1Ky
Spatio-temporal Disentangled representation learning for mobility prediction
Spatio-temporal (ST) prediction tasks like mobility forecasting are of great significance to traffic management and public safety. An increasing number of works have been proposed for mobility forecasting problems recently, and they typically focus on better extraction of features from the spatial and temporal domains. Although prior works show promising results on more accurate predictions, they still struggle to characterise and separate the dynamic and static components, making it difficult to make further improvements. Disentangled representation learning separates the learnt latent representation into independent variables associated with semantic factors. It offers a better separation of the spatial and temporal features, which could improve the performance of mobility forecasting models. In this work, we propose a VAE-based architecture for learning the disentangled representation from real spatio-temporal data for mobility forecasting. Our deep generative model learns a latent representation that (i) separates the temporal dynamics of the data from the spatially varying component and generates effective reconstructions; (ii) is able to achieve state-of-the-art performance across multiple spatio-temporal datasets. Moreover, we investigate the effectiveness of our method by eliminating the non-informative features from the learnt representations, and the results show that models can benefit from this operation.
Reject
The paper proposed to learn a disentangled representation of spatiotemporal mobility data using a VAE-based architecture, in order to separate spatial and temporal dependencies. This is an interesting and relevant problem, but the reviewers found the paper to be weak in motivation and empirical evaluations.
train
[ "nhSjaCvu01", "XC4f5vhbb8l", "HT4PHkPCyPn", "nSwCiAi7gKr", "5f1HNsOte9h", "m8DiY_3KOou", "C93ftWJC40M", "CgyKjFFFUI4" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Q1. Introduction: \nThank you for your comment. In the introduction, we wrote about the difficulty in characterising the dynamic and spatial components since the current state-of-the-art methods claimed that they can better extract long-range spatial and temporal features, but they failed to demonstrate why their...
[ -1, -1, -1, -1, 5, 3, 3, 3 ]
[ -1, -1, -1, -1, 3, 4, 5, 5 ]
[ "CgyKjFFFUI4", "C93ftWJC40M", "m8DiY_3KOou", "5f1HNsOte9h", "iclr_2022_2g9m74He1Ky", "iclr_2022_2g9m74He1Ky", "iclr_2022_2g9m74He1Ky", "iclr_2022_2g9m74He1Ky" ]
iclr_2022_dvl241Sbrda
Unit Ball Model for Embedding Hierarchical Structures in the Complex Hyperbolic Space
Learning the representation of data with hierarchical structures in the hyperbolic space has attracted increasing attention in recent years. Due to the constant negative curvature, the hyperbolic space resembles tree metrics and captures the tree-like properties naturally, which enables the hyperbolic embeddings to improve over traditional Euclidean models. However, most real-world hierarchically structured data such as taxonomies and multitree networks have varying local structures and they are not trees, thus they do not ubiquitously match the constant curvature property of the hyperbolic space. To address this limitation of hyperbolic embeddings, we explore the complex hyperbolic space, which has variable negative curvature, for representation learning. Specifically, we propose to learn the embeddings of hierarchically structured data in the unit ball model of the complex hyperbolic space. The unit ball model based embeddings have a more powerful representation capacity to capture a variety of hierarchical structures. Through experiments on synthetic and real-world data, we show that our approach improves over the hyperbolic embedding models significantly. We also explore the competence of complex hyperbolic geometry on the multitree structure and 1-N structure.
Reject
The paper proposes to learn embeddings into complex hyperbolic space. This is an extension of the popular hyperbolic-space embeddings which have shown success on graph-like and tree-like data. Reviews and discussion mostly centered around the lack of clear motivation for the work (why complex hyperbolic spaces?) and the lack of a clear advantage over other manifold embedding methods that have varying curvature. The reviewers mentioned many questions and points that they thought the work should cover. There was also concern about the baselines against which the method was compared. There was not a consensus that the paper should be accepted, and no reviewer argued strongly for acceptance, even after the author response. As a result, I recommend that this paper not be accepted at this time. I expect a new version of this paper, incorporating this reviewer feedback and especially improving the explanation of the motivation, will be a good submission for a future conference.
test
[ "C8mLFrGiL_W", "U9hW1Pa8Ejk", "5qVm_CUEo4a", "MGBscTFOpI", "ahub2pCp9MW", "qjh2J9Y4P5W", "fX-Nv5oE32W", "IwqpF1qrZ4", "MNpdCjTFowC", "LMZjEGBbH69", "8sHBDc4dYk", "tljH9H7Yxvk", "DpGEtrXDuly", "q10Fext3vKm", "3J5OESTiTEK", "l5GzKiw18eR", "tHeABfrUe6", "LnX4Pg4Kf3q", "SUnnzkmWEm9",...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your reply. We have added the new results in our rebuttal revision. Please see our comment *Many thanks to all reviewers and Summary of the revisions* and check our updated revision for details. Thanks again for your constructive suggestions and your appreciation of our work!", " Thanks for the re...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 4 ]
[ "U9hW1Pa8Ejk", "pDOOzPBUr0", "MGBscTFOpI", "tljH9H7Yxvk", "fX-Nv5oE32W", "SUnnzkmWEm9", "iclr_2022_dvl241Sbrda", "SUnnzkmWEm9", "SUnnzkmWEm9", "LnX4Pg4Kf3q", "LnX4Pg4Kf3q", "LnX4Pg4Kf3q", "tHeABfrUe6", "pDOOzPBUr0", "pDOOzPBUr0", "pDOOzPBUr0", "iclr_2022_dvl241Sbrda", "iclr_2022_dv...
iclr_2022_oxC2IBx8OuZ
Towards Federated Learning on Time-Evolving Heterogeneous Data
Federated Learning (FL) is an emerging learning paradigm that preserves privacy by ensuring client data locality on edge devices. The optimization of FL is challenging in practice due to the diversity and heterogeneity of the learning system. Despite recent research efforts on improving the optimization of heterogeneous data, the impact of time-evolving heterogeneous data in real-world scenarios, such as changing client data or intermittent clients joining or leaving during training, has not been well studied. In this work, we propose Continual Federated Learning (CFL), a flexible framework, to capture the time-evolving heterogeneity of FL. CFL covers complex and realistic scenarios---which are challenging to evaluate in previous FL formulations---by extracting the information of past local datasets and approximating the local objective functions. Theoretically, we demonstrate that CFL methods achieve a faster convergence rate than FedAvg in time-evolving scenarios, with the benefit being dependent on approximation quality. In a series of experiments, we show that the numerical findings match the convergence analysis, and CFL methods significantly outperform the other SOTA FL baselines.
Reject
This paper proposes “Continual Federated Learning (CFL)” to study time-evolving heterogeneous data. To do this, the authors introduce time-drift to capture data heterogeneity across time. The authors also present some preliminary convergence results. Finally, the authors carry out numerical experiments in time-varying and heterogeneous settings. The reviewers identified the following strengths: (1) combining FL and CL is interesting, (2) the development of a new algorithm and providing some initial analysis is a good step. They also identified weaknesses as follows: (1) limited technical novelty as the use of a replay buffer is quite standard, (2) cumbersome and not easy to interpret results, (3) lack of time-evolving patterns with a common component, (4) lack of different metrics that demonstrate how the algorithm is able to maintain accuracy as time-shifts occur, (5) the use of questionable assumptions. The reviewers had a very bimodal view, with two advocating acceptance with a score of 8 and two advocating rejection, and neither group changed their opinion, although the authors' thorough responses did alleviate the concerns IMO. My own reading of the paper is that this is an interesting paper working on an emerging area. However, I must agree with some of the reviewers that the final conclusions are not easy to interpret, and the assumptions are not fully motivated. After this is carried out, I think the novelty of the paper can also become much clearer. Therefore, I cannot strongly advocate acceptance of the paper in its current state given the scores. However, I very strongly encourage the authors to submit to a future ML venue after addressing the remaining comments of the reviewers. I would also like to commend the authors for a very strong rebuttal; sorry the final decision couldn't be more favorable given the borderline ratings and the aforementioned issues.
train
[ "E52aKb_JEkS", "CTUUJuIYQ8", "T0OgpDFfLin", "3AiW_ntXHJD", "OE7RbHk1wbx", "b5Wi0aP_ftu", "McEBkutezVU", "sdZUwy6SeX9", "XtKwUQiGlTK", "thUSeJ93a2Y", "O67adPkuzAb", "AzUkyU7IfPY", "hJDxUupU--p", "yjQbvZrmQSR", "cHTmjGbEkH", "kOa8vsxIygY", "78bcEUxD1R", "rygHq0BHGqd", "vjspsQ4az4",...
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_...
[ "This paper introduces the Continual Federated Learning framework that allows capturing time-evolving heterogeneity of FL. The authors introduce a new formulation of the problem and propose a carefully chosen approximation to work with the new problem. Also, the authors provide theoretical analysis for strongly con...
[ 8, -1, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ 3, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2022_oxC2IBx8OuZ", "yjQbvZrmQSR", "3AiW_ntXHJD", "sdZUwy6SeX9", "sdZUwy6SeX9", "sdZUwy6SeX9", "iclr_2022_oxC2IBx8OuZ", "ChS4Nb39tCZ", "ColNfppJ3dH", "ChS4Nb39tCZ", "ColNfppJ3dH", "McEBkutezVU", "ColNfppJ3dH", "E52aKb_JEkS", "ChS4Nb39tCZ", "ChS4Nb39tCZ", "ColNfppJ3dH", "iclr_2...
iclr_2022_36rU1ecTFvR
Can standard training with clean images outperform adversarial one in robust accuracy?
Deep learning networks have achieved great success in almost every field. Unfortunately, they are very vulnerable to adversarial attacks. A lot of researchers have devoted themselves to making networks robust. The most effective approach is adversarial training, where malicious examples are generated and fed to train the network. However, this incurs a large computational load. In this work, we ask: “Can standard training with clean images outperform adversarial one in robust accuracy?” Surprisingly, the answer is YES. This success stems from two innovations. The first is a novel loss function that combines the traditional cross-entropy with a feature smoothing loss that encourages the features in an intermediate layer to be uniform. The collaboration between these terms sets up the grounds for our second innovation, namely Active Defense. When a clean or adversarial image is fed into the network, the defender first adds some random noise, then induces this sample towards a new, smoother one by promoting feature smoothing. At that point, it can be classified correctly with high probability. Thus the perturbations carefully generated by the attacker can be diminished. While there is an inevitable clean accuracy drop, it is still comparable with others. The great benefit is that the robust accuracy outperforms most existing methods and is quite resilient to increases in the perturbation budget. Moreover, adaptive attackers also fail to generate effective adversarial samples, as the induced perturbations outweigh the initial ones imposed by an adversary.
Reject
This paper suggests a novel defense against adversarial perturbations where a loss term is added during training which enforces similar feature representations. At test time: i) noise is added, ii) the feature loss is minimized. The authors report excellent results against AutoAttack, but the problem is that AutoAttack expects a static, non-randomized defense. Neither is the case for the defense proposed in the present paper. Therefore, the evaluation with AutoAttack could significantly overestimate the actual robustness, and the evaluation of the paper is therefore not valid. Thus adaptive attacks are needed, which are tailored to the defense mechanism; see e.g. Carlini et al., On Evaluating Adversarial Robustness, https://arxiv.org/abs/1902.06705. As two reviewers noticed, the suggested "adaptive attack" in the paper does not properly attack the whole defense mechanism by unrolling the test-time optimization and additionally using EOT. Thus it is unclear at the moment whether the method is really robust. Moreover, the inference time is significantly increased, so it is questionable whether this approach is practically relevant. Therefore this paper is not ready for publication yet.
train
[ "WV4wRL4xnve", "gNrSIT1JdCH", "7nOSY0zh0d", "lsn5Ypckk2n", "mfKiUqSbzr", "BtMrCaYxhY", "cIwxYmOVYPL", "8fOSE05QPN1", "mbZbSdg0qdd", "J3A64HaLBCH", "Xvd9RTnXLZN", "qGpWzNHMxLK", "Ael4uX8jkQS", "4bAAtlc2AGU", "nOJxPKMQrM5", "JFvhpT5ijT0", "H0mMd5fRcZq", "5LpULtI_9sG", "poUcH4N9ZF9"...
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "o...
[ " I thank the authors for addressing my questions, but I am prone to keep my rating, considering that the mentioned two points are necessary to verify the proposed method. That is, \n- randomized defenses \n- optimization at inference time.\n\nThus, I suggest the authors conduct the adaptive attack to validate the ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 3, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 4 ]
[ "4bAAtlc2AGU", "nOJxPKMQrM5", "qGpWzNHMxLK", "mfKiUqSbzr", "BtMrCaYxhY", "cIwxYmOVYPL", "8fOSE05QPN1", "mbZbSdg0qdd", "iclr_2022_36rU1ecTFvR", "iclr_2022_36rU1ecTFvR", "Ael4uX8jkQS", "JFvhpT5ijT0", "poUcH4N9ZF9", "5LpULtI_9sG", "H0mMd5fRcZq", "iclr_2022_36rU1ecTFvR", "iclr_2022_36rU1...
iclr_2022_WRORN3GUCu
VISCOS Flows: Variational Schur Conditional Sampling with Normalizing Flows
We present a method for conditional sampling for pre-trained normalizing flows when only part of an observation is available. We derive a lower bound on the conditioning variable log-probability using Schur complement properties in the spirit of Gaussian conditional sampling. Our derivation relies on partitioning the flow's domain in such a way that the flow restrictions to subdomains remain bijective, which is crucial for the Schur complement application. Simulation from the variational conditional flow then amounts to solving an equality constraint. Our contribution is three-fold: a) we provide detailed insights on the choice of variational distributions; b) we discuss how to partition the input space of the flow to preserve the bijectivity property; c) we propose a set of methods to optimise the variational distribution. Our numerical results indicate that our sampling method can be successfully applied to invertible residual networks for inference and classification.
Reject
This paper proposes a technique to perform data imputation with normalizing flow defining a joint density between observed and unobserved variables. This is achieved by introducing a variational posterior over the missing variables which is parametrized in terms of the original model by using the Schur complement of the model's Jacobian over the hidden variables. The idea is interesting, but the proposed setup is quite complex and the experimental results are not conclusive. The quality of the results shown can likely be matched or surpassed with much simpler techniques. The paper would substantially benefit from more detailed experiments.
train
[ "ubgnJqfGU8", "3xAUtrIXaxe", "p8DwjCFEdL-", "ImD0Q5j5pmz", "zv6FYSGmYY", "YbJHOja674N", "Z2EMpqg1Rvl", "xDzQcaLkDtf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you to the authors for their response. Having considered it, I appreciate that there is value in not requiring the mask distribution to be specified at training time. However I don't think that this value is well-enough demonstrated in the experiments on image completion given the simple masks used (given t...
[ -1, -1, 5, -1, -1, -1, 3, 3 ]
[ -1, -1, 3, -1, -1, -1, 3, 4 ]
[ "ImD0Q5j5pmz", "zv6FYSGmYY", "iclr_2022_WRORN3GUCu", "Z2EMpqg1Rvl", "p8DwjCFEdL-", "xDzQcaLkDtf", "iclr_2022_WRORN3GUCu", "iclr_2022_WRORN3GUCu" ]
iclr_2022_7t_6BiC69a
Fieldwise Factorized Networks for Tabular Data Classification
Tabular data is one of the most common data-types in machine learning, however, deep neural networks have not yet convincingly outperformed classical baselines on such datasets. In this paper, we first investigate the theoretical connection between neural network and factorization machine techniques, and present fieldwise factorized neural networks (F2NN), a neural network architecture framework that is aimed for tabular classification. Our framework learns high-dimensional field representations by a low-rank factorization, and handles both categorical and numerical fields. Furthermore, we show that simply by changing our penultimate activation function, the framework recovers a range of popular tabular classification methods. We evaluate our method against state-of-the-art tabular baselines, including tree-based and deep neural network methods, on a range of tasks. Our findings suggest that our theoretically grounded but simple and shallow neural network architecture achieves as strong or better results than more complex methods.
Reject
The paper provides a deep learning technique aimed for tabular data via a unified view of factorization machines and other DNN approaches. The reviews are overall positive when discussing the provided technique, the motivation behind it and the writing. However, there are major concerns related to the experiments. The most dominant one is that of significance, meaning the advantage of the provided method when compared to existing literature. Other claims such as unclear details or different methods of reporting might be possible to resolve via minor edits, but this concern was not resolved in the rebuttal period. Before the paper can be published in a venue such as ICLR, it should provide a clearer comparison against previous works showing exactly where it improves upon them. At its current state, it doesn’t seem to be ready.
train
[ "aWl-ZuACEE", "8NYK_fV-Ckv", "ySi-lvj0PgU", "pBMBdVktOo2", "4YdU8y36op2", "MvA1gqg1J2n", "4oWwcji2Yh", "I0aRoMM_R_", "Nf6A6IDOGXS", "gY-pdHEiIwB", "LxUUqmgWPrV", "_dXrSwG-7OX", "kN2Whjy7AEw" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I have read author's rebuttal and other reviewer's responses. The main concern is still on its marginal performance improvement. The theoretical contribution is impressive to me, while I can also understand and accept if its board-line and got missed in the conference. Personally IMO, its slightly over acceptance...
[ -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, 6, 6, 3 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, 2, 3, 3 ]
[ "gY-pdHEiIwB", "I0aRoMM_R_", "4oWwcji2Yh", "MvA1gqg1J2n", "iclr_2022_7t_6BiC69a", "_dXrSwG-7OX", "4YdU8y36op2", "Nf6A6IDOGXS", "kN2Whjy7AEw", "LxUUqmgWPrV", "iclr_2022_7t_6BiC69a", "iclr_2022_7t_6BiC69a", "iclr_2022_7t_6BiC69a" ]
iclr_2022_FFGDKzLasUa
Stochastic Deep Networks with Linear Competing Units for Model-Agnostic Meta-Learning
This work addresses meta-learning (ML) by considering deep networks with stochastic local winner-takes-all (LWTA) activations. This type of network unit results in sparse representations from each model layer, as the units are organized into blocks where only one unit generates a non-zero output. The main operating principle of the introduced units lies in stochastic arguments, as the network performs posterior sampling over competing units to select the winner. Therefore, the proposed networks are explicitly designed to extract input data representations of sparse stochastic nature, as opposed to the currently standard deterministic representation paradigm. We posit that these modeling arguments, inspired by Bayesian statistics, allow for more robust modeling when uncertainty is high due to the limited availability of task-related training data; this is exactly the case with ML, which is the focus of this work. At training time, we rely on the reparameterization trick for Discrete distributions to perform reliable training via Monte-Carlo sampling. At inference time, we rely on Bayesian Model Averaging, which effectively averages over a number of sampled representations. As we experimentally show, our approach produces state-of-the-art predictive accuracy on standard few-shot image classification benchmarks; this is achieved without compromising computational efficiency.
Reject
This paper presents a meta-learning method where the standard ReLU activation units are replaced by stochastic LWTA units to learn data-driven sparse representations. Most of the reviewers like the idea of embedding the stochastic LWTA into a MAML framework. The initial assessment pointed out a lack of clarity in various places in the paper. The authors did a good job of clarifying the paper during the rebuttal period. Experiments demonstrated competitive performance over existing meta-learning methods. Two of the reviewers raised their overall score to 5 (from 3). However, all reviewers have concerns about the incremental technical novelty and feel that the presentation should be improved to make the paper clearer and friendlier to readers. While the idea is interesting, which is backed up by experiments, the paper is not ready for publication at the current stage. I encourage the authors to resubmit the paper after addressing these concerns.
train
[ "AhB8PgRSyc1", "6i2kZnwj8Ws", "7H9DpQuD-M", "j6BwpVQ1iNJ", "kIeRYdueyvL", "ARBpPTI_aJL", "gDczp-hpIPu", "7ifgEiGv2p6", "ZkQ-_CM76Rv", "1UFZrNYCnis", "XrRTGpjN2Aq", "t4WANsQfv1L", "um3NpaOJ9l4", "wcON-S9mRKg", "RcuOLck-V8O", "Y6NBFDfOjqe" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a stochastic local winner-takes-all (SLWTA) approach to learn data-driven (stochastic) sparsity. This is motivated by the limited availability of training data, especiially in few-shot meta learning scenarios. The authors propose a meta-learning algorithm for their SLWTA approach. As this is a ...
[ 5, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5 ]
[ 4, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2022_FFGDKzLasUa", "1UFZrNYCnis", "j6BwpVQ1iNJ", "gDczp-hpIPu", "XrRTGpjN2Aq", "iclr_2022_FFGDKzLasUa", "XrRTGpjN2Aq", "ARBpPTI_aJL", "ARBpPTI_aJL", "AhB8PgRSyc1", "iclr_2022_FFGDKzLasUa", "iclr_2022_FFGDKzLasUa", "RcuOLck-V8O", "Y6NBFDfOjqe", "iclr_2022_FFGDKzLasUa", "iclr_2022_...
iclr_2022_1Zxv7TdLquI
YOUR AUTOREGRESSIVE GENERATIVE MODEL CAN BE BETTER IF YOU TREAT IT AS AN ENERGY-BASED ONE
Autoregressive generative models are commonly used, especially for those tasks involving sequential data. They have, however, been plagued by a slew of inherent flaws due to the intrinsic characteristics of chain-style conditional modeling (e.g., exposure bias or lack of long-range coherence), severely limiting their ability to model distributions properly. In this paper, we propose a unique method for training the autoregressive generative model that takes advantage of a well-designed energy-based learning objective. We show that our method is capable of alleviating the exposure bias problem and increasing temporal coherence by imposing a constraint which fits joint distributions at each time step. Besides, unlike former energy-based models, we estimate energy scores based on the underlying autoregressive network itself, which does not require any extra network. Finally, thanks to importance sampling, we can train the entire model efficiently without requiring an MCMC process. Extensive empirical results, covering benchmarks like language modeling, neural machine translation, and image generation, demonstrate the effectiveness of the proposed approach.
Reject
This paper proposes a method to train autoregressive models that takes advantage of a well-designed energy-based learning objective. With importance sampling, the model can be trained efficiently without requiring MCMC sampling. Experiments are conducted to verify the effectiveness of the proposed method. The idea is interesting and well-motivated, but the experiments need to be improved. Reviewer FnWE's major concerns include limited novelty, lack of discussion of closely related works, and insufficient experiments, and they recommend rejecting the paper by assigning a rating of 3. The rebuttal doesn't address their concerns. Reviewer in11 is concerned with the computational cost and training instability due to the extra EBM module and has a few unclear technical details that need to be clarified. The authors' reply, along with additional experiments during the rebuttal, partially addresses the concerns of Reviewer in11, who eventually increases the rating to 6. Reviewer AQxn's major concern is also about the lack of sufficient comparison with other relevant energy-based models. Reviewer DZsJ pointed out that more insightful analysis of the model is missing in the experiments. Even though the authors provide additional experiments for Reviewer DZsJ, the reviewer is not satisfied with the feedback because the additional results are not supportive of the claims made in the paper, and ends up with a rating of 6. Reviewer SjXn's concerns include the lack of comparison with relevant works and the unclear motivation of the design of the joint distribution. After the rebuttal, Reviewer SjXn's concerns remain, and they assign a rating of 5 to the paper. The overall rating of the paper after the rebuttal is marginally below the acceptance threshold. Even though this paper proposes an interesting idea, the reviewers' comments are not well addressed. As a result, the AC cannot recommend accepting the paper. The AC urges the authors to revise their paper according to the comments from the reviewers, and resubmit their work to a future venue.
test
[ "Y5CJnVe6t7F", "k35ZhYL0rB4", "d11zweg1dSx", "mPc7RAjL3Q3", "-KWDHtG12Im", "UtJ9AlY6f-p", "bZqePnj6iXI", "4a79httL-NB", "66Tv6EfiqVW", "RoDifdocTJG", "9YAPwr9gQNZ", "kvRLxbhq97K", "-k8Ddf8QOD5", "fL7_gjcUCnF", "F2NssRQaQ_J", "JLWIZmMpw7b", "w_w58MlNRF", "JEbtLKRaSHD", "60QDnSxa_V...
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "public", ...
[ " We will report the results of comparison between our method and residual EBM[8] once current experiments finished, and update it in the final version of our paper. The experiment in [8] requires rather heavy overheads (they have applied 8 DGX nodes and each with 8 Nvidia V100s) and is very time-consuming.\n\n[8] ...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 3, 4 ]
[ "k35ZhYL0rB4", "fL7_gjcUCnF", "-KWDHtG12Im", "4a79httL-NB", "-k8Ddf8QOD5", "bZqePnj6iXI", "F2NssRQaQ_J", "XZY0qRkpPsW", "iclr_2022_1Zxv7TdLquI", "60QDnSxa_Vj", "lSfEjl4xBDX", "QJi3xnHQdGZ", "QJi3xnHQdGZ", "gMfGYBGgsaX", "lSfEjl4xBDX", "gMfGYBGgsaX", "XZY0qRkpPsW", "66Tv6EfiqVW", ...
iclr_2022_ZDYhm_o8MX
Neural Manifold Clustering and Embedding
Given a union of non-linear manifolds, non-linear subspace clustering or manifold clustering aims to cluster data points based on manifold structures and also learn to parameterize each manifold as a linear subspace in a feature space. Deep neural networks have the potential to achieve this goal under highly non-linear settings given their large capacity and flexibility. We argue that achieving manifold clustering with neural networks requires two essential ingredients: a domain-specific constraint that ensures the identification of the manifolds, and a learning algorithm for embedding each manifold into a linear subspace in the feature space. This work shows that many constraints can be implemented by data augmentation. For subspace feature learning, the Maximum Coding Rate Reduction (MCR$^2$) objective can be used. Putting them together yields Neural Manifold Clustering and Embedding (NMCE), a novel method for general-purpose manifold clustering, which significantly outperforms autoencoder-based deep subspace clustering and achieves state-of-the-art performance on several important benchmarks. Further, on more challenging natural image datasets, NMCE can also outperform other algorithms specifically designed for clustering. Qualitatively, we demonstrate that NMCE learns a meaningful and interpretable feature space. As the formulation of NMCE is closely related to several important self-supervised learning (SSL) methods, we believe this work can help us build a deep understanding of SSL representation learning.
Reject
The paper received a majority vote for rejection, although the author response successfully convinced one reviewer to increase his/her score from 5 to 6. I have read all the materials of this paper, including the manuscript, appendix, comments, and responses. Based on the collected information from all reviewers and my personal judgement, I make the recommendation on this paper: *rejection*. Here are the comments that I summarized, which include my opinion and evidence. **Presentation** The presentation of this paper needs considerable further improvement. Several reviewers and I had difficulty understanding the motivation and challenges of this paper. It seems that Section 3.5 is the novel part of this paper, but I failed to catch its points. **Contribution** Two contribution points were claimed in this paper. (1) The combination of data augmentation and MCR$^2$. Without knowing the challenges in this paper, it is difficult to evaluate this point. Based on my current understanding (the presentation heavily affects my understanding), this point is very incremental. (2) The proposed method achieved state-of-the-art performance. This point is problematic. I will explain below. **Related Work** The authors failed to notice a huge body of manifold learning work and contrastive clustering work. Some state-of-the-art methods are not included for comparison. **Experimental Evaluation** (1) Lack of state-of-the-art methods; (2) no standard deviations; (3) the experimental results are incomplete; and (4) it seems that the proposed method only achieved high performance on CIFAR-10 and CIFAR-20. I am not asking the authors to achieve the best performance on all the datasets. Everyone knows no algorithm always wins. But the authors should provide some analysis of the inferior performance to better understand the model. No objection from reviewers was raised against this recommendation.
train
[ "gabH8toklWd", "n4FPpcxRMhX", "p-rNJjgRhzX", "bFKjFhGDJ9", "KAwGd3ERGLr", "VcrU_2jWvw", "F_lrU55Oh0o", "oVc9YXlGchu", "FIjVhJvD_UW", "Znvoc3A76ZA", "Vke2E2KLBll", "gDxKdfOKd2", "UZSY_7ST0q_", "HD5oVGd9n0", "aqy5DuGDr-", "cgPBkuw0aZ", "VkYKkA0wOf", "YpbnEuRNN6" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Hi, \n\nIn case you missed our response message. We have added various experiments as requested, and reponded to many of your questions. Please kindly let us know if our response is satisfactory. \n\nMany thanks.", " Thanks for your reply! \n \nTo your remaining question about noise scale being \"empirically c...
[ -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 4 ]
[ "aqy5DuGDr-", "bFKjFhGDJ9", "KAwGd3ERGLr", "UZSY_7ST0q_", "gDxKdfOKd2", "iclr_2022_ZDYhm_o8MX", "oVc9YXlGchu", "Vke2E2KLBll", "iclr_2022_ZDYhm_o8MX", "YpbnEuRNN6", "VcrU_2jWvw", "VkYKkA0wOf", "cgPBkuw0aZ", "aqy5DuGDr-", "iclr_2022_ZDYhm_o8MX", "iclr_2022_ZDYhm_o8MX", "iclr_2022_ZDYhm...
iclr_2022_VINWzIM6_6
Contrastive Representation Learning for 3D Protein Structures
Learning from 3D protein structures has gained a lot of attention in the fields of protein modeling and structural bioinformatics. Unfortunately, the number of available structures is orders of magnitude lower than the number of available protein sequences. Moreover, this number is reduced even more when only annotated protein structures are considered. This makes the training of existing models difficult and prone to overfitting. To address this limitation, we introduce a new representation learning framework for 3D protein structures. Our framework uses unsupervised contrastive learning to learn meaningful representations of protein structures making use of annotated and un-annotated proteins from the Protein Data Bank. We show how these representations can be used to directly solve different tasks in the field of structural bioinformatics, such as protein function and protein structural similarity prediction. Moreover, we show how fine-tuned networks, pre-trained with our algorithm, lead to significantly improved task performance.
Reject
In this paper, the authors adapt unsupervised contrastive learning to the problem of representation learning for proteins from 3D structure, using sub-structure sampling for the data transformation. The reviewers have concerns that the application tasks used for evaluation are not particularly impactful, and additionally that they likely do not require protein representations that capture more nuanced information. There are also concerns about the clarity of the manuscript and the novelty of the technical approach.
train
[ "o5Q5dmYKXmc", "MoFXUphM1Ik", "s8fDXQtKub7", "7Hd5yOti0ul", "gGOoDA7XJJT", "5t8f2aZOW9O", "kUnjY6zZ09", "kmNn9fFDUNB", "fLRhnhokEmT", "uexdDyN31Zw", "YW5TJavGz7u", "0wtd5zLCiMy", "bHC9eIwUAFk" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We referred the reviewer to the task of remote homology detection since the detection of domains based on sequence similarity, as the reviewer suggested, is by comparing sequences to homologous templates (Wang et al., Protein domain identification methods and online resources). However, these methods fail when no...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "MoFXUphM1Ik", "5t8f2aZOW9O", "fLRhnhokEmT", "YW5TJavGz7u", "0wtd5zLCiMy", "bHC9eIwUAFk", "uexdDyN31Zw", "iclr_2022_VINWzIM6_6", "uexdDyN31Zw", "iclr_2022_VINWzIM6_6", "iclr_2022_VINWzIM6_6", "iclr_2022_VINWzIM6_6", "iclr_2022_VINWzIM6_6" ]
iclr_2022_8rpv8g3zfF
Federated Learning with GAN-based Data Synthesis for Non-IID Clients
Federated learning (FL) has recently emerged as a popular privacy-preserving collaborative learning paradigm. However, it suffers from the non-IID (independent and identically distributed) data among clients. In this paper, we propose a novel framework, namely Synthetic Data Aided Federated Learning (SDA-FL), to resolve the non-IID issue by sharing differentially private synthetic data. Specifically, each client pretrains a local generative adversarial network (GAN) to generate synthetic data, which are uploaded to the parameter server (PS) to construct a global shared synthetic dataset. The PS is responsible for generating and updating high-quality labels for the global dataset via pseudo labeling with a confident threshold before each global aggregation. A combination of the local private dataset and labeled synthetic dataset leads to nearly identical data distributions among clients, which improves the consistency among local models and benefits the global aggregation. To ensure privacy, the local GANs are trained with differential privacy by adding artificial noise to the local model gradients before being uploaded to the PS. Extensive experiments evidence that the proposed framework outperforms the baseline methods by a large margin in several benchmark datasets under both the supervised and semi-supervised settings.
Reject
This paper proposes an idea to address the non-IID issue that is well-known in federated learning. After the discussion with the authors, some concerns remain about the proposed approach. First and foremost, training a local GAN at each client can be computationally and statistically demanding, which limits the practicality of their approach. Secondly, there has been other work that aims to study the non-IID issues in federated learning, as suggested by the reviewers. The authors should consider citing some of the work in this literature and comparing the prior approaches with their GAN-based approach. Thirdly, there is a lack of a formal statement about the privacy guarantee in this paper. In particular, it seems that the privacy guarantee would only make sense in the cross-silo setting, in which each client has many users' data. If each client corresponds to a single user, it does not make sense to train a local GAN. The authors should consider elaborating on the privacy guarantee in the next revision.
test
[ "gZ4RBis5OgQ", "5ILDVcxP-JD", "yYqi1gMXb2", "YV8WJpOy-7h", "3EXO0BwXOHe", "-bmmPHEPQ98", "m4cMHUCTSqZ", "cDQAkY7pGwD", "gTuuOyrskbw", "oKXnUPaQ_4p", "nYf-dKizJL", "Yc1QmHFLZx", "PrMM5qwDEGI", "hjhZLQkwjrs", "a6EJk3uU6_8", "Nwzt8uPCJVq", "3AE9WtmbhOT" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for the comment.\n\nFor your concern about the privacy leakage, firstly in SDA-FL, all the clients only send their generated data rather than the generators to the server, which avoids the privacy issues introduced by them. To protect the clients' information in the generated data, we adopt the differen...
[ -1, -1, -1, 5, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5 ]
[ -1, -1, -1, 4, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "yYqi1gMXb2", "3EXO0BwXOHe", "Yc1QmHFLZx", "iclr_2022_8rpv8g3zfF", "nYf-dKizJL", "cDQAkY7pGwD", "iclr_2022_8rpv8g3zfF", "hjhZLQkwjrs", "Nwzt8uPCJVq", "iclr_2022_8rpv8g3zfF", "YV8WJpOy-7h", "Nwzt8uPCJVq", "3AE9WtmbhOT", "m4cMHUCTSqZ", "m4cMHUCTSqZ", "iclr_2022_8rpv8g3zfF", "iclr_2022_...
iclr_2022_Rj-x5_ej6B
Partial Information as Full: Reward Imputation with Sketching in Bandits
We focus on the setting of contextual batched bandit (CBB), where a batch of rewards is observed from the environment in each episode. But these rewards are partial-information feedbacks where the rewards of the non-executed actions are unobserved. Existing approaches for CBB usually ignore the potential rewards of the non-executed actions, resulting in feedback information being underutilized. In this paper, we propose an efficient reward imputation approach using sketching in CBB, which completes the unobserved rewards with the imputed rewards approximating the full-information feedbacks. Specifically, we formulate the reward imputation as a problem of imputation regularized ridge regression, which captures the feedback mechanisms of both the non-executed and executed actions. To reduce the time complexity of reward imputation on a large batch of data, we use randomized sketching for solving the regression problem of imputation. We prove that the proposed reward imputation approach obtains a relative-error bound for sketching approximation, achieves an instantaneous regret with an exponentially-decaying bias and a smaller variance than that without reward imputation, and enjoys a sublinear regret bound against the optimal policy. Moreover, we present two extensions of our approach, including the rate-scheduled version and the version for nonlinear rewards, which makes our approach more feasible. Experimental results demonstrated that our approach can outperform the state-of-the-art baselines on a synthetic dataset, the Criteo dataset, and a dataset from a commercial app.
Reject
In this paper the authors consider a contextual batched bandit setting where they rely on imputation in order to estimate the rewards of the non-executed actions in each batch. Even though the idea is quite interesting and can lead to new methods, there are still a lot of issues raised by the reviewers. In particular, part of the proof was incorrect (and the authors tried to fix it), but given the short time, the reviewers felt that this part should be rewritten and scrutinized further. Also, there are many suggestions by the reviewers that the authors need to apply in order to make this work publishable.
train
[ "ogCpE3f5TYP", "pI3jchtCfgj", "H43LFF5BLdT", "WBhVrweT9pr", "JBT-kqFrAHy", "Z5Obt-mU9LD", "RrHFhwIGsJ", "JzQrg8FfTM", "VmfZPcrI1SA", "s-BwyVNt9p6" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper addresses batched contextual bandits, with a fixed set of actions, and separate unknown parameters for each action. The authors propose an approach where the unobserved rewards (i.e. the rewards that would have been obtained if actions that hadn’t been selected for a given context) are imputed, and thes...
[ 5, -1, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2022_Rj-x5_ej6B", "H43LFF5BLdT", "WBhVrweT9pr", "JBT-kqFrAHy", "Z5Obt-mU9LD", "ogCpE3f5TYP", "s-BwyVNt9p6", "VmfZPcrI1SA", "iclr_2022_Rj-x5_ej6B", "iclr_2022_Rj-x5_ej6B" ]
iclr_2022_Ih7LAeOYIb0
Iterative Memory Network for Long Sequential User Behavior Modeling in Recommender Systems
Sequential user behavior modeling is a key feature in modern recommender systems, seeking to capture users' interest based on their past activities. There are two usual approaches to sequential modeling: Recurrent Neural Networks (RNNs) and the attention mechanism. As the user behavior sequence gets longer, the usual approaches encounter problems. RNN-based methods incur the problem of fast forgetting, making it difficult to model the user's interests from long ago. The self-attention mechanism and its variations such as the transformer structure have the unfortunate property of a quadratic cost with respect to the input length, which makes it difficult to deal with long inputs. The target attention mechanism, despite having only $O(L)$ memory and time complexity, cannot model intra-sequence dependencies. In this paper, we propose the Iterative Memory Network (IMN), an end-to-end differentiable framework for long sequential user behavior modeling. In our model, the target item acts as a memory trigger, continuously eliciting relevant information from the long sequence to represent the user's memory on the particular target item. In the Iterative Memory Update module, the model walks over the long sequence for multiple iterations and keeps a memory vector to memorize the content walked over. Within each iteration, the sequence interacts with both the target item and the current memory, for both target-sequence relation modeling and intra-sequence relation modeling. The memory is updated after each iteration. The framework incurs only $O(L)$ memory and time complexity while reducing the maximum length of network signal travelling paths to $O(1)$, a property that the self-attention mechanism only achieves at $O(L^2)$ complexity. Various designs of efficient self-attention mechanisms are at best $O(L\log L)$. Extensive empirical studies show that our method outperforms various state-of-the-art sequential modeling methods on both public and industrial datasets for long sequential user behavior modeling.
Reject
The paper proposes a new architecture named Iterative Memory Network (IMN) to encode long user behavior sequences for recommendations. Reviewers appreciate the clarity of the writing as well as the practicality and the O(L) complexity of the proposed architecture; however, they do raise questions about novelty. Different design choices employed in the paper are not well explained. The rebuttal was not able to convince the reviewers to accept the work at this venue, but reviewers do feel the paper could fly at an application-oriented venue.
val
[ "pwi0IvQsboY", "Hz4GkRp6ire", "axPSbcbXeqN", "9jkZS0tVNpY", "p3pOPnN5XNM", "E50XWOHTjdJ", "A-g_LZT7WX", "ZXSGMlLLhGO", "0rHNBmuCdrZ", "D17uL1aHIyY", "L347Oe6PxaB", "B1nDX9d02Fb", "8VymTprFInn", "eFGPN5Esev", "eTBbaf3QVa1", "i6-tQFr3yg", "C1t4dK9Qib3", "zirwMDGtS00", "VXr31VCIxVQ"...
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", ...
[ " I appreciate the detailed replies from the authors. Let me summarize what I learned:\n\nPros:\n* Explicit cross-features are an easy way to get high accuracy numbers, compared with latent low-rank approaches.\n* Within the realm of cross-features, the paper contains some interesting ideas to utilize the sparse / ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "axPSbcbXeqN", "9jkZS0tVNpY", "9jkZS0tVNpY", "p3pOPnN5XNM", "B1nDX9d02Fb", "_64-lK7NndV", "4m3uQHEpSni", "p3pOPnN5XNM", "L347Oe6PxaB", "iclr_2022_Ih7LAeOYIb0", "C1t4dK9Qib3", "zirwMDGtS00", "4m3uQHEpSni", "_64-lK7NndV", "D17uL1aHIyY", "VXr31VCIxVQ", "D17uL1aHIyY", "VXr31VCIxVQ", ...
iclr_2022_8qQ48aMXR_g
On Locality in Graph Learning via Graph Neural Network
Theoretical understanding of the learning process of graph neural networks (GNN) has been lacking. The common practice in GNN training is to adapt strategies from other machine learning families, despite the striking differences between learning non-graph and graph-structured data. This results in unstable learning performance (e.g., accuracy) for GNN. In this paper, we study how the training set in the input graph affects the performance of GNN. Combining the topology awareness of GNN and the dependence (topology) of data samples, we formally derive a structural relation between the performance of GNN and the coverage of the training set in the graph. More specifically, the distance of the training set to the rest of the vertexes in the graph is negatively correlated with the learning outcome of GNN. We further validate our theory on different graph data sets with extensive experiments. Using the derived result as guidance, we also investigate the initial data labelling problem in active learning of GNN, and show that locality-aware data labelling substantially outperforms the prevailing random sampling approach.
Reject
This paper analyses the generalization ability of graph neural networks in terms of the distance between the test data points and the training data points, where the labels of a part of the vertexes are observed as the training data and a test data point is selected from the remaining vertexes. The theoretical result indicates that if the training data "cover" all the vertexes of the graph, then the test accuracy will be better. This theoretical finding is supported by some numerical experiments. The problem this paper considers is interesting and would be worth investigating. However, the theoretical results presented in the paper are based on quite strong assumptions, and the statement of the theorem is not well exposed. - First, the paper assumes that a distortion map is obtained by training and that the training procedure can produce zero training errors. Although these assumptions are far from obvious in practice, the paper lacks justification of these assumptions. Hence, these assumptions seem to be made only for the sake of proof. - Second, the constants appearing in the theorems are not correctly specified. How different constants are correlated is not properly exposed. As for the experiments, they are not so strong: only Cora is used in the experiments, and the training data size is small. For these reasons, this paper is not sufficiently mature to appear in ICLR.
train
[ "NdtTV4LyuwA", "DD0z0-OcCz0", "RkLK9KcNx8N", "UV8vqFW_Q5", "yWOdjSss86B", "hjE6vNHN1XY", "d53FgjgvhJm", "Dyp4qTSc_Wf", "A30gFUlaeOW", "7obc8B6KWo3", "QyWGFlUhPvM", "NhbHIaFwjkD", "7yPRZJF5scQ", "Mf4RlhkUVCl" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for providing a more clear revision regarding reviewer's comment. As stated by other reviewers as well, we found the results of this paper are based on some very strict assumptions, e.g. homophily and $\\epsilon=0$. The author modify the theorem 1 and 2 by replace $p$ with distortion rate $\\alpha$. Howeve...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 6, 1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "yWOdjSss86B", "7obc8B6KWo3", "Dyp4qTSc_Wf", "iclr_2022_8qQ48aMXR_g", "hjE6vNHN1XY", "A30gFUlaeOW", "Mf4RlhkUVCl", "7yPRZJF5scQ", "NhbHIaFwjkD", "QyWGFlUhPvM", "iclr_2022_8qQ48aMXR_g", "iclr_2022_8qQ48aMXR_g", "iclr_2022_8qQ48aMXR_g", "iclr_2022_8qQ48aMXR_g" ]
iclr_2022_wMXYbJB-gX
Towards Understanding Label Smoothing
Label smoothing regularization (LSR) is a prevalent component for training deep neural networks and can improve the generalization of models effectively. Although it achieves empirical success, the theoretical understanding about the power of label smoothing, especially about its influence on optimization, is still limited. In this work, we, for the first time, theoretically analyze the convergence behaviors of stochastic gradient descent with label smoothing in deep learning. Our analysis indicates that an appropriate LSR can speed up the convergence by reducing the variance in gradient, which provides a theoretical interpretation on the effectiveness of LSR. Besides, the analysis implies that LSR may slow down the convergence at the end of optimization. Therefore, a novel algorithm, namely Two-Stage LAbel smoothing (TSLA), is proposed to further improve the convergence. With the extensive analysis and experiments on benchmark data sets, the effectiveness of TSLA is verified both theoretically and empirically.
Reject
The authors proposed a two-stage algorithm for exploiting label smoothing and provided some analysis based on how label smoothing may have reduced the variance in the stochastic gradient. While the authors provided substantial experiments to justify their work (with additional ones during the response stage), none of the reviewers was very excited in the end, for obvious reasons perhaps: (a) the two-stage algorithm is a straightforward combination of existing practices (first run with label smoothing and then run without label smoothing), without any new, interesting insight from the authors' side; (b) the analysis is a direct consequence of the authors' assumptions. Basically, if label smoothing reduces variance, SGD would converge faster and vice versa, which is nothing surprising or insightful. The key is to understand when and how any particular way to smooth the label would lead to significant reduction of the variance, which the authors did not provide any guidance or insight other than offering some empirical results. Overall, we do not believe this work, in its current form, adds significant value to our understanding of label smoothing.
val
[ "b0rHmbRRLY5", "Fr4Ij_wpqsI", "Ymo0Z5xp8b3", "o6HHMGZOeE", "Z7hAHWdKuYt", "q8bF32mDl4y", "VDAJ4thfgHe", "VGjNO-JQmFu", "7Sas-ys1nf" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer,\n\nWe would like to thank all reviewers for their precious time, their feedback, and their constructive suggestions. We tried our best to clarify the main concerns and we have updated the draft (all changes in the revision are marked in red for convenience). \n\nSpecifically, we added the results o...
[ -1, -1, -1, -1, -1, 5, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, 3, 3, 4, 4 ]
[ "iclr_2022_wMXYbJB-gX", "7Sas-ys1nf", "VGjNO-JQmFu", "q8bF32mDl4y", "VDAJ4thfgHe", "iclr_2022_wMXYbJB-gX", "iclr_2022_wMXYbJB-gX", "iclr_2022_wMXYbJB-gX", "iclr_2022_wMXYbJB-gX" ]
iclr_2022_V0LnyelKACB
Accelerating HEP simulations with Neural Importance Sampling
Virtually all high-energy-physics (HEP) simulations for the LHC rely on Monte Carlo using importance sampling by means of the VEGAS algorithm. However, complex high-precision calculations have become a challenge for the standard toolbox. As a result, there has been keen interest in HEP for modern machine learning to power adaptive sampling. Despite previous work proving that normalizing-flow-powered neural importance sampling (NIS) sometimes outperforms VEGAS, existing research has still left major questions open, which we intend to solve by introducing ZüNIS, a fully automated NIS library. We first show how to extend the original formulation of NIS to reuse samples over multiple gradient steps, yielding a significant improvement for slow functions. We then benchmark ZüNIS over a range of problems and show high performance with limited fine-tuning. This is crucial for ZüNIS to be a mature tool for the wider HEP public. We outline how the library allows non-experts to employ it with minimal effort, an essential condition to widely assess the value of NIS for LHC simulations.
Reject
The paper considers neural importance sampling (that is, importance sampling with a trained flow proposal) and its application to high-energy physics. The two contributions of the paper are: (a) a methodological improvement in the training of the proposal; (b) a description of a software library that implements the framework. All reviewers were critical of the paper and recommended rejection. The main issue raised was that the methodological contribution was not novel or significant enough, and not sufficiently evaluated. The authors disagreed with the reviewers that the methodological contribution was not significant enough, but they acknowledged that the first version of the paper did not present the contribution clearly; consequently, they submitted a heavily revised second version following the reviewers' feedback. Although it seems that the second version is an improvement over the first one, it's clear that the paper requires a second round of reviewing to ascertain whether it satisfies the requirements for acceptance. At this stage, the consensus among reviewers remains that the paper should be rejected. For that reason, I cannot recommend acceptance to ICLR. I sincerely hope the reviewers' feedback will be useful to the authors for a future submission to a different venue.
train
[ "0OrMLz-OSn", "PXYaHdT1g--", "BGuwDeOQ_S", "RQClelT_lB8", "W8bTU8xjgGe", "SQE9IxMPEm", "sn7zI_1lA9B", "Gh0MWaI7vB", "6HRNVgUHWK4", "aUkzj2ydz9y", "LkrPFin2GAW", "Uk7lMBvG7IY" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " OK, the experiment results are more clear for me. But the description and discussion on it in the manuscript are too brief. In addition, after reading the rebuttal and comments from other reviewers, I still think the significance of this work is marginal and \"not good enough\" for ICLR.", " We thank the review...
[ -1, -1, -1, 3, -1, -1, -1, -1, -1, 3, 3, 5 ]
[ -1, -1, -1, 3, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "PXYaHdT1g--", "BGuwDeOQ_S", "sn7zI_1lA9B", "iclr_2022_V0LnyelKACB", "aUkzj2ydz9y", "LkrPFin2GAW", "RQClelT_lB8", "Uk7lMBvG7IY", "iclr_2022_V0LnyelKACB", "iclr_2022_V0LnyelKACB", "iclr_2022_V0LnyelKACB", "iclr_2022_V0LnyelKACB" ]
iclr_2022_4JlwgTbmzXQ
EqR: Equivariant Representations for Data-Efficient Reinforcement Learning
We study different notions of equivariance as an inductive bias in Reinforcement Learning (RL) and propose new mechanisms for recovering representations that are equivariant to both an agent’s action, and symmetry transformations of the state-action pairs. Whereas prior work on exploiting symmetries in deep RL can only incorporate predefined linear transformations, our approach allows for non-linear symmetry transformations of state-action pairs to be learned from the data itself. This is achieved through an equivariant Lie algebraic parameterization of state and action encodings, equivariant latent transition models, and the use of symmetry-based losses. We demonstrate the advantages of our learned equivariant representations for Atari games, in a data-efficient setting limited to 100k steps of interactions with the environment. Our method, which we call Equivariant representations for RL (EqR), outperforms many previous methods in a similar setting by achieving a median human-normalized score of 0.418, and surpassing human-level performance on 8 out of the 26 games.
Reject
This paper proposes to learn a latent space representation such that some linear equivariance and symmetry constraints are respected in the latent space, with the goal to improve sample efficiency. One core idea is that the latent space is also the same as the space of linear transformation used in the constraints, which is shown to simplify some of the mathematical derivations. Experiments on the Atari 100K benchmark demonstrate a statistical improvement over the SPR baseline when using the SE(2) group of linear transformations as latent space. Following the discussion period, most reviewers were in favor of acceptance. However, one reviewer remained unconvinced, and after carefully reading the paper, I actually share the same concerns, i.e., that it is unclear under which conditions the proposed approach actually works, and what makes it work. I believe that, as a research community, we should value understanding over moving the needle on benchmarks, especially when proposing such a complex method as this one (see Fig. 5). More specifically: 1. The method is only evaluated on Atari games, showing some improvements when using SE(2), and arguing that there are corresponding symmetries in such games. There is however no analysis demonstrating (or even hinting at the fact) that the proposed technique is actually learning to take advantage of such symmetries (NB: I had a quick look at the animation added by the authors in the supplementary material, but I do not see if/how they help on this point). Even if analyzing representations on Atari may be tricky, I believe that given the motivation of this new algorithm, it *must* be evaluated on some toy example (e.g., the pendulum mentioned throughout the paper) to validate that it is learning what we want it to learn (although I also agree with the authors that experimenting on a more complex benchmark like Atari is equally important). 2. The idea of embedding states into the same space as transformations is interesting, and brings some advantages when writing down equations, as demonstrated by the authors. However, there is no justification besides mathematical convenience, and it doesn't seem intuitive to me at all why this should be a good idea, considering that it ties the state representation to the mathematical representation of group transformations. For instance, what does the special group element $e$ mean for a state? And this coupling makes it difficult to interpret the effect of using a different group of transformations: for instance when moving from GL(2) to SE(2), is the observed benefit because we are using only specific transformations, or simply because we are reducing the dimensionality of the state embedding? (note that in Fig. 4(c) the MLP variant has similar performance to GL(2), and based on my understanding they use the same embedding dimensionality ==> I believe it would be important to check what would happen with an MLP variant using the same dimensionality as SE(2)) 3. The effect of the $L_{GET}$ loss is not convincing, as pointed out by several reviewers. I think it would have been an opportunity for the authors to investigate why, especially since it seems to work in some games and not others. But just focusing on "here are the 17/26 games where it works better" doesn't really bring added value here. Do these games have some specific properties that make them better candidates to take advantage of $L_{GET}$? 
This could have been a very interesting insight if that was the case, but as it is now, I am not sure what we can learn from that. 4. There are several implementation "details", some moving the final algorithm farther from its theoretical justification, that are not ablated, making it difficult to understand their impact (ex: using target networks, the choice of the value of M, using projections onto the unit sphere of some arbitrary dimensionality, how the $s'$ state is chosen in $L_{GET}$) As a result, we have here an algorithm with some interesting theoretical background, but with a lot of moving components which -- when properly tweaked -- can lead to a statistically meaningful improvement on Atari 100K -- without really understanding why. I believe this is not quite enough for publication at ICLR, and I would encourage the authors to delve deeper into the understanding of their algorithm, which I hope will bring useful insights to the research community working on representation learning.
test
[ "j-QYh0F05__", "6N8e0_JJFH8", "-BEZJn2fkfs", "z78k-sW4-pU", "CNI1l-nvuup", "wpMm3g35sv", "CGV7g63JHnG", "BHQYNuv3xa", "RSy-4dLrA-W", "L2b6XlHKdpe", "DmhLDHideRW", "IDIdtRXAkH3", "nd-yiewWjce", "f1h5AJp-6Ox", "t_cgOVD156C", "WIxrHDUAt3e", "VkzXuV_ttzI", "t-iwwcykFg", "xae4_mHuIku"...
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "public", ...
[ "The paper focuses on the problem of incorporating symmetries into RL agents’ representations to improve data efficiency. It proposes a model-based representation learning method parameterized by a group action that is based on encoding states and state-action pairs as elements of the group's representation in late...
[ 5, -1, -1, -1, 8, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ 3, -1, -1, -1, 3, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2022_4JlwgTbmzXQ", "z78k-sW4-pU", "z78k-sW4-pU", "CGV7g63JHnG", "iclr_2022_4JlwgTbmzXQ", "VkzXuV_ttzI", "j-QYh0F05__", "iclr_2022_4JlwgTbmzXQ", "DmhLDHideRW", "IDIdtRXAkH3", "IDIdtRXAkH3", "t_cgOVD156C", "j-QYh0F05__", "j-QYh0F05__", "BHQYNuv3xa", "CNI1l-nvuup", "xT7RJCX491v", ...
iclr_2022_BrfHcL-99sy
Defending Graph Neural Networks via Tensor-Based Robust Graph Aggregation
Graph Neural Networks (GNNs) have achieved outstanding success in a wide variety of domains and applications. However, they are still vulnerable to unnoticeable perturbations of graphs specially designed by attackers, causing significant performance drops. Developing algorithms that defend GNNs with robust graphs, vaccinating them against adversarial attacks, still remains a challenging issue. Existing methods treat every edge individually and regularize each by specific robust properties, which ignores the structural relationships among edges and the correlations among different properties. In this paper, we propose a tensor-based framework for GNNs to learn robust graphs from adversarial graphs by aggregating predefined robust graphs to enhance the robustness of GNNs via tensor approximation. All the predefined robust graphs are linearly compressed into and recovered from a low-rank space, which aggregates the robust graphs and the structural information in a balanced manner. Extensive experiments on real-world graph datasets show that the proposed framework effectively mitigates the adverse effects of adversarial attacks and outperforms state-of-the-art defense methods.
Reject
This paper has been reviewed by four reviewers with three borderline scores leaning towards an accept and one clear reject. Reviewers have raised a number of issues. They feel that *the paper is borderline* as *the paper may not have great novelty* due to the use of low-rankness even though it is used for the low-rank tensor approximation and that *larger datasets* should be used to demonstrate the effectiveness of the proposed approach (even though there are no papers doing it on the large scale graphs to be fair). Also, reviewers note that they would like to see more theoretical justifications rather than just to see authors *propose a method for the adversary scenario* without full theoretical analysis. For instance, reviewers xxhm and WHUo were seeking the novel theoretical analysis in the context of adversarial robustness rather than a statement that *the problem of recovering the data under gross error has gained much attention* followed by the list of prior papers and an outline of their findings. While all reviewers agree that the empirical results look very promising, they also agree that the theoretical analysis needs an improvement. For the above reasons, however tempting, even if overlooking the reject score from the reviewer xxhm, it is difficult for AC to advocate for a clean accept.
train
[ "ojC4hLvOIJ1", "H1A3P7-R0Qd", "NgN10wHTaBW", "0ZXoYTy04Gu" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper presents a tensor-based framework to improve robustness of GNNs by denoising the adversarial graphs. By leveraging the low-rank and feature similarity properties of graph data, the authors use three predefined robust graphs with the adversarial graph to form a graph tensor, and then apply tensor decompos...
[ 6, 3, 6, 6 ]
[ 4, 4, 4, 5 ]
[ "iclr_2022_BrfHcL-99sy", "iclr_2022_BrfHcL-99sy", "iclr_2022_BrfHcL-99sy", "iclr_2022_BrfHcL-99sy" ]
iclr_2022_xo_5lb5ond
LEAN: graph-based pruning for convolutional neural networks by extracting longest chains
Neural network pruning techniques can substantially reduce the computational cost of applying convolutional neural networks (CNNs). Common pruning methods determine which convolutional filters to remove by ranking the filters individually, i.e., without taking into account their interdependence. In this paper, we advocate the viewpoint that pruning should consider the interdependence between series of consecutive operators. We propose the LongEst-chAiN (LEAN) method that prunes CNNs by using graph-based algorithms to select relevant chains of convolutions. A CNN is interpreted as a graph, with the operator norm of each operator as distance metric for the edges. LEAN pruning iteratively extracts the highest value path from the graph to keep. In our experiments, we test LEAN pruning on several image-to-image tasks, including the well-known CamVid dataset, and a real-world X-ray CT dataset. Results indicate that LEAN pruning can result in networks with similar accuracy but 3--20x fewer convolutional filters than networks pruned with methods that rank filters individually.
Reject
I do not recommend accepting this paper, although I make this decision with reservations. The review quality for this paper was not particularly strong, and I wish to emphasize to the authors that I read the paper myself in detail in the process of writing this metareview. This paper proposes a new structured pruning technique called LEAN. It involves computing an operator norm of the convolutions in a convolutional neural network, multiplying these norms over paths through the network, and keeping the paths with the highest such values (and pruning everything else). This paper makes the argument that this metric is robust to scaling and prevents network discontinuities. (In this way, the technique is very reminiscent of SynFlow (Tanaka et al., Pruning Neural Networks without any data by iteratively conserving synaptic flow) in terms of motivation and resulting technique, although SynFlow is unstructured. I do not mean this as a criticism - just a suggestion for the authors of a connection they might be able to make in the future.) One big concern I have about this paper based on the methodology alone is as follows: the paper states a number of hypotheses about why this is a sensible way to prune (e.g., in the beginning of Section 4). I see no reason why any of these hypotheses are wrong, but the paper never makes an effort to evaluate any of them. I don't mean a theoretical justification here - that's difficult and unlikely to yield useful information about what happens in practice. I mean experiments to ablate the salient properties of the heuristic mentioned in the paragraph at the beginning of Section 4 (Does scaling invariance actually matter in practice? Is network disconnectivity actually a risk in practice?). My biggest concerns about the paper, however, are in the evaluation. I share two major reviewer concerns that were mentioned: (1) The paper compares to a very limited set of baseline pruning methods, and relatively older ones at that (2019 is indeed old in the world of pruning). (2) The paper does not look at standard, "large-scale" benchmarks for computer vision - namely, ResNet-50 on ImageNet. Neither of these concerns is necessarily decisive in my view. For example, with respect to Concern 1, the reviewers unhelpfully do not suggest very many additional structured pruning benchmarks to consider, and I think the additional baselines added during the revision process have softened this concern. I would also recommend taking a look at "Growing Efficient Deep Networks by Structured Sparsification" (Yuan et al) for a useful method and a good set of baselines. There are an arbitrary number of baselines one could add and the structured pruning space is a confusing mess, but I think the claims in this paper merit more than are currently present. With respect to Concern 2, I'm even more conflicted. On the one hand, I have rarely seen any pruning techniques proposed for or evaluated on vision tasks beyond image classification, despite the fact that - in the real world - segmentation is much more popular than it would seem by reading the ICLR proceedings. To that end, I applaud the authors for focusing on those settings and I see substantial value in a paper that does so. On the other hand, ResNet-50 on ImageNet (among other standard classification benchmarks) is the de facto measuring stick for evaluating pruning methods in computer vision, and the exclusive focus on segmentation here means it is very difficult to compare the proposed technique to other benchmarks. 
If the paper is to focus on segmentation alone, this places a higher burden on adding many additional comparisons to other methods (i.e., Concern 1). Finally, I don't see any reason why the paper *couldn't* also include ResNet-50 on ImageNet or the like in addition to segmentation; if it is a limitation on the compute available to the authors (something I empathize with), they did not say so in any of the author responses. Upon reading the author responses, I was left asking, "Why not both?" For those reasons, I do not recommend accepting the paper, although I think there are some good reasons to value the paper's contributions. At the end of the day, there are some relatively simple things that could be changed to make the paper much easier to contextualize within the pruning literature. As of right now, it would be very difficult for me to say whether or in what contexts this method should actually be used in practice. (P.S. I agree with the reviewers that Figure 3 is exceptionally hard to parse.)
train
[ "K4hQQlCCKWl", "95XZaFRYSj_", "rvIC6hHo1t", "2xJ7o0QH-tX", "7CDuzGlCy8-", "FAO-K513FI_", "_me5tGumKt1", "qkxMrrbbSHh", "IqypP-vXVib", "gq1DKcvfLpE", "Aid0tTVN0ah", "eBKLG186D9x", "RB8YtKrm1f1", "KkpBUaBmLU" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response.\nI think the content of this paper is trying to claim that it proposes a new pruning method, however, the paper only provides comparisons on several small datasets, instead of the most decent benchmark dataset ImageNet. I cannot buy it this way. The comparisons with several NON-sota...
[ -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 5, 5 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 3 ]
[ "FAO-K513FI_", "iclr_2022_xo_5lb5ond", "7CDuzGlCy8-", "KkpBUaBmLU", "95XZaFRYSj_", "RB8YtKrm1f1", "eBKLG186D9x", "Aid0tTVN0ah", "iclr_2022_xo_5lb5ond", "KkpBUaBmLU", "iclr_2022_xo_5lb5ond", "iclr_2022_xo_5lb5ond", "iclr_2022_xo_5lb5ond", "iclr_2022_xo_5lb5ond" ]
iclr_2022_famc03Gg231
Physical Gradients for Deep Learning
Solving inverse problems, such as parameter estimation and optimal control, is a vital part of science. Many experiments repeatedly collect data and employ machine learning algorithms to quickly infer solutions to the associated inverse problems. We find that state-of-the-art training techniques are not well-suited to many problems that involve physical processes since the magnitude and direction of the gradients can vary strongly. We propose a novel hybrid training approach that combines higher-order optimization methods with machine learning techniques. We replace the gradient of the physical process by a new construct, referred to as the physical gradient. This also allows us to introduce domain knowledge into training by incorporating priors about the solution space into the gradients. We demonstrate the capabilities of our method on a variety of canonical physical systems, showing that physical gradients yield significant improvements on a wide range of optimization and learning problems.
Reject
The paper introduces a method to solve inverse problems: given y, find x such that P(x)=y, for a given physical simulator P. A standard approach is to learn a neural net such that the inverse x=NN(y;\theta). The authors state that this is problematic because it is difficult to take "higher order" gradient information into consideration when using this standard approach. The method assumes that there is an approximate inverse solver inv(P) and discusses an alternative "Physical Gradient" objective that can incorporate knowledge of an approximate inv(P) and a neural network. The experiments are good though comparing performance on an iteration basis is not always fair since an iteration of the PG method can be much more expensive than standard approaches. The biggest issue that reviewers had was the clarity of the presentation. The authors have made a reasonable attempt to correct this, but I'm inclined to agree with the general reviewer sentiment that the presentation is still not at the required level. I agree that there are many things that are not clear, including the confusing discussion in section 2.1 about how the method takes higher order information into consideration. It only becomes partially transparent later in the experiments what is meant by higher order information. Overall, I feel this is the basis of a potentially valuable contribution but that the current presentation is quite confusing. As mentioned by others, I would also suggest to find a different name since Physical Gradient is also rather misleading. The following points were not part of the review process and I do not base the final decision on them, but the authors may want to consider the following: I believe there is also an error in the basic approach, or at least an approximation is made which is not explained. The error is that the approximate inv(P) depends also on the parameter \theta (since this is used to initialise inv(P)). This dependency is not taken into consideration in the paper. For example, in theorem 1 in appendix A, the calculation of the gradient dM/d\theta is incorrect since the authors assume that inv(P) is independent of \theta, which it is not (since the preconditioner value depends on \theta). If we do take this into account, we would need to know the derivative of inv(P|x) with respect to the preconditioner x. This dependency would alter the gradient, potentially considerably. The gradient in figure 2 for the PG is also incorrect. One may of course simply say that the paper discounts this correction term in order to retain tractability; however, this would need to be stated as an approximation.
train
[ "VbKPR1N_T2H", "33hHRHsc0S", "Y4_q6j-5xZs", "0oV7KiiTVPF", "_Vr_PtQsnaI", "YhvCbsg0MDI", "7t7M5SPgKI4", "zJnETX31oME", "NeytnpnKa4D", "FQFiFb-RdYX", "Phw8uyhhJ0", "Y7PKnB-eHY1", "8spuWh6jocM", "_krnD6_0u_l", "WF_8-eyRjgb", "TWHLgC9sFJ9", "hb20W5MCmH", "aZyIgEgWpqg", "vOaZ6i6osSG"...
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_revi...
[ " Sorry now for my delayed response! Maybe I'm misunderstanding, but I don't believe training requires forward simulation here -- rather, just a cheap approximation to the inverse solver. (Notably Equation 5, the equation the authors optimize, does not involve $\\mathcal{P}$, but instead $\\mathcal{P}^{-1}_n$.)", ...
[ -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2 ]
[ "zJnETX31oME", "iclr_2022_famc03Gg231", "0oV7KiiTVPF", "_Vr_PtQsnaI", "7t7M5SPgKI4", "Phw8uyhhJ0", "jt7ZcbS5iN4", "NeytnpnKa4D", "FQFiFb-RdYX", "TWHLgC9sFJ9", "G5rH31qtJFA", "TWHLgC9sFJ9", "iclr_2022_famc03Gg231", "aZyIgEgWpqg", "vOaZ6i6osSG", "iclr_2022_famc03Gg231", "iclr_2022_famc...
iclr_2022_eIvzaLx6nKW
Multi-Domain Self-Supervised Learning
Contrastive self-supervised learning has recently gained significant attention owing to its ability to learn improved feature representations without the use of label information. Current contrastive learning approaches, however, are only effective when trained on a particular dataset, limiting their utility in diverse multi-domain settings. In fact, training these methods on a combination of several domains often degrades the quality of learned representations compared to the models trained on a single domain. In this paper, we propose a Multi-Domain Self-Supervised Learning (MDSSL) approach that can effectively perform representation learning on multiple, diverse datasets. In MDSSL, we propose a three-level hierarchical loss for measuring the agreement between augmented views of a given sample, agreement between samples within a dataset and agreement between samples across datasets. We show that MDSSL when trained on a mixture of CIFAR-10, STL-10, SVHN and CIFAR-100 produces powerful representations, achieving up to a $25\%$ increase in top-1 accuracy on a linear classifier compared to single-domain self-supervised encoders. Moreover, MDSSL encoders can generalize more effectively to unseen datasets compared to both single-domain and multi-domain baselines. MDSSL is also highly efficient in terms of the resource usage as it stores and trains a single model for multiple datasets leading up to $17\%$ reduction in training time. Finally, for multi-domain datasets where domain labels are unknown, we propose a modified approach that alternates between clustering and MDSSL. Thus, for diverse multi-domain datasets (even without domain labels), MDSSL provides an efficient and generalizable self-supervised encoder without sacrificing the quality of representations in individual domains.
Reject
This paper introduces a multi-domain self-supervised representation learning method. Its objective consists of three terms: the first two terms are identical to SimCLR, and the last one minimizes the similarity of pairs across different datasets, which is similar to the second term of SimCLR. In the experiments, the method is tested across multiple common datasets. The method is simple, but the results are pretty good in the multi-domain setting. It seems to demonstrate the importance of domain clustering and moving the domains apart. However, there are several important questions the paper may need more clarification on: 1. What is the definition of the domain? How do we determine whether a pair of samples is from different domains? What is the motivation/theory that you used to choose those datasets as different domains in your experiment? 2. Are there any public datasets that cover multiple domains? Without answers to these questions, I think it would constrain the future research/adoption of the method.
train
[ "zPJS3AnQPS", "sYprvfPtQ5c", "61tIaHQ94kj", "W8O_RKKdsj9", "jARBCxtYMyr", "yOAcuWLdVHX", "EEPffVxEq54", "_nzPfu7UAks", "Tjw2Zcz-KQ", "keI66gGMISx", "vYu6Q1UeIDu", "4RtpObWU7j", "3ndeeMjN3WX", "bdzeCokk71j" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank Reviewer ZUR1 for their response. \n\n2. We would like to highlight Table1, Table 3 and Figure 5 where we provide the example of multi-domain setups including CIFAR-10 and STL-10 which overlap on 9 out of 10 classes. We believe this is an appropriate example for largely overlapped multi-domain setups tha...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "sYprvfPtQ5c", "61tIaHQ94kj", "W8O_RKKdsj9", "yOAcuWLdVHX", "keI66gGMISx", "Tjw2Zcz-KQ", "vYu6Q1UeIDu", "iclr_2022_eIvzaLx6nKW", "bdzeCokk71j", "4RtpObWU7j", "3ndeeMjN3WX", "iclr_2022_eIvzaLx6nKW", "iclr_2022_eIvzaLx6nKW", "iclr_2022_eIvzaLx6nKW" ]
iclr_2022_ygGMP1zkiD1
Debiasing Pretrained Text Encoders by Paying Attention to Paying Attention
Recent studies in fair Representation Learning have observed a strong inclination for natural language processing (NLP) models to exhibit discriminatory stereotypes across gender, religion, race and many such social constructs. In comparison to the progress made in reducing bias from static word embeddings, fairness in sentence-level text encoders has received little consideration despite their wider applicability in contemporary NLP tasks. In this paper, we propose a debiasing method for pre-trained text encoders that both reduces social stereotypes and inflicts next to no semantic offset. Unlike previous studies that directly manipulate the embeddings, we suggest diving deeper into the operation of these encoders, and pay more attention to the way they pay attention to different social groups. We find that the attention mechanism is the root of all stereotypes. Then, we work on model debiasing by redistributing the attention scores of a text encoder such that it forgets any preference for historically advantaged groups, and attends to all social classes with the same intensity. Our experiments confirm that we successfully reduce bias with little damage to semantic representation.
Reject
This paper presents a debiasing technique that modifies a model's attention mechanism by equalizing attention across social groups. The authors show that their approach (which is perhaps the first of its kind to look at transformer based models and debiasing instead of fixed word representations) work well in debiasing across certain social group indicators while maintaining overall performance. However, there is disagreement between reviewers in terms of acceptance of the paper (especially Reviewer 7L6Q wants the paper to be rejected and points to recent critiques such as https://aclanthology.org/2021.acl-long.81.pdf that point out pitfalls with the benchmarks used in this paper). I agree with said reviewer that a lot of these benchmarks are toy-ish and finding real impact of bias in NLP models is quite elusive. Hence, I am recommending the paper be rejected for ICLR 2022 and the suggestions below be incorporated towards a better draft for the future.
train
[ "LKrG4vksfiR", "ouMGoiIYOXw", "J5k-oIpmBIR", "qS0ePcLpz6C", "WGKEXRwj76I", "WTDbOckDWYQ", "MUXUbRfM4Dk", "At6SaAN2o8n", "baAlTaPq10", "1W7BHPJkVzn" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Glad to hear you found the suggestion useful and that the results for HateXplain task are promising. Good luck with your work!", " We would like to thank all reviewers for spending time and effort in reviewing our work and providing valuable feedback. In our answers, we did our best to address all of your comme...
[ -1, -1, -1, -1, -1, -1, -1, 3, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "J5k-oIpmBIR", "iclr_2022_ygGMP1zkiD1", "1W7BHPJkVzn", "At6SaAN2o8n", "At6SaAN2o8n", "MUXUbRfM4Dk", "baAlTaPq10", "iclr_2022_ygGMP1zkiD1", "iclr_2022_ygGMP1zkiD1", "iclr_2022_ygGMP1zkiD1" ]
iclr_2022_vEIVxSN8Xhx
Log-Polar Space Convolution
Convolutional neural networks use regular quadrilateral convolution kernels to extract features. Since the number of parameters increases quadratically with the size of the convolution kernel, many popular models use small convolution kernels, resulting in small local receptive fields in lower layers. This paper proposes a novel log-polar space convolution (LPSC) method, where the convolution kernel is elliptical and adaptively divides its local receptive field into different regions according to the relative directions and logarithmic distances. The local receptive field grows exponentially with the number of distance levels. Therefore, the proposed LPSC not only naturally encodes local spatial structures, but also greatly increases the single-layer receptive field while maintaining the number of parameters. We show that LPSC can be implemented with conventional convolution via log-polar space pooling and can be applied in any network architecture to replace conventional convolutions. Experiments on different tasks and datasets demonstrate the effectiveness of the proposed LPSC.
Reject
This paper proposes an alternative for constructing convolution kernels: instead of uniform spatial resolution, it proposes a spatially varying resolution with higher precision at the center of the kernel. The resolution decreases logarithmically as a function of the distance to the center. All reviewers agree that the idea is interesting, but in its current form, the submission is not mature enough to be published. In particular, reviewers raised some concerns about the computational efficiency of the method. The authors explain that their method runs slower than conventional convolution because the implementation uses off-the-shelf conventional convolution modules, and they speculate that the speed can be accelerated if the method is directly implemented with CUDA or by directly adapting the underlying code of convolutions in the integrated framework. While this is a reasonable argument, it is not actually verified. Thus it is not clear whether there would be other roadblocks to achieving the promised performance. It would be great if the authors could present the actual performance of the method using either of their suggested solutions (CUDA or modifying the code of convolutions). In addition, reviewers raised concerns about some aspects of the evaluation setup, where test data is used to report the best performance. The authors respond that baselines are trained in the same fashion, hence the comparison is still fair. However, the reviewers were not convinced by this response. In concordance, I also think the use of test data during training is misleading, even if all methods use the same strategy, because this may tell us more about which approach can better (over)fit to the data as opposed to how well the methods are able to generalize to unseen samples. Another concern relates to the diminishing return in the performance as networks get larger. The authors respond that this might be because only the first layer uses the proposed log-polar convolution, speculating that the problem will go away if the proposed approach is used in all layers. However, this again is not empirically verified, and it remains unclear whether this is indeed the reason. I suggest the authors resubmit after accommodating the provided feedback.
train
[ "LGESf7JU69", "V-L1agUezCl", "Ned0LfAIuwB", "461bpAQj29c" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper revisits the old of idea encodings that have higher density around a particular point and lower density as one moves away from a particular analysis point. In the literature, these have often been called log-polar filters, and including classical computer vision techniques such as shape context features...
[ 3, 5, 5, 5 ]
[ 4, 4, 4, 4 ]
[ "iclr_2022_vEIVxSN8Xhx", "iclr_2022_vEIVxSN8Xhx", "iclr_2022_vEIVxSN8Xhx", "iclr_2022_vEIVxSN8Xhx" ]
iclr_2022_mFpP0THYeaX
Gradual Domain Adaptation in the Wild: When Intermediate Distributions are Absent
We focus on the problem of domain adaptation when the goal is shifting the model towards the target distribution, rather than learning domain invariant representations. It is shown that under the following two assumptions: (a) access to samples from intermediate distributions, and (b) samples being annotated with the amount of change from the source distribution; self-training can be successfully applied on gradually shifted samples to adapt the model toward the target distribution. We hypothesize having (a) is enough to enable iterative self-training to slowly adapt the model to the target distribution, by making use of an implicit curriculum. In the case where (a) does not hold, we observe that iterative self-training falls short. We propose GIFT (Gradual Interpolation of Features toward Target), a method that creates virtual samples from intermediate distributions by interpolating representations of examples from source and target domains. Our analysis of various synthetic distribution shifts shows that in the presence of (a) iterative self-training naturally forms a curriculum of samples which helps the model to adapt better to the target domain. Furthermore, we show that when (a) does not hold, more iterations hurt the performance of self-training, and in these settings GIFT is advantageous. Additionally, we evaluate self-training, iterative self-training and GIFT on two benchmarks with different types of natural distribution shifts and show that when applied on top of other domain adaptation methods, GIFT improves the performance of the model on the target dataset.
Reject
Most reviewers agree that the paper addresses a relevant problem. However, they also believe that the paper falls short on several points: claims that are not well supported, clarity in places, and incremental novelty.
train
[ "xlZbTsLCYPo", "B8ZvCWLrHjP", "ig-5MA9i1PT", "cYOka4iJ3vt", "5OZMo1eUSu", "4EzPxKhWKRF", "J0vbBiCxqnp", "ysl4B34aJwx", "gnPV1DpuImE", "m1D-KNtniT", "E5jAsXJnoqV", "RqkoDjjAQBt", "54ddxKpqJa", "yzc-EFatwkq", "pDxsuhBoNui", "KtEdpfB5wN" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I appreciate the authors’ efforts in answering my questions. I keep my score.", " Dear authors, thank you for your response and updates on the paper. Although the authors addressed some of the points I raised in my review, the most important concerns I have regarding this work still remain. For example, it is n...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 5, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 3, 5, 4 ]
[ "4EzPxKhWKRF", "m1D-KNtniT", "E5jAsXJnoqV", "J0vbBiCxqnp", "iclr_2022_mFpP0THYeaX", "KtEdpfB5wN", "pDxsuhBoNui", "yzc-EFatwkq", "54ddxKpqJa", "gnPV1DpuImE", "RqkoDjjAQBt", "iclr_2022_mFpP0THYeaX", "iclr_2022_mFpP0THYeaX", "iclr_2022_mFpP0THYeaX", "iclr_2022_mFpP0THYeaX", "iclr_2022_mFp...
iclr_2022_R612wi_C-7w
Stable cognitive maps for Path Integration emerge from fusing visual and proprioceptive sensors
Spatial navigation in biological agents relies on the interplay between external (visual, olfactory, auditory, $\dots$) and proprioceptive (motor commands, linear and angular velocity, $\dots$) signals. How to combine and exploit these two streams of information, which vastly differ in terms of availability and reliability is a crucial issue. In the context of a new two--dimensional continuous environment we developed, we propose a direct-inverse model of environment dynamics to fuse image and action related signals, allowing reconstruction of the action relating the two successive images, as well as prediction of the new image from its current value and the action. The definition of those models naturally leads to the proposal of a minimalistic recurrent architecture, called Resetting Path Integrator (RPI), that can easily and reliably be trained to keep track of its position relative to its starting point during a sequence of movements. RPI updates its internal state using the (possibly noisy) proprioceptive signal, and occasionally resets it when the image signal is present. Notably, the internal state of this minimal model exhibits strong correlation with position in the environment due to the direct-inverse models, is stable across long trajectories through resetting, and allows for disambiguation of visually confusing positions in the environment through integration of past movement, making it a prime candidate for a \textbf{cognitive map}. Our architecture is compared to state-of-the-art LSTM networks on identical tasks, and consistently shows better performance while also offering more interpretable internal dynamics and higher-quality representations.
Reject
The paper considers the problem of path integration in cognitive maps, where combining proprioception with visual inputs is required to estimate the displacement. The paper proposes a small mechanism (a resetting path integrator) that extends a conventional LSTM for this purpose. The resulting networks demonstrate better performance and interpretability than a conventional LSTM on tested problems. The reviewers raised many issues with the paper. One concern was whether the problem was to model biological, artificial, or robot problems (reviewer hpxs, PuPV), which the authors successfully addressed by stating that it is a minimal model. Many other minor concerns were also addressed. However, significant concerns remained. One is the emphasis on the cognitive map (reviewer CUYu, hpxs) for which path-integration is a small part. Another major concern is the significance of the results, with reference to the baselines and $R^2$ (AjJt, CUYu). A third is on the generalizability of the method beyond single small examples (AjJT, hpxs, CUYu). All reviewers indicate reject due to concerns that the paper is not ready for publication. The paper is therefore rejected.
train
[ "P3sdSapY_pA", "byCvcWfOCwu", "e5mfO7tOYNe", "ZHwJIAgVOv", "GJf4OdbwCDT", "cqIDckyl6S", "zaxJHm0xTcr", "lEKnMNsuvXu", "Ut2_Ro9Zb5t", "xwXKDWuCpDW", "rOjNU02MrmH", "unRPlZFr5aM", "STiy8Dt0Ejg", "DGYKLEtlLNw", "2Z5cD6U_T_", "ZbH-zxfty9K", "RgYlutrxOGh", "QS1z77VZmTs", "FBHwztKGklU"...
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", ...
[ " > We understand the significance of these comparisons, which are in fact subsets of already existing experiments:\n\nThe outcome from the G=0 case is not captured in the current experiments. I think this is an important baseline, because it establishes to what degree the learning system benefits from its predicti...
[ -1, 5, -1, 5, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3 ]
[ -1, 2, -1, 4, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "loe9nAyC1YI", "iclr_2022_R612wi_C-7w", "unRPlZFr5aM", "iclr_2022_R612wi_C-7w", "rOjNU02MrmH", "ZbH-zxfty9K", "Ut2_Ro9Zb5t", "iclr_2022_R612wi_C-7w", "QS1z77VZmTs", "ZHwJIAgVOv", "ZHwJIAgVOv", "byCvcWfOCwu", "hKMauw4OAgr", "hKMauw4OAgr", "hKMauw4OAgr", "hKMauw4OAgr", "lEKnMNsuvXu", ...
iclr_2022_krI-ahhgN2
Self-Contrastive Learning
This paper proposes a novel contrastive learning framework, called Self-Contrastive (SelfCon) Learning, that self-contrasts within multiple outputs from the different levels of a multi-exit network. SelfCon learning does not require additional augmented samples, which resolves the concerns of multi-viewed batch (e.g., high computational cost and generalization error). Furthermore, we prove that SelfCon loss guarantees the lower bound of label-conditional mutual information between the intermediate and the last feature. In our experiments including ImageNet-100, SelfCon surpasses cross-entropy and Supervised Contrastive (SupCon) learning without the need for a multi-viewed batch. We demonstrate that the success of SelfCon learning is related to the regularization effect associated with the single-view and sub-network.
Reject
This paper modifies the loss of supervised contrastive (SupCon) learning by adding a self-contrastive loss. Utilizing a multi-exit network and contrasting the multiple outputs of this network, the proposed self-contrastive (SelfCon) learning removes the requirement of additional data augmentation samples for creating positive pairs. The proposed SelfCon loss is theoretically connected to the lower bound of a label-conditional mutual information between the intermediate and last feature. The paper focuses its study on SupCon & SelfCon-M, which are multi-batch variants that first augment the images, and then contrast the views between both augmented and non-augmented samples of the same class, and on SupCon-S and SelfCon-S, which are single-batch variants that only contrast between the samples of the same class and do not require additional data augmentations. A wide variety of experiments have been done on CIFAR-10, CIFAR-100, TinyImageNet, ImageNet-100, and ImageNet, but mostly with relatively small networks. The ratings for the paper were mixed [3,5,5,8 before rebuttal; 5,5,6,8 after rebuttal]. All four reviewers had provided detailed initial reviews, pointing out a long list of issues. The authors had incorporated these reviews to make a large number of improvements to their initial submission. After the author rebuttal period, while one reviewer raised the score from 5 to 6, two reviewers maintained their negative positions: Reviewer ZiPE is clearly concerned about the risk of accepting a method that may break as soon as a slightly larger model (ResNet-50 instead of ResNet-18) is used, the model is trained a bit longer, or the baselines are tuned, while Reviewer MBzi is unsatisfied with how the paper motivates its empirical construction from the perspective of mutual information maximization. Given the disagreements between the reviewers, the AC has carefully read the paper to provide an additional review. Some concerning observations of the AC are summarized as follows: 1. Echoing the concern of Reviewer ZiPE, the performance gain of SelfCon-S over SupCon diminishes in ImageNet with ResNet-18, as shown in Table 13, making it even more important for the authors to conduct experiments following more standard settings (e.g., ResNet-50 on ImageNet). 2. The main paper seems to suggest SelfCon-S outperforms SelfCon-M and SupCon outperforms SupCon-S, while Table 13 in the Appendix suggests the opposite. 3. Table 3, which compares SelfCon-S with SupCon, appears very misleading, as SupCon consumes more memory and computation than SelfCon-S simply because it uses data augmentations. If SupCon-S is used, it would take less memory and computation than SelfCon-S. 4. SelfCon-S adds a subnetwork to the backbone to boost its performance, so technically, it has more parameters than the backbone. Comparing it with a baseline that only uses a backbone model does not seem to be that fair. This point has not been discussed in the paper. 5. Last but not least, echoing the concerns of Reviewer ZiPE and MBzi, the paper seems to try to validate the motivation of the added loss with mutual information maximization. However, establishing the causal relationship between maximizing the mutual information of the intermediate and last layers and the classification performance needs much more than the correlation analysis provided in the paper. Given the above-mentioned concerns, the AC does not consider the paper to be ready for publication at its current stage.
train
[ "g7KQCJyhgWC", "3_B_N_9jw5V", "f1bXQE7voU0", "STrZCGlEoN", "KGAUpk399Z2", "wDqRa73CKWP", "QggIC3SMTNd", "yzp015TlTp", "RIV0LOeOMaP", "M35z8uzVjnC", "tflyMHhVO-j", "Ww4oPdgSSQz", "S2astyDp_ar", "7JHm4nw2SH", "xvUGUs1aNEr", "bS6_ReqRBWC", "PvAELjTP3zK", "sOeIew3fJUx", "HHlcX1ODo4",...
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", ...
[ " Thank you for the detailed feedback on my comments. Although I think that the quality of the manuscript would be further increased if revised according to the authors' feedbacks, unfortunately, I also agree with some of the concerns from other reviewers. Therefore, I lowered my rating to 6.", "Recent contrastiv...
[ -1, 6, -1, 6, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ -1, 4, -1, 4, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "3_B_N_9jw5V", "iclr_2022_krI-ahhgN2", "wRxPrFj-FhF", "iclr_2022_krI-ahhgN2", "QggIC3SMTNd", "yzp015TlTp", "7pju4PiRezq", "1wBzKoX-rZB", "iclr_2022_krI-ahhgN2", "3_B_N_9jw5V", "RIV0LOeOMaP", "STrZCGlEoN", "wRxPrFj-FhF", "xvUGUs1aNEr", "HHlcX1ODo4", "awm6dLZnsR0", "STrZCGlEoN", "icl...
iclr_2022_pVU7Gp7Nq4k
Representation mitosis in wide neural networks
Deep neural networks (DNNs) defy the classical bias-variance trade-off: adding parameters to a DNN that interpolates its training data will typically improve its generalization performance. Explaining the mechanism behind this "benign overfitting" in deep networks remains an outstanding challenge. Here, we study the last hidden layer representations of various state-of-the-art convolutional neural networks and find evidence for an underlying mechanism that we call "representation mitosis": if the last hidden representation is wide enough, its neurons tend to split into groups which carry identical information, and differ from each other only by statistically independent noise. Like in a mitosis process, the number of such groups, or "clones", increases linearly with the width of the layer, but only if the width is above a critical value. We show that a key ingredient to activate mitosis is continuing the training process until the training error is zero.
Reject
This paper undertakes an empirical investigation of overparameterized neural networks, studying the last hidden representation and identifying "representation mitosis," a cloning effect whereby neurons split into groups that carry the same information. The effect is observed for a variety of architectural configurations/datasets, and a detailed set of experiments is performed to investigate the behavior. The reviewers had split opinions about this paper, with most reviewers appreciating the novelty and salience of the observations, but with some reviewers expressing skepticism about the generality of the effect. While the experiments are thorough and revealing, the practical importance of representation mitosis remains somewhat unconvincing. A primary motivating factor for the analysis is the search for an explanation of the unexpectedly good generalization behavior of overparameterized networks and the origin of "benign overfitting." As highlighted in the reviews, the sensitivity of the mitosis effect to (1) training to zero loss and (2) optimal regularization suggests that it cannot be the sole explanation for benign overfitting, since the latter can and does occur without these conditions. The authors acknowledge this situation, and respond that their focus is on state-of-the-art models used by the community, rather than on toy settings. For this to be a persuasive response, more compelling results in these state-of-the-art situations should be presented -- in particular, as several reviewers pointed out, the negative results on ImageNet undermine this point to some extent. Overall, representation mitosis does seem like an interesting and potentially important phenomenon, but further work is needed to develop persuasive evidence in support of the interpretations and implications. While this is a borderline submission, I believe it falls just short of the mark, and cannot recommend acceptance.
train
[ "FowRtwr4fl3", "t8PqqrHLpke", "ijI2sUT9pDY", "NWLUSfwZ7cH", "nJ3KGDeuHX-", "Bxw6oO9rFeZ", "4CNWzepdijr", "ar837Z5-l5q", "BYkR24K0LOD", "FhqHe5BQu16", "OngPktxVB2", "Pq6tYRZfh8b", "QAtTeIUAkK8", "XjvPVQ0p4M" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper identify and studies a phenomenon called \"representation mitosis\": as the neural network width increases above a critical threshold, the neurons in the last layer becomes \"redundant\" in the sense that they start to form groups of neurons where they carry identical information, and differ from each o...
[ 5, -1, -1, -1, -1, 6, -1, 6, -1, -1, -1, -1, -1, 5 ]
[ 4, -1, -1, -1, -1, 2, -1, 4, -1, -1, -1, -1, -1, 2 ]
[ "iclr_2022_pVU7Gp7Nq4k", "ar837Z5-l5q", "4CNWzepdijr", "Pq6tYRZfh8b", "QAtTeIUAkK8", "iclr_2022_pVU7Gp7Nq4k", "FhqHe5BQu16", "iclr_2022_pVU7Gp7Nq4k", "ar837Z5-l5q", "OngPktxVB2", "Bxw6oO9rFeZ", "FowRtwr4fl3", "XjvPVQ0p4M", "iclr_2022_pVU7Gp7Nq4k" ]
iclr_2022_fyLvrx9M9YP
Towards Unsupervised Content Disentanglement in Sentence Representations via Syntactic Roles
Linking neural representations to linguistic factors is crucial in order to build and analyze NLP models interpretable by humans. Among these factors, syntactic roles (e.g., subjects, direct objects, $\dots$) and their realizations are essential markers since they can be understood as a decomposition of predicative structures and thus the meaning of sentences. Starting from a deep probabilistic generative model with attention, we measure the interaction between latent variables and realizations of syntactic roles, and show that it is possible to obtain, without supervision, representations of sentences where different syntactic roles correspond to clearly identified, distinct latent variables. The probabilistic model we propose is an Attention-Driven Variational Autoencoder (ADVAE). Drawing inspiration from Transformer-based machine translation models, ADVAEs enable the analysis of the interactions between latent variables and input tokens through attention. We also develop an evaluation protocol to measure disentanglement with regard to the realizations of syntactic roles. This protocol is based on attention maxima for the encoder and on disturbing individual latent variables for the decoder. Our experiments on raw English text from the SNLI dataset show that $\textit{i)}$ disentanglement of syntactic roles can be induced without supervision, $\textit{ii)}$ ADVAE separates more syntactic roles than classical sequence VAEs, and $\textit{iii)}$ realizations of syntactic roles can be separately modified in sentences by mere intervention on the associated latent variables. Our work constitutes a first step towards unsupervised controllable content generation. The code for our work is publicly available.
Reject
This paper proposes improving human interpretability and manipulability of neural representations by obtaining syntactic roles (here, subject, object, prepositional object, and main verb) without supervision by means of them becoming linked to latent variables in a novel proposed attention-driven VAE (ADVAE) model, which provides cross attention between a language transformer and latent variables. The paper argues that syntactic roles are quite central to meaning interpretation and that the ADVAE recovers them better than LSTM or Transformer (with mean pooling) VAEs. This is a quite interesting direction and paper. There was active discussion with the reviewers, one of whom (9pDc) moved their rating from reject to quite strong support, while the other reviewers either sat on the fence or raised from reject to borderline. Nevertheless, I overall tend to agree that the paper is still lacking in empirical support, a view clearly shared by reviewers WuPD and 7uFL. The SNLI data consists of very simple descriptive sentences, nearly all in the form of S V O or S V PP. Would this work on more complex data, in other languages, or with more word order variation? There isn't very much investigation, but the new results added during reviewing based on Yelp data seem to offer more concerns than confidence. These are also very short sentences but with more varied structure and some complementation. It seems like D_{dec} is now very low (much lower than for the sequence VAE), the ability to distinguish grammatical roles seems limited to {subj} vs. {dobj, pobj} in the encoder and absent altogether in the decoder (Figure 6/7). And then for the examples in Appendix D, the disentanglement abilities barely seem stronger than being able to pick out subjects, though when there are sentences with subordinate clauses, it is perhaps random which subject you get. The resampled realizations in Appendix H also seem to show limited disentanglement: resampling the subject usually seems to change the object as well, often markedly. No convincing downstream applications are shown. As such, while I agree that disentanglement is at the heart of representation learning, I can't get on board with reviewer 9pDc's feeling that this paper now has convincing results. Reviewer 7uFL also emphasizes that there is no strong reason that the latent variables have to align with syntactic roles. In particular, the motivation in NMT whereby constituents clump and reorder together does not exist here. It may only work for the very simple and regular sentences of SNLI. Hence, overall, I feel that this method needs more extensive validation on harder, more varied datasets before it becomes a convincing contribution, and so I propose rejecting the paper at this point in time. Nevertheless, I do think the topic is interesting and this approach has the potential to be good.
train
[ "y5tQPA-NDaR", "hFVygtLwbVg", "hs_93HVuQo", "FXEljQlkjv", "yqQ4iEiV2lk", "0tiHMEr6SmX", "3tLzgXO-rt", "41qjnP-Wy9H", "5Fb373suXGT", "Qf4jOkojtIx", "FpfnKwacjfD", "O9LMZEVmnv", "O9WtFjklFmU", "mH8EwnR2nk6", "42ZbP3kx_2S", "l0u_Koq6he", "qEwAVPc4j7" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " While I accept that Yelp is a harder dataset with more variable syntactic constructions, don't the results in Appendix B tend to undermine the idea that in general this method is able to disentangle grammatical roles. E.g., doesn't Figure 7 show that 4 of the 8 latent variables strongly represent the subject whil...
[ -1, -1, -1, -1, 5, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ -1, -1, -1, -1, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2022_fyLvrx9M9YP", "l0u_Koq6he", "FXEljQlkjv", "O9LMZEVmnv", "iclr_2022_fyLvrx9M9YP", "iclr_2022_fyLvrx9M9YP", "41qjnP-Wy9H", "5Fb373suXGT", "Qf4jOkojtIx", "O9WtFjklFmU", "iclr_2022_fyLvrx9M9YP", "yqQ4iEiV2lk", "0tiHMEr6SmX", "qEwAVPc4j7", "l0u_Koq6he", "iclr_2022_fyLvrx9M9YP", ...
iclr_2022_HI99z0aLsl
Benign Overfitting in Adversarially Robust Linear Classification
"Benign overfitting", where classifiers memorize noisy training data yet still achieve good generalization performance, has drawn great attention in the machine learning community. To explain this surprising phenomenon, a series of works have provided theoretical justification in over-parameterized linear regression, classification, and kernel methods. However, it is not clear if benign overfitting still occurs in the presence of adversarial examples, i.e., examples with tiny and intentional perturbations to fool the classifiers. In this paper, we show that benign overfitting indeed occurs in adversarial training, a principled approach to defend against adversarial examples. In detail, we prove risk bounds for the adversarially trained linear classifier on a mixture of sub-Gaussian data under $\ell_p$ adversarial perturbations. Our result suggests that under moderate perturbations, adversarially trained linear classifiers can achieve near-optimal standard and adversarial risks, despite overfitting the noisy training data. Numerical experiments validate our theoretical findings.
Reject
The paper studies the benign overfitting phenomenon for linear models with adversarial training. The main issue is that the result is quite expected for experts versed in the benign overfitting literature, and indeed the reviewers pointed out that they could not see much technical novelty. However, even more importantly, the original benign overfitting papers had the advantage of proposing a simpler model (linear!!!) with the same behavior as the complex ones in practice. This is not the case here, as the results diverge from empirical observations on deep networks. The authors argue that it is a valuable finding that the empirical observation is not "universal", but this is a somewhat moot point as linear models are a priori very different from the setting in which these empirical observations were made. For these reasons I believe the paper does not meet the bar for ICLR (yet it could still be publishable elsewhere).
train
[ "l9x3ur_DQu", "3AdgsB_LDM3", "0C0x1UhCbvI", "a6FpJ4Y9vFm", "NTpwAl-JEWd", "BLrbX0cx4Sq", "6gvkXHQiY_J", "HGIjeKGVapa", "sYzVa4Shox_", "IOUzN8U1RM" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your reply and additional questions.\n\n**\"Regarding Q1, I understand that the empirical observations do not refute your results. My point is that, given the theory produces predictions that are inconsistent with the empirical data, this is a sign that the assumptions underlying the theoretical set...
[ -1, -1, 6, -1, -1, -1, -1, 5, 5, 5 ]
[ -1, -1, 4, -1, -1, -1, -1, 4, 3, 2 ]
[ "3AdgsB_LDM3", "NTpwAl-JEWd", "iclr_2022_HI99z0aLsl", "HGIjeKGVapa", "IOUzN8U1RM", "0C0x1UhCbvI", "sYzVa4Shox_", "iclr_2022_HI99z0aLsl", "iclr_2022_HI99z0aLsl", "iclr_2022_HI99z0aLsl" ]
iclr_2022_VwSHZgruNEc
Safe Opponent-Exploitation Subgame Refinement
Search algorithms have been playing a vital role in the success of superhuman AI in both perfect information and imperfect information games. Specifically, search algorithms can generate a refinement of a Nash equilibrium (NE) approximation in games such as Texas hold'em with theoretical guarantees. However, when confronted with opponents of limited rationality, an NE strategy tends to be overly conservative, because it prefers to maintain its low exploitability rather than actively exploit the weaknesses of opponents. In this paper, we investigate the dilemma of safety and opponent exploitation. We present a new real-time search framework that smoothly interpolates between the two extremes of strategy search, hence unifying safe search and opponent exploitation. We provide our new strategy with a theoretically upper-bounded exploitability and a lower-bounded reward against an opponent. Our method can exploit the weaknesses of its opponent without significantly increasing its exploitability. Empirical results show that our method significantly outperforms NE baselines when opponents play non-NE strategies while keeping exploitability low at the same time.
Reject
The paper tackles the problem of finding strategies that, unlike Nash equilibrium strategies, which are safe but not exploitative, are both safe (non-exploitable, to some extent) and able to exploit the opponent. The proposed solution is a convex combination of exploitation and safety that is efficient to compute. Overall, the paper is borderline. Given that the objective and its analysis are not especially surprising, a lot rides on the thoroughness of the empirical results, which could be improved.
train
[ "UJhT0-DimIO", "0TOENXbzXl9", "UictqAO3LG7", "L9Z9EJMBuGX", "EYyC0m2rfVx", "fakrBKvWHbv", "01gVkqJPqM6", "EIQsfwMiOd", "G0krHZR2zH-", "1XvTdxzZXhE", "BwglrFdp12", "MKmkzYby9k4", "eBkpoDZGSxy", "gdUGObstJbv", "yf4OrK4esBh", "l5WjDHL_mfk", "e8S9pvak_Nx", "fV53FrEUl5", "hflQY-eXj5t"...
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " \n\nWe are grateful to all reviewers for their beneficial comments and advice on our paper. Especially, reviewer SWZo and f5FT propose an interesting method that is closely related to our algorithm SES. The method, which we call EXP-STRATEGY, builds on the same gadget game of SES (Figure 1 in this paper). It is d...
[ -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 3 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "iclr_2022_VwSHZgruNEc", "UictqAO3LG7", "EYyC0m2rfVx", "iclr_2022_VwSHZgruNEc", "fV53FrEUl5", "L9Z9EJMBuGX", "l5WjDHL_mfk", "BwglrFdp12", "MKmkzYby9k4", "BwglrFdp12", "hflQY-eXj5t", "eBkpoDZGSxy", "L9Z9EJMBuGX", "yf4OrK4esBh", "fV53FrEUl5", "e8S9pvak_Nx", "iclr_2022_VwSHZgruNEc", "...
iclr_2022_aNCZ8151BjY
Design and Evaluation for Robust Continual Learning
Continual learning is the ability to learn from new experiences without forgetting previous experiences. Different continual learning methods are each motivated by their own interpretation of the continual learning scenario, resulting in a wide variety of experimental protocols, which hinders understanding and comparison of results. Existing works emphasize differences in accuracy without considering the effects of experimental settings. However, understanding the effects of experimental assumptions is the most crucial part of any evaluation, as the experimental protocol may supply implicit information. We propose six rules as a guideline for experimental design and execution to conduct robust continual learning evaluation and support a better understanding of the methods. Using these rules, we demonstrate the importance of experimental choices regarding the sequence of incoming data and the sequence of the task oracle. Even when task oracle-based methods are desired, the rules can guide experimental design to support better evaluation and understanding of continual learning methods. Consistent application of these rules in evaluating continual learning methods makes explicit the effect and validity of many assumptions, thereby avoiding misleading conclusions.
Reject
This paper is a scholarly examination of how to conduct continual learning evaluations, proposing six rules that in large part synthesize work from other papers. While there is certainly scholarly benefit to such an exploration, all reviewers believe that the contribution is not substantial enough in its current form to warrant acceptance. It is certainly true that not all continual learning papers follow all of the guidelines/rules for evaluation, and consequently, papers such as this are useful for improving the scientific process. However, the contribution needs to be substantially deepened, including more extensive and in-depth experiments with novel insights as described in the reviews, before the paper is ready for publication.
train
[ "OsPfKZvUIXt", "cxfutG2-a_", "lNykhpwMywr", "qK7nCY103SP", "4cA8z7pC6CU", "cd9D1N4H9tT", "1xogSOhD2uD", "VN7BCF01TZk", "kWf1sZ8TPF8", "RznfkKRXU8", "v_ztqP8_enp", "0DpvA_lwkYA", "-UhtCI0n8L" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for taking the time to write a proper rebuttal.\nI tend to agree with most of the points and I think that continual-learning papers attempting to change the status-quo for the better are important.\nI, however, still think the contribution is not important enough in its current state to merit a ICLR public...
[ -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, 3, 1, 5 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 5, 2, 3 ]
[ "kWf1sZ8TPF8", "qK7nCY103SP", "iclr_2022_aNCZ8151BjY", "4cA8z7pC6CU", "lNykhpwMywr", "-UhtCI0n8L", "VN7BCF01TZk", "0DpvA_lwkYA", "RznfkKRXU8", "v_ztqP8_enp", "iclr_2022_aNCZ8151BjY", "iclr_2022_aNCZ8151BjY", "iclr_2022_aNCZ8151BjY" ]
iclr_2022_m716e-0clj
Communicate Then Adapt: An Effective Decentralized Adaptive Method for Deep Training
Decentralized adaptive gradient methods, in which each node averages only with its neighbors, are critical for saving communication and wall-clock training time in deep learning tasks. While different in their concrete recursions, existing decentralized adaptive methods share the same algorithmic structure: each node scales its gradient with information from the past squared gradients (which is referred to as the adaptive step) before or while it communicates with neighbors. In this paper, we identify the limitation of such an adapt-then/while-communicate structure: it makes the developed algorithms highly sensitive to heterogeneous data distributions, and hence causes their limiting points to deviate from the stationary solution. To overcome this limitation, we propose an effective decentralized adaptive method with a communicate-then-adapt structure, in which each node conducts the adaptive step after finishing the neighborhood communications. The new method is theoretically guaranteed to approach the stationary solution in the non-convex scenario. Experimental results on a variety of CV/NLP tasks show that our method has a clear superiority over other existing decentralized adaptive methods.
Reject
The paper proposes a "communicate-then-adapt" framework for decentralized optimization, with both theoretical and empirical analysis. The reviewers' main concern is the theoretical comparison with prior methods like GT-DAdam. GT-DAdam's convergence to a stationary point seems to be faster than that of the proposed method in the important non-convex setting. The reviewers are not convinced by the strong claim that "communicate-then-adapt" is better than "adapt-then-communicate", as such an "adapt-then-communicate" method can also achieve the same or better rates, possibly with less hyper-parameter tuning. I would suggest that the authors make a more thorough comparison with related methods.
train
[ "pPSbJmuwHk", "03kKHvTJKwG", "qyDSvrTSFbh", "KdE490BuDW6", "FRv8ZEQC1XK", "cCxQTbpZLH", "qRb-dOJDoYt", "jx_qC-8zoE7", "_5FmE3WVYn", "ii1zjE1Wqm", "S4v6d2L6-U3", "BoE9sPjZc2O", "fxoLQHWUhzd", "2wBHIQ7np2s", "1yqpuiuAcZM", "RS1QF62ZjMk" ]
[ "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " **[Q2-2. Influence of $\\nu$ on DAdam]** Since DAdam is an immediate variant of DSGD, the influence of $\\nu$ on DAdam will be similar to that on DSGD. After imposing the hyper-parameter $\\nu$ to DAdam, we achieve the new recursion \n\n$\nx^{(t+1)} = ((1-\\nu) I+\\nu W) x^{(t)} - \\gamma (H^{(t)} + \\epsilon I )...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 8, 5, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 5 ]
[ "1yqpuiuAcZM", "iclr_2022_m716e-0clj", "KdE490BuDW6", "FRv8ZEQC1XK", "RS1QF62ZjMk", "RS1QF62ZjMk", "RS1QF62ZjMk", "fxoLQHWUhzd", "1yqpuiuAcZM", "2wBHIQ7np2s", "fxoLQHWUhzd", "fxoLQHWUhzd", "iclr_2022_m716e-0clj", "iclr_2022_m716e-0clj", "iclr_2022_m716e-0clj", "iclr_2022_m716e-0clj" ]
iclr_2022_L_sHGieq1D
Adversarial Style Augmentation for Domain Generalized Urban-Scene Segmentation
In this paper, we consider the problem of domain generalization in semantic segmentation, which aims to learn a robust model using only labeled synthetic (source) data. The model is expected to perform well on unseen real (target) domains. Our study finds that image style variation can largely influence the model's performance and that the style features can be well represented by the channel-wise mean and standard deviation of images. Inspired by this, we propose a novel adversarial style augmentation (AdvStyle) approach, which can dynamically generate hard stylized images during training and thus can effectively prevent the model from overfitting to the source domain. Specifically, AdvStyle regards the style feature as a learnable parameter and updates it by adversarial training. The learned adversarial style feature is used to construct an adversarial image for robust model training. AdvStyle is easy to implement and can be readily applied to different models. Experiments on two synthetic-to-real semantic segmentation benchmarks demonstrate that AdvStyle can significantly improve the model performance on unseen real domains and show that we can achieve the state of the art. Moreover, AdvStyle can be applied to domain generalized image classification and produces a clear improvement on the considered datasets.
Reject
This paper presents a domain generalization method for semantic segmentation. The model is trained on synthetic data (source) and is tested on unseen real datasets (target). The authors propose a simple data augmentation method, AdvStyle, generating unconstrained adversarial examples for training on the source domain. There was no consensus on the method among the reviewers, and several issues were raised. After the rebuttal and discussion, no one really changed her/his mind. The motivation for focusing only on driving scenes is still questionable. It could definitely be interesting to investigate further why it is not straightforward to obtain gains on other kinds of scenes. Finally, we encourage the authors to address the raised concerns regarding the discussion of previous works and the comparisons for a future publication.
val
[ "NZ4NAU_UeEh", "j6Kb8sxdJ5g", "dkCX6tN1o-W", "WCoYvJIHNpj", "JcBi8CWogi-", "KLAb37elPo4", "eWBwOYrXLQt", "yJQafPCdY3_", "5vk301nJVWP", "X9lSfuOf8yK", "tQsEKLutv69", "yHQ0ZKSgFis", "qFoG5cqc_ht" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewers,\n\nWe sincerely thank you for the valuable feedback and suggestions for improvement. According to your suggestions, we have carefully revised our manuscript. The modifications are summarized below.\n\n* Redraw Fig.1 (in Section 1, Page 2). Suggestion from Reviewer 38TQ.\n* Provide a discussion on ...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 5, 5, 8 ]
[ -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, 5, 3, 5 ]
[ "iclr_2022_L_sHGieq1D", "dkCX6tN1o-W", "iclr_2022_L_sHGieq1D", "tQsEKLutv69", "dkCX6tN1o-W", "tQsEKLutv69", "yHQ0ZKSgFis", "yHQ0ZKSgFis", "dkCX6tN1o-W", "qFoG5cqc_ht", "iclr_2022_L_sHGieq1D", "iclr_2022_L_sHGieq1D", "iclr_2022_L_sHGieq1D" ]
iclr_2022_0EL4vLgYKRW
Plan Better Amid Conservatism: Offline Multi-Agent Reinforcement Learning with Actor Rectification
The idea of conservatism has led to significant progress in offline reinforcement learning (RL), where an agent learns from pre-collected datasets. However, it is still an open question how to resolve offline RL in the more practical multi-agent setting, as many real-world scenarios involve interaction among multiple agents. Given the recent success of transferring online RL algorithms to the multi-agent setting, one may expect that offline RL algorithms will also transfer to multi-agent settings directly. Surprisingly, when conservatism-based algorithms are applied to the multi-agent setting, the performance degrades significantly with an increasing number of agents. Towards mitigating the degradation, we identify a key issue: the landscape of the value function can be non-concave, and policy gradient improvements are prone to local optima. Multiple agents exacerbate the problem since a suboptimal policy by any agent could lead to uncoordinated global failure. Following this intuition, we propose a simple yet effective method, \underline{O}ffline \underline{M}ulti-Agent RL with \underline{A}ctor \underline{R}ectification (OMAR), to tackle this critical challenge via an effective combination of first-order policy gradient and zeroth-order optimization methods for the actor to better optimize the conservative value function. Despite its simplicity, OMAR significantly outperforms strong baselines, achieving state-of-the-art performance in multi-agent continuous control benchmarks.
Reject
This paper makes a key observation that gradient-based methods become more likely to suffer from poor local optima in multi-agent reinforcement learning (MARL) as the number of agents grows, particularly in the offline setting. The paper proposes the use of a zeroth-order optimization method to avoid local optima. Specifically, it samples multiple actions and regularizes the policy to move closer to the best action among them. The use of such a zeroth-order method to avoid poor local optima is not particularly new, although the finding that it is effective in MARL and the empirical support are valuable. The main discussion point was the insufficiency of experimental support, and the additional experiments during the discussion have addressed the original concerns of the reviewers to some extent. Overall, given the limited novelty and insufficiency of support (either theoretical or empirical), the paper is slightly below the borderline.
train
[ "DJ-jwEj8eJS", "LVCTemrxEVg", "tenS5jIZjn7", "5ClXuXyOjwH", "c3UwGzICAPm", "KB25wmg_IFE", "W3qS1viEGZg", "SXWr5rFkpwS", "uBjJ8lp34V4", "599VyOQ0Zld", "8iw8KnAMb56", "PyRuMVpe1zq", "XiLtKeRWny7", "twL_QKjWwpn", "Zp9SzlISTw-", "RjwHJMsJ25U", "sWe804NpBo" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a simple yet effective multi-agent offline reinforcement learning algorithm, called Offline Multi-Agent RL with Actor Rectification, which combines CQL (first-order optimization) and evolutionary algorithms (zero-order optimization) to enhance the applicability of the CQL algorithm on multi-age...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "iclr_2022_0EL4vLgYKRW", "PyRuMVpe1zq", "Zp9SzlISTw-", "RjwHJMsJ25U", "DJ-jwEj8eJS", "sWe804NpBo", "Zp9SzlISTw-", "DJ-jwEj8eJS", "XiLtKeRWny7", "iclr_2022_0EL4vLgYKRW", "sWe804NpBo", "SXWr5rFkpwS", "RjwHJMsJ25U", "iclr_2022_0EL4vLgYKRW", "iclr_2022_0EL4vLgYKRW", "iclr_2022_0EL4vLgYKRW"...
iclr_2022_JXSZuWSPH85
Deep Inverse Reinforcement Learning via Adversarial One-Class Classification
Traditional inverse reinforcement learning (IRL) methods require a loop to find the optimal policy for each reward update (called an inner loop), resulting in very time-consuming reward estimation. In contrast, classification-based IRL methods, which have been studied recently, do not require an inner loop and estimate rewards quickly, although it is difficult to prepare an appropriate baseline corresponding to the expert trajectory. In this study, we introduce adversarial one-class classification into the classification-based IRL framework and consequently develop a novel IRL method that requires only expert trajectories. We experimentally verify that the developed method can achieve the same performance as existing methods.
Reject
This paper studies the problem of inverse reinforcement learning by relying only on demonstrations and no interaction (like imitation learning). The reviewers liked the premise but had major concerns with the evaluation and baselines. The paper initially received reviews tending toward rejection. One of the questions was about a missing behavior cloning (BC) baseline, which the authors added in the rebuttal. But the BC baseline seems to be really competitive (in fact, better in 3 out of 4 envs) compared to the proposed approach. In conclusion, all reviewers still believed that their concerns regarding insufficient evidence justifying the approach and missing comparisons to other prior work still stand. The AC agrees with the reviewers' consensus that the paper is not yet ready for acceptance.
train
[ "iBIyNPFInxn", "87AY6Gft6z", "bYseRuHxeiF", "OfOQk959uZv", "xHdoBPudlnG", "8Hhga-ITu-" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "The paper proposes a state-only offline ILR algorithm (learning reward function), SOLO-IRL, by reducing IRL to adversarial one-class classification. Compared to most existing ILR algorithms, the proposed algorithm is more efficient and requires fewer assumptions. It does not require solving RL problems in the inn...
[ 5, 3, -1, -1, -1, 6 ]
[ 4, 3, -1, -1, -1, 2 ]
[ "iclr_2022_JXSZuWSPH85", "iclr_2022_JXSZuWSPH85", "iBIyNPFInxn", "87AY6Gft6z", "8Hhga-ITu-", "iclr_2022_JXSZuWSPH85" ]
iclr_2022_0SiVrAfIxOe
Closed-Loop Control of Additive Manufacturing via Reinforcement Learning
Additive manufacturing suffers from imperfections in hardware control and material consistency. As a result, the deposition of a large range of materials requires on-the-fly adjustment of process parameters. Unfortunately, learning in-process control is challenging. The deposition parameters are complex and highly coupled, artifacts occur after long time horizons, available simulators lack predictive power, and learning on hardware is intractable. In this work, we demonstrate the feasibility of learning a closed-loop control policy for additive manufacturing. To achieve this goal, we assume that the perception of a deposition device is limited and can capture the process only qualitatively. We leverage this assumption to formulate an efficient numerical model that explicitly includes printing imperfections. We further show that in combination with reinforcement learning, our model can be used to discover control policies that outperform state-of-the-art controllers. Furthermore, the recovered policies have a minimal sim-to-real gap. We showcase this by implementing a first-of-its-kind self-correcting printer.
Reject
All reviewers agreed that the paper contains interesting experiments. However, as this paper is a systems paper without much algorithmic contribution, all reviewers felt that the paper fell short in terms of describing the results, has too many unsupported claims, and leaves it unclear how the presented results transfer to slightly different domains. I therefore agree with the reviewers and recommend rejection of the paper.
train
[ "dG_81WbDny0", "_LkaL8knDY8", "f7KnTomFL9s", "J8DUJHRQJL", "nzfuIzvQUpL", "3WZmdLUTMmE", "bMq0PMZ_gW", "u-WtifEau7X", "ZmI0MqFPyUa", "xJeQOta150m", "vfJBvTMmjT", "6RumO89--pm", "7Rx81IrZSt" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you to the authors for the clarifications, and for the restructuring/changes to the paper in response to reviewer comments.\n\nHowever, I will agree with the other reviewers and leave my score unchanged. As has been stated previously, this is an interesting paper and I encourage the authors to continue this...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "xJeQOta150m", "nzfuIzvQUpL", "u-WtifEau7X", "iclr_2022_0SiVrAfIxOe", "bMq0PMZ_gW", "bMq0PMZ_gW", "vfJBvTMmjT", "ZmI0MqFPyUa", "7Rx81IrZSt", "6RumO89--pm", "iclr_2022_0SiVrAfIxOe", "iclr_2022_0SiVrAfIxOe", "iclr_2022_0SiVrAfIxOe" ]
iclr_2022_ajIC9wlTd52
Learning to Generalize Compositionally by Transferring Across Semantic Parsing Tasks
Neural network models often generalize poorly to mismatched domains or distributions. In NLP, this issue arises in particular when models are expected to generalize compositionally, that is, to novel combinations of familiar words and constructions. We investigate learning representations that facilitate transfer learning from one compositional task to another: the representation and the task-specific layers of the models are strategically trained differently on a pre-finetuning task such that they generalize well on mismatched splits that require compositionality. We apply this method to semantic parsing, using three very different datasets, COGS, GeoQuery and SCAN, used alternately as the pre-finetuning and target task. Our method significantly improves compositional generalization over baselines on the test set of the target task, which is held out during fine-tuning. Ablation studies characterize the utility of the major steps in the proposed algorithm and support our hypothesis.
Reject
The authors attempt to tackle the problem of compositional generalization, i.e., the problem of generalizing to novel combinations of familiar words or structures. The authors propose a transfer learning strategy based on pretraining language models. The idea is to introduce a pre-finetuning task where a model is first trained on compositional train-test splits from other datasets, before transferring to fine-tuning on the training data from the target dataset. Although the technique brings some improvements, and the authors do their best to address the reviewers' questions, it is still unclear: a) Why the method should work in principle, whether there is a theoretical backing, and how it formally relates to meta-learning b) How the approach compares to data augmentation methods, since pre-finetuning requires more data, albeit from a different dataset. See for example: https://openreview.net/forum?id=PS3IMnScugk c) The whole approach would be more convincing if the authors could articulate *how* their method renders a model more robust to distribution shifts (e.g., based on the COGS results it does not help structural generalization; do the gains come from lexical generalization?) d) It would also be interesting to see whether this method works on larger-scale or more realistic datasets like CFQ, ATIS, or machine translation https://arxiv.org/pdf/1912.09713.pdf https://arxiv.org/abs/2010.11818
train
[ "4DQm7HTXFQx", "BdzBO_caMD0", "uZ9wW8-Lg7R", "UIADjw1qZ1S", "OYsoTU08TIi", "uknpBJMWPlJ", "255bMWN3Tmu", "cVWNdToNf5" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a training procedure for encoder--decoder models (applied to semantic parsing) which aims to improve the models' ability to compositionally generalize (successfully handle novel combinations of words and structures, where combinations were not seen in training). The approach relies on pre-finet...
[ 6, -1, -1, -1, -1, -1, 5, 6 ]
[ 4, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2022_ajIC9wlTd52", "iclr_2022_ajIC9wlTd52", "4DQm7HTXFQx", "cVWNdToNf5", "255bMWN3Tmu", "255bMWN3Tmu", "iclr_2022_ajIC9wlTd52", "iclr_2022_ajIC9wlTd52" ]
iclr_2022_1ch9DLxqF-
Dominant Datapoints and the Block Structure Phenomenon in Neural Network Hidden Representations
Recent work has uncovered a striking phenomenon in large-capacity neural networks: they contain blocks of contiguous hidden layers with highly similar representations. This block structure has two seemingly contradictory properties: on the one hand, its constituent layers have highly similar dominant first principal components (PCs), but on the other hand, their representations, and their common first PC, are highly dissimilar across different random seeds. Our work seeks to reconcile these discrepant properties by investigating the origin of the block structure in relation to the data and training methods. By analyzing properties of the dominant PCs, we find that the block structure arises from dominant datapoints — a small group of examples that share similar image statistics (e.g. background color). However, the set of dominant datapoints, and the precise shared image statistic, can vary across random seeds. Thus, the block structure reflects meaningful dataset statistics, but is simultaneously unique to each model. Through studying hidden layer activations and creating synthetic datapoints, we demonstrate that these simple image statistics dominate the representational geometry of the layers inside the block structure. We also explore how the phenomenon evolves through training, finding that the block structure takes shape early in training, but the underlying representations and the corresponding dominant datapoints continue to change substantially. Finally, we study the interplay between the block structure and different training mechanisms, introducing a targeted intervention to eliminate the block structure, as well as examining the effects of pretraining and Shake-Shake regularization.
Reject
This paper experimentally shows that the block structure of similarities between layers typically appears across different models and that such a structure is mainly induced by a small set of dominant datapoints. Moreover, the dominant datapoints are not just noisy artifacts but represent some common image patterns such as background colors. The authors also found that the block structure can easily be made to disappear by removing the dominant datapoints, and they proposed methods to suppress the block structure by regularizing PCs, Shake-Shake regularization, and transfer learning. This paper gives thorough experiments that clarify the mechanism behind the appearance of a block structure. However, its significance is a bit minor. Indeed, the block structure does not affect the generalization ability very much, and it can be removed without changing the predictive performance. I agree that investigating the behavior of the internal representation is of scientific interest, as the authors pointed out, but on the other hand, its significance is not convincing. Indeed, this concern was pointed out by several reviewers. Next, the main focus of this study is the setting of a large model with a small dataset. It is not clear whether the phenomenon is universal across different model sizes relative to the dataset size. There is no theoretical investigation (for example, the block-structure phenomenon could be explained by high-dimensional random matrix theory). In summary, this paper investigates a somewhat interesting phenomenon, but its significance is not convincing. Thus, it would be a bit below the threshold for acceptance.
train
[ "GQP35qRYnFw", "FjmMdk9DAeH", "WgttAJ2z-c", "uSJYvbC4UKN", "U-NOHPQHn-9", "SVuDw4rtx5O", "T-awTE0Ayj", "j91ms_yM4kN", "Z6AiLvMABLx", "BFzzTsO-Cim", "-qa_jwLBuef", "HCRvJYcBwoU", "VnR8UHAUsRU", "yX1Cqh1-SN", "w_6J-fIwgOW" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for their comments. We agree that the relationship between the block structure and dominant datapoints is straightforward. To restate what is shown in the paper: Across many different layers, activations of dominant datapoints point in similar directions and have large norms. Kernels that ar...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6, 3, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 4, 3 ]
[ "FjmMdk9DAeH", "Z6AiLvMABLx", "BFzzTsO-Cim", "VnR8UHAUsRU", "w_6J-fIwgOW", "w_6J-fIwgOW", "yX1Cqh1-SN", "HCRvJYcBwoU", "-qa_jwLBuef", "VnR8UHAUsRU", "iclr_2022_1ch9DLxqF-", "iclr_2022_1ch9DLxqF-", "iclr_2022_1ch9DLxqF-", "iclr_2022_1ch9DLxqF-", "iclr_2022_1ch9DLxqF-" ]
iclr_2022_qTTccuW4dja
AriEL: volume coding for sentence generation comparisons
Saving sequences of data to a point in a continuous space makes it difficult to retrieve them via random sampling. Mapping the input to a volume makes it easier, which is the strategy followed by Variational Autoencoders. However, optimizing for prediction and for smoothness forces them to trade off between the two. We analyze the ability of standard deep learning techniques to generate sentences through latent space sampling. We compare to AriEL, an entropic coding method to construct volumes without the need for extra loss terms. We benchmark on a toy grammar, to automatically evaluate the language learned and generated, and to find where it is stored in the latent space. Then, we benchmark on a dataset of human dialogues, using GPT-2 inside AriEL. Our results indicate that random access to stored information can be improved, since AriEL is able to generate a wider variety of correct language by randomly sampling the latent space. This supports the hypothesis that encoding information into volumes leads to improved retrieval of learned information with random sampling.
Reject
The paper proposes an entropic coding approach for sentence embedding. The reviewers have put considerable effort into reviewing. They generally feel the problem is important/interesting, but also found the paper difficult to understand. Thus, the authors are encouraged to thoroughly revise the paper according to the reviews provided, and another round of review is needed to better determine the merits of this paper.
test
[ "SobwUSxPDL3", "XUFJyLykqLt", "9C2WlEM4zP", "ak09ZR73lDI", "FsxTutM4Vg9", "7Z43vll8Qqd", "fM3EtDhItI", "5jgAu1CEIw6", "u38uWlLLRk" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " [My first concern is that the proposed method...] Still, Ariel is closer to a \"LM interface\" than an actual model like autoencoders and Transformers are. Is that a wrong impression?\n\n[My next biggest concern is that...] My point is that Ariel doesn't \"fix a flaw\" so much as \"do something completely differe...
[ -1, -1, -1, -1, -1, 3, 3, 3, 6 ]
[ -1, -1, -1, -1, -1, 3, 3, 4, 2 ]
[ "XUFJyLykqLt", "5jgAu1CEIw6", "fM3EtDhItI", "7Z43vll8Qqd", "u38uWlLLRk", "iclr_2022_qTTccuW4dja", "iclr_2022_qTTccuW4dja", "iclr_2022_qTTccuW4dja", "iclr_2022_qTTccuW4dja" ]
iclr_2022_DaQVj6qY2-s
Understanding Graph Learning with Local Intrinsic Dimensionality
Many real-world problems can be formulated as graphs and solved by graph learning techniques. Whilst the rise of Graph Neural Networks (GNNs) has greatly advanced graph learning, there is still a lack of understanding of the intrinsic properties of graph data and their impact on graph learning. In this paper, we narrow the gap by studying the intrinsic dimension of graphs with \emph{Local Intrinsic Dimensionality (LID)}. The LID of a graph measures the expansion rate of the graph as the local neighborhood size of the nodes grows. With LID, we estimate and analyze the intrinsic dimensions of node features, graph structure and representations learned by GNNs. We first show that feature LID (FLID) and structure LID (SLID) are well correlated with the complexity of synthetic graphs. Following this, we conduct a comprehensive analysis of 12 popular graph datasets of diverse categories and show that 1) graphs of lower FLIDs and SLIDs are generally easier to learn; 2) GNNs learn by mapping graphs (feature and structure together) to low-dimensional manifolds that are of much lower representation LIDs (RLIDs), i.e., RLID $\ll$ FLID/SLID; and 3) when the layers go deep in message-passing based GNNs, the underlying graph will converge to a complete graph of $\operatorname{SLID}=0.5$, losing structural information and causing the over-smoothing problem.
Reject
The paper investigates the interesting problem of the local intrinsic dimension (LID) of graphs, and interprets GNN learning through Feature LID (FLID), Structure LID (SLID), and Representation LID (RLID). The concepts are novel, but the paper needs better insights into how LID can improve graph learning and stronger empirical evidence to support its claims.
train
[ "Yth8Pj3B4U", "v7odCrv-NKQ", "tLSUSD5JB7n", "6lsOmWBmq9R", "yMDKIuovwm", "GBV66u6s4Si", "zUDQfsDKk64", "nHlDQg59GI", "q2-yt2BPyQe", "xySdsCk25TR" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " For the added Section 7 (improving GNN performance using RLID): \nIs there a relationship between RLID and final accuracy? The authors should give some analysis as in Section 6.1, which will make the RLID regularizer more reasonable. \n\n\nFor my previous questions: \nQ1. The explanation for using the average...
[ -1, -1, -1, -1, -1, -1, -1, 1, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, 5, 4, 3 ]
[ "GBV66u6s4Si", "nHlDQg59GI", "xySdsCk25TR", "nHlDQg59GI", "nHlDQg59GI", "nHlDQg59GI", "q2-yt2BPyQe", "iclr_2022_DaQVj6qY2-s", "iclr_2022_DaQVj6qY2-s", "iclr_2022_DaQVj6qY2-s" ]
iclr_2022_LNmNWds-q-J
3D Pre-training improves GNNs for Molecular Property Prediction
Molecular property prediction is one of the fastest-growing applications of deep learning with critical real-world impacts. Including 3D molecular structure as input to learned models improves their performance for many molecular tasks. However, this information is infeasible to compute at the scale required by several real-world applications. We propose pre-training a model to reason about the geometry of molecules given only their 2D molecular graphs. Using methods from self-supervised learning, we maximize the mutual information between 3D summary vectors and the representations of a Graph Neural Network (GNN) such that they contain latent 3D information. During fine-tuning on molecules with unknown geometry, the GNN still generates implicit 3D information and can use it to improve downstream tasks. We show that 3D pre-training provides significant improvements for a wide range of properties, such as a 22% average MAE reduction on eight quantum mechanical properties. Moreover, the learned representations can be effectively transferred between datasets in different molecular spaces.
Reject
This paper aims to use pre-training to bridge the gap in performance between 2D GNNs and 3D GNNs. Specifically, during pretraining, it trains both 2D GNNs and 3D GNNs on data equipped with 3D geometry to maximize the mutual information between the 2D GNN representation and the 3D GNN representation. The proposed approach is interesting and novel, and the paper presents some promising results showing that the pre-training does provide some benefits for downstream tasks where 3D geometry information is not available, in comparison to several other baseline pretraining methods. While the reviewers agree that property prediction with only the 2D graph is a practically important setting for high-throughput screening, there are concerns about whether the current set of results paints a clear picture of the benefits and superiority of the proposed method over alternatives (e.g., vs. Conf-gen), even after the revision. This is not due to a lack of results, but more of a presentation issue where results are not organized and discussed clearly enough to provide a coherent story. We do see clear and strong potential for this paper, but it needs a careful rewrite/re-organization to tease out the key messages and how the experiments support them.
test
[ "HpoLiLgd5Tt", "Gt1Me-4-GfD", "cvrK8xplPb8", "bkwtJMHgtiE", "FihYsXdSEg2", "7Dxg4TgCMjz", "Bkvh9ohXoCD", "9piGxT7Sfje", "DzbX5XU6BY-", "c2HboeURPs7", "ZGBmRSJDf40", "z6lO7F-24bA", "WRCnKw5qsNi", "FjA3z85O5VK", "23933zHJnuM", "cthkpaZXSh" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to note that \"Conf-gen\" is another 3D pre-training approach proposed in our paper. We see its good performance on some of the OGB datasets as further evidence that pre-training with 3D information provides significant improvements and is a valuable approach.\n\nWhen comparing 3D Infomax to Conf-ge...
[ -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, 8 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 2 ]
[ "Gt1Me-4-GfD", "7Dxg4TgCMjz", "iclr_2022_LNmNWds-q-J", "23933zHJnuM", "bkwtJMHgtiE", "c2HboeURPs7", "iclr_2022_LNmNWds-q-J", "DzbX5XU6BY-", "ZGBmRSJDf40", "z6lO7F-24bA", "FjA3z85O5VK", "cvrK8xplPb8", "cthkpaZXSh", "iclr_2022_LNmNWds-q-J", "iclr_2022_LNmNWds-q-J", "iclr_2022_LNmNWds-q-J...
iclr_2022_yjxVspo7gXt
Scaling Fair Learning to Hundreds of Intersectional Groups
Bias mitigation algorithms aim to reduce the performance disparity between different protected groups. Existing techniques focus on settings where there is a small number of protected groups arising from a single protected attribute, such as skin color, gender or age. In real-world applications, however, there are multiple protected attributes yielding a large number of intersectional protected groups. These intersectional groups are particularly prone to severe underrepresentation in datasets. We conduct the first thorough empirical analysis of how existing bias mitigation methods scale to this setting, using large-scale datasets including the ImageNet People Subtree and CelebA. We find that as more protected attributes are introduced to a task, it becomes more important to leverage the protected attribute labels during training to promote fairness. We also find that the use of knowledge distillation, in conjunction with group-specific models, can help scale existing fair learning methods to hundreds of protected intersectional groups and reduce bias. We show on ImageNet's People Subtree that combining these insights can further reduce the bias amplification of fair learning algorithms by 15% -- a surprising reduction given that the dataset has 196 protected groups but fewer than 10% of the training dataset has protected attribute labels.
Reject
The manuscript performs an empirical analysis of existing bias mitigation methods on two large datasets, CelebA and ImageNet People Subtree, where there are multiple sensitive attributes and some unavailable sensitive attribute labels. The results show that existing methods can mitigate intersectional bias at scale but that unlabeled mitigation methods generalize poorly. The manuscript further proposes a knowledge distillation approach which can augment other labeled mitigation approaches. On the positive side, the manuscript studies an important problem: intersectional subgroups in deep learning methods. Reviewers acknowledged that an empirical study of this problem is an opportunity to make a contribution, as it can highlight previously unknown issues. There are, however, several major concerns, including: 1. The methodological contribution (knowledge distillation) is under-developed, while the empirical investigation is interesting but could be further developed; 2. The fairness metrics adopted in this manuscript need to be clarified; 3. A discussion of the hyperparameter tuning is missing, possibly involving a fairness-accuracy tradeoff; 4. The claimed O(1) complexity for the knowledge distillation approach is implausible because it assumes the availability of G group-specific models. The rebuttal clarified that the claim is only about inference complexity, and the approach does not improve the training complexity. Reviewers also concluded that, while the empirical analysis is interesting, the results on CelebA are of limited use because the sensitive attributes are "purely illustrative." It is not clear that the insights from these illustrative intersectional groups (e.g. big nose & attractive) will hold for groups that are meaningful in a fairness sense.
train
[ "t5Ohke6MKBK", "rRBJOd9m3xc", "G1b-nXjEos5", "KpJChs6iXy8", "lU5TaaRXiS", "LnC1Vh8qupu", "Qr1mEZkEJmq", "-pk82BCzj6a", "-9R9Gp_Y6xY" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper conducts an empirical analysis of existing bias mitigation methods on two large datasets CelebA and ImageNet where there are multiple sensitive attributes and some unavailable protected labels. The results show the existing can mitigate intersectional bias at scale but the unlabeled methods generalize p...
[ 5, -1, -1, -1, -1, -1, 5, 6, 6 ]
[ 3, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "iclr_2022_yjxVspo7gXt", "-9R9Gp_Y6xY", "iclr_2022_yjxVspo7gXt", "Qr1mEZkEJmq", "t5Ohke6MKBK", "-pk82BCzj6a", "iclr_2022_yjxVspo7gXt", "iclr_2022_yjxVspo7gXt", "iclr_2022_yjxVspo7gXt" ]
iclr_2022_gfUPGPMxB7E
Data Sharing without Rewards in Multi-Task Offline Reinforcement Learning
Offline reinforcement learning (RL) bears the promise to learn effective control policies from static datasets but is thus far unable to learn from large databases of heterogeneous experience. The multi-task version of offline RL enables the possibility of learning a single policy that can tackle multiple tasks and allows the algorithm to share offline data across tasks. Recent works indicate that sharing data between tasks can be highly beneficial in multi-task learning. However, these benefits come at a cost -- for data to be shared between tasks, each transition must be annotated with reward labels corresponding to other tasks. This is particularly expensive and unscalable, since the manual effort in annotating reward grows quadratically with the number of tasks. Can we retain the benefits of data sharing without requiring reward relabeling for every task pair? In this paper, we show that, perhaps surprisingly, under a binary-reward assumption, simply utilizing data from other tasks with constant reward labels can not only provide a substantial improvement over only using the single-task data and previously proposed success classifiers, but it can also reach comparable performance to baselines that take advantage of the oracle multi-task reward information. We also show that this performance can be further improved by selectively deciding which transitions to share, again without introducing any additional models or classifiers. We discuss how these approaches relate to each other and baseline strategies under various assumptions on the dataset. Our empirical results show that it leads to improved performance across a range of different multi-task offline RL scenarios, including robotic manipulation from visual inputs and ant-maze navigation.
Reject
Although sharing data between tasks benefits multitask RL, this requires that rewards be relabeled across tasks. This paper shows that, for binary rewards, directly reusing data from other tasks with constant reward relabels is effective, and the paper develops a method around this idea that is highly effective. The reviewers found that the idea and execution were impressive, that the paper was well written, and that the empirical analysis was convincing. In response to concerns in the preliminary reviews about certain shortcomings in the empirical analysis and some lack of theoretical analysis, the authors provided substantial revisions to the paper. Due to some lack of reviewer response to the discussion, this meta-reviewer examined whether those revisions were sufficient to address the reviewers' concerns. The authors did a good job in providing the requested improvements and the analysis is stronger, but remaining similarities to existing methods (CDS) means that this paper still remains borderline. These same concerns were also shared by reviewers that continued to engage in discussion with the authors. To remedy this, the authors are encouraged to better and more substantially address differences with prior work in the writing and motivation throughout the entire paper. In addition, although space is a concern, it would be beneficial to integrate the high-level takeaways from the new analyses in the appendices into the main paper.
train
[ "YCYmvWWVdR2", "gvLvZBqjp7", "qFE3s0bjN9", "6Ocv4VXsji3", "OGRBq3PF-oH", "jHRCm8_oXX6", "oSMpkCpJegW", "F2G3-wYFYvC", "uO4X5KJZAul", "6XM0uPJt9tp", "wE80ngfHCM", "6CYtORozW0t", "M0TLS9hRsHm", "JQp-u2WhK2s", "YHz-b95gqK", "AsXjLR6kdkL", "4p-GDjZKYGj", "ycvVvGVaoLc", "XlIvSLdiC6S",...
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer",...
[ " Thank you for your reply and raising the score! \n\nOn an average of all our experiments, we find that CUDS and UDS only perform comparable to CDS and Sharing All with true rewards, and do not outperform these significantly. That said, we agree with the reviewer that this result is surprising. We never claimed th...
[ -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "OGRBq3PF-oH", "qFE3s0bjN9", "VmUok4zfgjK", "iclr_2022_gfUPGPMxB7E", "6XM0uPJt9tp", "wE80ngfHCM", "M0TLS9hRsHm", "JQp-u2WhK2s", "iclr_2022_gfUPGPMxB7E", "wE80ngfHCM", "AsXjLR6kdkL", "6Ocv4VXsji3", "w6DnidgCKar", "tIlZUTX6xp", "iclr_2022_gfUPGPMxB7E", "4p-GDjZKYGj", "6Ocv4VXsji3", "...
iclr_2022_VZC5Lzyl0le
Automated Mobile Attention KPConv Networks via A Wide & Deep Predictor
Kernel Point Convolution (KPConv) achieves cutting-edge performance on 3D point cloud applications. Unfortunately, the large size of the KPConv network limits its usage in mobile scenarios. In addition, we observe that KPConv ignores the kernel relationship and treats each kernel point equally when formulating neighbor-kernel correlation via Euclidean distance. This leads to a weak representation power. To mitigate the above issues, we propose a module named Mobile Attention Kernel Point Convolution (MAKPConv) to improve the efficiency and quality of KPConv. MAKPConv employs a depthwise kernel to reduce resource consumption and re-calibrates the contribution of kernel points towards each neighbor point via Neighbor-Kernel attention to improve representation power. Furthermore, we capitalize on the Inverted Residual Bottleneck (IRB) to craft a design space and employ a predictor-based Neural Architecture Search (NAS) approach to automate the design of efficient 3D networks based on MAKPConv. To fully exploit the immense design space via an accurate predictor, we identify the importance of carrying out feature engineering on searchable features to improve neural architecture representations and propose a Wide & Deep Predictor to unify dense and sparse neural architecture representations for lower error in performance prediction. Experimental evaluations show that our NAS-crafted MAKPConv network uses 96% fewer parameters on 3D point cloud classification and segmentation benchmarks with better performance. Compared with the state-of-the-art NAS-crafted model SPVNAS, our NAS-crafted MAKPConv network achieves ~1% better mIOU with 83% fewer parameters and 52% fewer Multiply-Accumulates.
Reject
Four reviewers reviewed this manuscript: two found it borderline, leaning towards acceptance, while the other two scored it below the acceptance threshold. While the authors *identify the key challenges and bottlenecks in 3D point cloud model*, the two most positive reviewers note that the depthwise kernel and the attention mechanism (and similar tools) are well-known in the literature and that this work is *more of an engineering improvement than a technical contribution*, which erodes the novelty of the proposed idea on that front. While the authors noticed some discrepancies in the numbers quoted by Rev. 3, the model gains are nonetheless modest compared to other models. Overall, the feeling amongst the reviewers was that the presentation of NK attention could be further improved and that the paper uses very heavy machinery to achieve results comparable with SOTA. On this occasion, the manuscript is below the acceptance threshold, with even the borderline positive reviewers having doubts about clear-cut technical novelty.
train
[ "nNWK5e0e20g", "j7c_4YsGx_b", "DSjsIBzWMoo", "_MoBuvv5Saf", "_CjjdUyZb7P", "OUrp-Vd-DRZ", "pNvkCkRoPt", "UxzkpcHOJZm", "K0y4sjj2XmC", "SZI823Sibne", "r6Cb_FK1VD3", "sGDHIXFr_yq", "E-ImQHhu50", "KC9t3opM3oY", "PvgwPxE7mDD", "4Tv2tPmQn4", "F-5mxFhjFv", "F8txde1eCtN", "FDiJxWnPDog",...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", ...
[ " The authors would like to clarify that **we have absolutely no means of any intimidation**. If anything makes you misunderstand our intention, we apologize. \nThe authors would like to clarify the key misunderstandings that may have caused the extensive discussions here. \n\n- For comparison results with SimpleVi...
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 3, 2 ]
[ "j7c_4YsGx_b", "DSjsIBzWMoo", "_MoBuvv5Saf", "OUrp-Vd-DRZ", "r6Cb_FK1VD3", "pNvkCkRoPt", "UxzkpcHOJZm", "SZI823Sibne", "iclr_2022_VZC5Lzyl0le", "F-5mxFhjFv", "gnF_GCfbu15", "iclr_2022_VZC5Lzyl0le", "iclr_2022_VZC5Lzyl0le", "PvgwPxE7mDD", "F8txde1eCtN", "y91_FC7dRmT", "K0y4sjj2XmC", ...
iclr_2022_zKbMQ2NY1y
Aug-ILA: More Transferable Intermediate Level Attacks with Augmented References
An intriguing property of deep neural networks is that adversarial attacks can transfer across different models. Existing methods such as the Intermediate Level Attack (ILA) further improve black-box transferability by fine-tuning a reference adversarial attack, so as to maximize the perturbation on a pre-specified layer of the source model. In this paper, we revisit ILA and evaluate the effect of applying augmentation to the images before passing them to ILA. We start by looking into the effect of common image augmentation techniques and exploring novel augmentation with the aid of adversarial perturbations. Based on the observations, we propose Aug-ILA, an improved method that enhances the transferability of an existing attack under the ILA framework. Specifically, Aug-ILA has three main characteristics: typical image augmentation such as random cropping and resizing applied to all ILA inputs, reverse adversarial update on the clean image, and interpolation between two attacks on the reference image. Our experimental results show that Aug-ILA outperforms ILA and its subsequent variants, as well as state-of-the-art transfer-based attacks, by achieving $96.99\%$ and $87.84\%$ average attack success rates with perturbation budgets $0.05$ and $0.03$, respectively, on nine undefended models.
Reject
This paper develops a new method, named Augmented Intermediate Level Attack (Aug-ILA), to improve the transferability of black-box attacks. Specifically, the proposed Aug-ILA contains three key modules: image transformations, reverse-adversarial update, and attack interpolation. Overall, the reviewers think it is an interesting paper, but are concerned that the original ablations are not enough to support the effectiveness of the proposed method: strong baseline attacks and defense methods are missing, and only one dataset is considered. During the discussion period, the authors actively provided new results. However, Reviewer TcRw and Reviewer 2dLg are not fully convinced by the rebuttal, especially regarding 1) in these additional experiments, no comparison is provided with other SOTA attacks beyond ILA-based approaches; 2) Table 11 shows the proposed method even degrades (rather than improves) the performance of VNI-CT-FGSM on defense models; 3) the attack rate of the proposed method is sensitive to the selection of layers and therefore needs to be carefully tuned in experiments (which could lead to unfair comparisons to other attacks). These concerns are indeed legitimate, and should be addressed carefully before publication. I encourage the authors to incorporate all the reviewers' comments and make a stronger submission next time.
train
[ "9hX7w9-adHF", "x5diiyntoan", "PFZxs4Z7Kn", "QZjkSfSxk81", "2MilnuFi2lG", "Hef_S8xXdF", "N79SeqRG7Iq", "cZ0A1yUUWIh", "VP-OBKhGher", "hzFhgfPCLh1", "E6IxtuVG3d2", "icKkgyHVaN5", "HElz5vYaGT", "YPCMte9btvk" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " I have read the responses from the authors and the changes in the revised paper. The followings are my comments. \n\n\nWeakness (Main)\n\nWhen all the recent transferable attack methods published in top conferences are examined on defended models, generally, the authors are expected to do the same. Thus, I don’t ...
[ -1, 6, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6 ]
[ -1, 4, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "HElz5vYaGT", "iclr_2022_zKbMQ2NY1y", "icKkgyHVaN5", "iclr_2022_zKbMQ2NY1y", "QZjkSfSxk81", "iclr_2022_zKbMQ2NY1y", "QZjkSfSxk81", "QZjkSfSxk81", "HElz5vYaGT", "HElz5vYaGT", "YPCMte9btvk", "x5diiyntoan", "iclr_2022_zKbMQ2NY1y", "iclr_2022_zKbMQ2NY1y" ]
iclr_2022_t3E10H8UNz
Transferring Hierarchical Structure with Dual Meta Imitation Learning
Hierarchical Imitation Learning (HIL) is an effective way for robots to learn sub-skills from long-horizon unsegmented demonstrations. However, the learned hierarchical structure lacks a mechanism to transfer across multiple tasks or to new tasks, which forces them to learn from scratch when facing a new situation. Transferring and reorganizing modular sub-skills require a fast adaptation ability of the whole hierarchical structure. In this work, we propose Dual Meta Imitation Learning (DMIL), a hierarchical meta imitation learning method where the high-level network and sub-skills are iteratively meta-learned with model-agnostic meta-learning. DMIL uses the likelihood of state-action pairs from each sub-skill as the supervision for the high-level network adaptation, and uses the adapted high-level network to determine a different data set for each sub-skill's adaptation. We theoretically prove the convergence of the iterative training process of DMIL and establish the connection between DMIL and the Expectation-Maximization algorithm. Empirically, we achieve state-of-the-art few-shot imitation learning performance on the meta-world benchmark.
Reject
The paper proposes a hierarchical meta imitation learning framework for few-shot transfer in the context of long-horizon control tasks. Underlying the framework is a hierarchical adaptation of model-agnostic meta learning (MAML) that jointly learns the high-level policy together with the set of modular low-level policies (sub-skills), both of which are fine-tuned at test time based on a small number of demonstrations. Experimental evaluations on the meta-world benchmark as well as a kitchen environment benchmark compare the proposed framework with recent baselines. As several reviewers note, the problem of jointly learning modular policies together with the high-level policy for composing these sub-skills is both challenging and interesting to the robotics and learning communities. The manner by which the paper extends existing work in meta-learning (MAML) and hierarchical imitation learning is novel and technically sound. The reviewers raised some concerns, notably those regarding (1) the framework's sensitivity to various hyperparameter settings and its ability to generalize to other domains; (2) the merits of joint optimization over decoupled optimization of the sub-skills and high-level policy; and (3) the need for experiments/evaluations on different domains. The authors provided a detailed response to each of the reviewers that includes the addition of a different benchmark evaluation (the kitchen environment), new ablation studies, and updates to the text. After a thorough review, however, concerns remain regarding the reproducibility of the results, which call into question some of the key contributions that the paper claims to provide over the existing state-of-the-art. The authors are encouraged to provide a more balanced discussion of the contributions along with evidence to support reproducibility in any future version of the paper.
train
[ "T4C0wCjlZi-", "C532F0spVuB", "cpg1xld3GT0", "J23oEgmD948", "hB3NEGNUZ5X", "B7KLMAoxXHX", "cHApdJfJXZi", "y52bQkpv9Ih", "rgfO5rN-l2", "LGaiiV1uk9i", "-8wnwDlhh_M", "N-IMEBYrgnS", "AtBKduddrGk", "KzXy9hy0GiB", "GawkxGd0vEO", "vsdis4fMuDf", "8YRBM0fFB59", "UExpdrT58qw", "G9oPjxX887...
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thanks for your appreciation of our work. For the MLSH problem, in our paper, we are in an imitation learning scenario, so we use an offline variant of MLSH, which is introduced in detail in Appendix E.2 (at the bottom of page 21). We are sorry that we did not make this clear in the main text of this version. We ...
[ -1, -1, -1, 6, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6 ]
[ -1, -1, -1, 2, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "C532F0spVuB", "N-IMEBYrgnS", "B7KLMAoxXHX", "iclr_2022_t3E10H8UNz", "GawkxGd0vEO", "KzXy9hy0GiB", "iclr_2022_t3E10H8UNz", "UExpdrT58qw", "J23oEgmD948", "G9oPjxX887R", "cHApdJfJXZi", "G9oPjxX887R", "cHApdJfJXZi", "cHApdJfJXZi", "J23oEgmD948", "J23oEgmD948", "iclr_2022_t3E10H8UNz", ...
iclr_2022_s3V9I71JvkD
Offline Meta-Reinforcement Learning with Online Self-Supervision
Meta-reinforcement learning (RL) methods can meta-train policies that adapt to new tasks with orders of magnitude less data than standard RL, but meta-training itself is costly and time-consuming. If we can meta-train on offline data, then we can reuse the same static dataset, labeled once with rewards for different tasks, to meta-train policies that adapt to a variety of new tasks at meta-test time. Although this capability would make meta-RL a practical tool for real-world use, offline meta-RL presents additional challenges beyond online meta-RL or standard offline RL settings. Meta-RL learns an exploration strategy that collects data for adapting, and also meta-trains a policy that quickly adapts to data from a new task. Since this policy was meta-trained on a fixed, offline dataset, it might behave unpredictably when adapting to data collected by the learned exploration strategy, which differs systematically from the offline data and thus induces distributional shift. We propose a hybrid offline meta-RL algorithm, which uses offline data with rewards to meta-train an adaptive policy, and then collects additional unsupervised online data, without any reward labels to bridge this distribution shift. By not requiring reward labels for online collection, this data can be much cheaper to collect. We compare our method to prior work on offline meta-RL on simulated robot locomotion and manipulation tasks and find that using additional unsupervised online data collection leads to a dramatic improvement in the adaptive capabilities of the meta-trained policies, matching the performance of fully online meta-RL on a range of challenging domains that require generalization to new tasks.
Reject
The paper describes a new offline meta RL technique that addresses the distributional shift problem with a self-supervised online exploration phase where reward labels are not available. The framework is novel and interesting. The authors addressed many concerns of the reviewers. However, the additional experiments raised additional questions. For instance, why does meta-BC perform so well, even better than the proposed method without online data, while other baselines seem not to work at all? In the discussion, the reviewers expressed concerns about the experimental results in the case of changing dynamics. Those experiments are questionable since the proposed method only considers the reward information to deal with different dynamics. Finally, an important question regarding SMAC remains unanswered: how much does the proposed method depend on the quality of the offline dataset and the quality of the reward decoder? Overall, the work is promising and the authors are encouraged to continue their work by addressing the reviewers' concerns.
train
[ "JklVLivSgZ", "VHgPpXQNf0w", "nP6fqWeF1V", "ltZIFUZfzDM", "rUvCazfVQo", "uv7yGMinxWt", "HnhMvdVobqw", "7fDqWPcGvni", "0CNJBXEtZn2", "aNoNya82vd", "JEC795S4YGK", "BoEsr3GI93u", "8lcALIuc-Y", "RjwPWJZ6j2g", "mk8XHW4OL7j", "2cvfq96NKZF", "miwPTHjAB-8" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for answering the questions and acknowledge the constraint with resets but also pointing out the broader context in which this is still okay. I will keep my original score.", " Dear Reviewer,\n\nWe hope that you've had a chance to read our response. We would really appreciate a reply as to whether our re...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "BoEsr3GI93u", "miwPTHjAB-8", "rUvCazfVQo", "uv7yGMinxWt", "aNoNya82vd", "2cvfq96NKZF", "7fDqWPcGvni", "miwPTHjAB-8", "mk8XHW4OL7j", "JEC795S4YGK", "0CNJBXEtZn2", "8lcALIuc-Y", "RjwPWJZ6j2g", "iclr_2022_s3V9I71JvkD", "iclr_2022_s3V9I71JvkD", "iclr_2022_s3V9I71JvkD", "iclr_2022_s3V9I7...
iclr_2022_UQQgMRq58O
Understanding Generalized Label Smoothing when Learning with Noisy Labels
Label smoothing (LS) is an emerging learning paradigm that uses the positively weighted average of both the hard training labels and uniformly distributed soft labels. It was shown that LS serves as a regularizer for training data with hard labels and therefore improves the generalization of the model. Later it was reported that LS even helps with improving robustness when learning with noisy labels. However, we observe that the advantage of LS vanishes when we operate in a high label noise regime. Puzzled by the observation, we proceeded to discover that several proposed learning-with-noisy-labels solutions in the literature instead relate more closely to $\textit{negative label smoothing}$ (NLS), which is defined as using a negative weight to combine the hard and soft labels! We show that NLS differs substantially from LS in their achieved model confidence. To differentiate the two cases, we will call LS positive label smoothing (PLS), and this paper unifies PLS and NLS into $\textit{generalized label smoothing}$ (GLS). We provide an understanding of the properties of GLS when learning with noisy labels. Among other established properties, we theoretically show that NLS is more beneficial when the label noise rates are high. We provide extensive experimental results on multiple benchmarks to support our findings too.
Reject
This paper studies generalized label smoothing (GLS), which unifies positive label smoothing (PLS) and negative label smoothing (NLS), and studies its connections to existing loss functions. It also shows the benefit of NLS in the high-noise regime. Although the reviewers acknowledge that the idea of NLS in this paper is interesting, they also expressed concerns that the practicality of GLS is not thoroughly evaluated against prior works, that the empirically best setting of the parameter r is only verified on a limited number of datasets, and that the theoretical results differ only slightly from prior works. We encourage the authors to take the reviewers' feedback into account to strengthen the paper in the next iteration.
train
[ "WPfs98iRAld", "NUlc7zkAN-5", "zSJsENNFTp6", "eLxrWzxYfn2", "PgbRS2bCFa", "NA5Jyhg_S06", "MZliyeLgi69", "nj3nMnBHZGn", "5R5TrOEoKH", "8BKwBscJ4Fp", "7G-7aC0loaO", "UNyqOYJG0Ag", "Zoqt9xSn3A" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper studies label smoothing when learning with noisy label. It proposed generalized label smoothing (GLS) containing positive label smoothing (PLS) and negative label smoothing (NLS), where NLS allows the smoothing parameter to be negative. The authors found that when the noisy rate is high, NLS is more ben...
[ 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5 ]
[ 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2022_UQQgMRq58O", "iclr_2022_UQQgMRq58O", "eLxrWzxYfn2", "8BKwBscJ4Fp", "iclr_2022_UQQgMRq58O", "MZliyeLgi69", "Zoqt9xSn3A", "NUlc7zkAN-5", "8BKwBscJ4Fp", "UNyqOYJG0Ag", "WPfs98iRAld", "iclr_2022_UQQgMRq58O", "iclr_2022_UQQgMRq58O" ]
iclr_2022_e-IkMkna5uJ
Spectral Bias in Practice: the Role of Function Frequency in Generalization
Despite their ability to represent highly expressive functions, deep learning models trained with SGD seem to find simple, constrained solutions that generalize surprisingly well. Spectral bias – the tendency of neural networks to prioritize learning low frequency functions – is one possible explanation for this phenomenon, but so far spectral bias has only been observed in theoretical models and simplified experiments. In this work, we propose methodologies for measuring spectral bias in modern image classification networks. We find that these networks indeed exhibit spectral bias, and that networks that generalize well strike a balance between having enough complexity (i.e. high frequencies) to fit the data while being simple enough to avoid overfitting. For example, we experimentally show that larger models learn high frequencies faster than smaller ones, but many forms of regularization, both explicit and implicit, amplify spectral bias and delay the learning of high frequencies. We also explore the connections between function frequency and image frequency and find that spectral bias is sensitive to the low frequencies prevalent in natural images. Our work enables measuring and ultimately controlling the spectral behavior of neural networks used for image classification, and is a step towards understanding why deep models generalize well.
Reject
This paper extends the study of spectral bias, which had previously been examined in constrained settings such as fully-connected networks, to the more practical setting of multi-class image classification, and proposes a novel technique that measures smoothness through linear interpolation of test examples. Two reviewers valued the importance of the research question considered in this study and the diverse experiments applying the proposed method in various directions, and suggested acceptance. On the other hand, two other reviewers suggested rejection due to the lack of rigor in the writing and experiments. I strongly agree with the reviewer's concern that the method was only verified on CIFAR10 and that the rigor of the experiments was lacking. Unlike the spectral bias paper, which is the basis of this study, this submission is not a theoretical paper but rather an experimental one. I admit that, as the authors mention, verification across many domains is impossible. However, I believe that verification on more diverse and especially larger-scale datasets is essential, at least for the image classification task.
train
[ "ioME9sihBdb", "bQlD-xMuNhp", "dyUUxN0y_zA", "6aVgwOp0P58", "ada45ld2cG-", "c5dbo_AONN8", "QF8g8qC-Skk", "8ojFviluL50", "93XTMClJElM", "Zr_okw1XrRA", "cmfQMnw1RmC", "mOyuD_CPHrw", "1LIQ9YJrUbD", "kuQeMme7-3Q", "YmYNjk04pKZ", "03xdGCRJkm", "-_9asko_3eZ", "an5A8ZtydbH", "Pn5r5sZBxK...
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "...
[ " Thank you for pointing out your concern about the specificity of claims on CIFAR10. Indeed we do not want our claims to be interpreted too broadly; we have updated our abstract to specifically state that our experiments use CIFAR10 (note that we cannot update the posted version at this time). ", " I am thankful...
[ -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "bQlD-xMuNhp", "8ojFviluL50", "iclr_2022_e-IkMkna5uJ", "c5dbo_AONN8", "GajTc5ifcOp", "N4s6DpYgaiE", "93XTMClJElM", "QF8g8qC-Skk", "7cXSqqfXt10", "kuQeMme7-3Q", "iclr_2022_e-IkMkna5uJ", "-_9asko_3eZ", "an5A8ZtydbH", "03xdGCRJkm", "Pn5r5sZBxK1", "woAd783Tymo", "2XADQergjL2", "xN_LHLN...
iclr_2022_NP9T_pViXU
VIMPAC: Video Pre-Training via Masked Token Prediction and Contrastive Learning
Video understanding relies on perceiving the overall global content and modeling its internal connections (e.g., causality, movement, and spatio-temporal correspondence). To learn these interactions, we apply a mask-then-predict pre-training task on the discretized video tokens generated via VQ-VAE. Unlike language, where the text tokens are more independent, neighboring video tokens typically have strong correlations (e.g., consecutive video frames usually look very similar), and hence uniformly masking individual tokens will make the task too trivial to learn useful representations. To deal with this issue, we propose a block-wise masking strategy where we mask neighboring video tokens in both spatial and temporal domains. We also add an augmentation-free contrastive learning method to further capture the global content by predicting whether the video clips are sampled from the same video. We pre-train our model on uncurated videos and show that our pre-trained model can reach state-of-the-art results on several video understanding datasets (e.g., SSV2, Diving48). Lastly, we provide detailed analyses of the model scalability and pre-training method design.
Reject
None of the reviewers recommended this paper. There were concerns that it is hard to draw meaningful conclusions from the experimental work due to the comparisons provided. While the design of the block masking + contrastive learning proposed in this paper was rated as potentially being quite important, there remained some concern that subsequent tokenization steps could be problematic for "spatial heavy" datasets. The AC recommends rejection.
train
[ "89ERqAp9Yiz", "9EzK2uXub_", "jMW-vuDKxLK", "nbIJWmtfSNi", "miKzg8LBDo", "f4lfWr5CwZ0", "w29rKRFZs5P", "t1_yGoQkwQq", "G1_vr7m-s-x", "pXDo-FwD8O0", "6VP4xDeFBu", "_WRc-p_lVMG", "ChwBji79v6n", "xzRQGYHcvXe" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for reading our response and the follow-up comments. Next, we clarify the concerns in the review as below.\n\n**The contrastive learning design is unimportant and additive to the paper?**\n\nWe first want to point out that a similar question was asked by Reviewer E8qx in their initial review...
[ -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ "f4lfWr5CwZ0", "iclr_2022_NP9T_pViXU", "miKzg8LBDo", "w29rKRFZs5P", "t1_yGoQkwQq", "pXDo-FwD8O0", "6VP4xDeFBu", "G1_vr7m-s-x", "9EzK2uXub_", "xzRQGYHcvXe", "_WRc-p_lVMG", "ChwBji79v6n", "iclr_2022_NP9T_pViXU", "iclr_2022_NP9T_pViXU" ]
iclr_2022_DXRwVRh4i8g
Reachability Traces for Curriculum Design in Reinforcement Learning
The objective in goal-based reinforcement learning is to learn a policy to reach a particular goal state within the environment. However, the underlying reward function may be too sparse for the agent to efficiently learn useful behaviors. Recent studies have demonstrated that reward sparsity can be overcome by instead learning a curriculum of simpler subtasks. In this work, we design an agent's curriculum by focusing on the aspect of goal reachability, and introduce the idea of a reachability trace, which is used as a basis to determine a sequence of intermediate subgoals to guide the agent towards its primary goal. We discuss several properties of the trace function, and in addition, validate our proposed approach empirically in a range of environments, while comparing its performance against appropriate baselines.
Reject
The authors present a method for creating a curriculum for goal-conditioned reinforcement learning. In particular, they propose to use reachability traces to define a sequence of sub-goals that aid learning. During the review process, the reviewers mentioned the novelty of the proposed approach and the intuitive explanations provided by the authors. However, the reviewers also pointed out, among other issues, that the experiments could be more thorough, that there are errors in the theoretical justification of the method, and that the evaluation environments are simple. Some of the reviewers increased their score after the authors' rebuttal, but it was not enough to advocate for acceptance of the paper. I encourage the authors to incorporate the reviewers' feedback in the next version of the paper.
val
[ "XcZI1AgF9UP", "IRKqYrbMzfc", "k5aN4wXHcv", "IJITyHBPri-", "8AAmkaS-GRV", "qW7tyaMPTPY", "SxfilsKRdld", "9jyZi_o-nDt", "L-AXYsFSelA", "iVdaIImeec0", "9Cedk-HQzhP" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper tackles the problem of curriculum design for single-task/single-goal reinforcement learning problems with sparse rewards. The primary contribution of the work is a method to generate a subgoal curriculum using reachability traces — a learned metric that captures the distance to goal using under a pre-det...
[ 5, -1, -1, 3, -1, -1, -1, -1, -1, 3, 3 ]
[ 5, -1, -1, 4, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2022_DXRwVRh4i8g", "SxfilsKRdld", "8AAmkaS-GRV", "iclr_2022_DXRwVRh4i8g", "9Cedk-HQzhP", "IJITyHBPri-", "XcZI1AgF9UP", "iVdaIImeec0", "iclr_2022_DXRwVRh4i8g", "iclr_2022_DXRwVRh4i8g", "iclr_2022_DXRwVRh4i8g" ]
iclr_2022_RAoBtzlwtCC
Provable Federated Adversarial Learning via Min-max Optimization
Federated learning (FL) is a trending training paradigm to utilize decentralized training data. FL allows clients to update model parameters locally for several epochs, then share them with a global model for aggregation. This training paradigm with multi-local step updating before aggregation exposes unique vulnerabilities to adversarial attacks. Adversarial training is a trending method to improve the robustness of neural networks against adversarial perturbations. First, we formulate a \textit{general} form of federated adversarial learning (FAL) that is adapted from adversarial learning in the centralized setting. On the client side of FL training, FAL has an inner loop to optimize an adversary to generate adversarial samples for adversarial training and an outer loop to update local model parameters. On the server side, FAL aggregates local model updates and broadcasts the aggregated model. We design a global training loss to formulate FAL training as a min-max optimization problem. Unlike the convergence analysis in centralized training that relies on the gradient direction, it is significantly harder to analyze the convergence in FAL for two reasons: 1) the complexity of min-max optimization, and 2) the model not updating in the gradient direction due to the multi-local updates on the client-side before aggregation. Further, we address the challenges using appropriate gradient approximation and coupling techniques and present the convergence analysis in the over-parameterized regime. Our main result theoretically shows that the minimal value of the loss function under this algorithm can converge to within $\epsilon$ with an appropriately chosen learning rate and number of communication rounds. It is noteworthy that our analysis is feasible for non-IID clients.
Reject
The reviewers had a number of concerns which seem to remain after the authors' response. In particular, the reviewers were concerned about the validity of the paper's assumptions in real-world applications and the lack of experimental results. Also, while the reviewers acknowledge the novelty in technical contributions, they suggested that the authors explain more clearly how the results of this paper are distinguishable from prior art.
train
[ "SBpbe3PK74r", "4QIuE2gNH0R", "dvY9fzBtHWV", "FSr-jtmbeJG", "rQMm0HqP2u", "hykcEPTHhvd", "2v0Qwgu4Uxp", "p-wn5seeUeH", "tko-PVLCOA", "_XdtMop2DG2", "7UoAPS98Hzp" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper is a direct follow-up of Zhang et al 2020b. With assumptions including overparametrized two-layer ReLU network, normalized dataset, gamma-separability, and Lipschitz convex loss, it proves the convergence of FedAvg under adversarial perturbation. These assumptions are easy adaptations from Zhang et al 2...
[ 3, -1, -1, -1, -1, -1, -1, 3, 5, 5, 3 ]
[ 3, -1, -1, -1, -1, -1, -1, 3, 3, 3, 3 ]
[ "iclr_2022_RAoBtzlwtCC", "SBpbe3PK74r", "7UoAPS98Hzp", "_XdtMop2DG2", "tko-PVLCOA", "p-wn5seeUeH", "iclr_2022_RAoBtzlwtCC", "iclr_2022_RAoBtzlwtCC", "iclr_2022_RAoBtzlwtCC", "iclr_2022_RAoBtzlwtCC", "iclr_2022_RAoBtzlwtCC" ]
iclr_2022_xOeWOPFXrTh
Learning Higher-Order Dynamics in Video-Based Cardiac Measurement
Computer vision methods typically optimize for first-order dynamics (e.g., optical flow). However, in many cases the properties of interest are subtle variations in higher-order changes, such as acceleration. This is true in the cardiac pulse, where the second derivative can be used as an indicator of blood pressure and arterial disease. Recent developments in camera-based vital sign measurement have shown that cardiac measurements can be recovered with impressive accuracy from videos; however, the majority of research has focused on extracting summary statistics such as heart rate. Less emphasis has been put on the accuracy of waveform morphology that is necessary for many clinically impactful scenarios. In this work, we provide evidence that higher-order dynamics are better estimated by neural models when explicitly optimized for in the loss function. Furthermore, adding second-derivative inputs also improves performance when estimating second-order dynamics. By incorporating the second derivative of both the input frames and the target vital sign signals into the training procedure, our model is better able to estimate left ventricle ejection time intervals.
Reject
The paper received 5 reviews, with 4 advocating rejection (marginal or clear-cut) and one borderline review leaning towards a weak accept. The key concerns voiced by the reviewers are the lack of novelty (*the novelty of the proposed multi-derivative architecture is limited*), the lack of comparisons with specific architectures in an appropriate setting (rPPGNet without the STVEN module, DeeprPPG, RhythmNet, CVD), and concerns about the use of synthetic data (although the authors provide some justification to that end). It appears that the key factor behind the reviewers' scores is that the higher-order dynamics did not constitute sufficient novelty. Given the post-rebuttal scores and discussions, the AC has no option but to recommend rejection at this point.
train
[ "TVCzu9E_SE3", "QgxtNaH3yd5", "lYzakxQWckw", "k_gAtFbPD_h", "HNSjP_g4o3t", "FNyc4OK8B3x", "-FnDSEOG5W", "O8-ZGdtl8y", "B7FUoQ2RFR", "5t-R4PmL6V", "so8kTF3Uoqn", "JSuxLj3oeJb", "4ZcEPKRhocU", "0eWHTtyXK3", "aAyqtRU9tt0", "6HHw72-lVvb", "wu2w5NH-QzD", "-WvAhrGyaSu", "4RRioCNvL_A" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_rev...
[ " From my side I think there are no more relevant details that need to be clarified. I will review what has been added to the appendix about the details of the model and the code, as soon as they can upload a new version. Regards.", " I understand this work is to address the importance of incorporating higher-ord...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 5, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 4, 4, 2 ]
[ "lYzakxQWckw", "0eWHTtyXK3", "O8-ZGdtl8y", "FNyc4OK8B3x", "JSuxLj3oeJb", "so8kTF3Uoqn", "B7FUoQ2RFR", "5t-R4PmL6V", "-WvAhrGyaSu", "-WvAhrGyaSu", "4RRioCNvL_A", "wu2w5NH-QzD", "6HHw72-lVvb", "aAyqtRU9tt0", "iclr_2022_xOeWOPFXrTh", "iclr_2022_xOeWOPFXrTh", "iclr_2022_xOeWOPFXrTh", "...
iclr_2022_LhObGCkxj4
New Perspective on the Global Convergence of Finite-Sum Optimization
Deep neural networks (DNNs) have shown great success in many machine learning tasks. Their training is challenging since the loss surface of the network architecture is generally non-convex, or even non-smooth. How and under what assumptions is guaranteed convergence to a \textit{global} minimum possible? We propose a reformulation of the minimization problem allowing for a new recursive algorithmic framework. By using bounded style assumptions, we prove convergence to an $\varepsilon$-(global) minimum using $\mathcal{\tilde{O}}(1/\varepsilon^2)$ gradient computations. Our theoretical foundation motivates further study, implementation, and optimization of the new algorithmic framework and further investigation of its non-standard bounded style assumptions. This new direction broadens our understanding of why and under what circumstances training of a DNN converges to a global minimum.
Reject
This paper proves a global convergence rate of a newly proposed algorithm for the finite-sum problem under some assumptions. While the proposed algorithm provides some interesting ideas for solving the finite-sum problem using an intermediate proxy solver, the current assumptions are too strong and I'm afraid that this can make the result essentially trivial: For example, assumption 3 assumes that $ H_i * v \approx \nabla_z\phi_i(h(w, x_i))$ for every i. This simply implies that $ \|\nabla_w\phi_i(h(w, x_i)) \|_2 $ is as large as $\|\nabla_z\phi_i(h(w, x_i))\|_2^2 $ as long as the norm of v is small, since $\nabla_w\phi_i(h(w, x_i)) = \nabla_z\phi_i(h(w, x_i)) H_i $. Hence the assumption simply assumes that "If the loss is not small, then the gradient of the objective is not small (using the convexity of $\phi$, so $\|\nabla_z\phi_i(h(w, x_i))\|_2$ has to be large)" -- This would imply that gradient descent can also work (and arguably with the same convergence rate) under this assumption. Note that "the smallest movement that can decrease the objective the most" is indeed following the gradient descent direction -- So gradient descent would not move the weights more than this algorithm as well. Therefore, I am not sure that there is a clear benefit to using this algorithm compared to the standard (stochastic) gradient descent. In particular, I would suggest the authors at least show one example where, under the current set of assumptions, gradient descent does not work as efficiently compared to the proposed algorithm -- This will make the proposed algorithm much more justified.
test
[ "moe2X6CQTkp", "g5AzZ3Dj5K", "SH_RscTPNOh", "qFQvp2HqIF", "G1UVFPJdbyH", "_dSl0j0maty", "P0em6fu87QX", "UiaWMNb1_n6", "MCJ3XbEJTm4", "SQ1EC0xrGSW", "6FZJh9M2rcq", "4Y80Rd6k8fF", "Rtz6Qab0MY", "rRQX7Fz50aJ", "nyCFASyMQCC", "AJZBArnGcWX", "YAzLY5Jan8Qf", "Oi_1kCnCAL-", "UBxcOkZ4J9M...
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", ...
[ " Dear everyone, \n\nWe thank all the reviewers for their constructive comments and suggestions that help improve our paper. \n\nWe are delighted to hear that you all agree to support us, and we are truly grateful for your substantial help during the discussion period. The highlights in our revision have been remov...
[ -1, 6, -1, 6, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ -1, 4, -1, 2, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2022_LhObGCkxj4", "iclr_2022_LhObGCkxj4", "G1UVFPJdbyH", "iclr_2022_LhObGCkxj4", "P0em6fu87QX", "YAzLY5Jan8Qf", "AJZBArnGcWX", "MCJ3XbEJTm4", "6FZJh9M2rcq", "iclr_2022_LhObGCkxj4", "Rtz6Qab0MY", "iclr_2022_LhObGCkxj4", "FYHe2WkWIdE", "SNfhZ8SVg7p", "YAzLY5Jan8Qf", "qDQ56YHtlM", ...
iclr_2022_uVTp9Z-IUOC
Test-Time Adaptation to Distribution Shifts by Confidence Maximization and Input Transformation
Deep neural networks often exhibit poor performance on data that is unlikely under the train-time data distribution, for instance data affected by corruptions. Previous works demonstrate that test-time adaptation to data shift, for instance using entropy minimization, effectively improves performance on such shifted distributions. This paper focuses on the fully test-time adaptation setting, where only unlabeled data from the target distribution is required. This allows adapting arbitrary pretrained networks. Specifically, we propose a novel loss that improves test-time adaptation by addressing both premature convergence and instability of entropy minimization. This is achieved by replacing the entropy by a non-saturating surrogate and adding a diversity regularizer based on batch-wise entropy maximization that prevents convergence to trivial collapsed solutions. Moreover, we propose to prepend an input transformation module to the network that can partially undo test-time distribution shifts. Surprisingly, this preprocessing can be learned solely using the fully test-time adaptation loss in an end-to-end fashion without any target domain labels or source domain data. We show that our approach outperforms previous work in improving the robustness of publicly available pretrained image classifiers to common corruptions on such challenging benchmarks as ImageNet-C.
Reject
The paper considers test-time adaptation to distribution shift, which is a very important and impactful problem. The authors propose an empirical method that has different pieces, the most important ones being the input transformation, confidence maximization, and the use of a likelihood-ratio loss. There were various concerns that were addressed during the rebuttal period, such as the novelty of the proposed method, the ablation study of different parts of the model, the novelty and importance of the diversity regularizer, and the choice of optimization. However, there are still three remaining concerns; addressing them would improve the paper significantly. First, a clear motivation behind the method for the cases when the model is certain but the data are imbalanced. Second, an analysis in the online setting of batch-by-batch prediction and adaptation. Third, establishing the claim regarding the data subset experiment, namely that it enables the model to adapt on a subset of data and later switch to a complete execution mode without adaptation, for efficient run time and improved throughput. How is the method to know the data distribution has changed, or that it has sufficiently adapted to it when the data distribution is not changing?
train
[ "hSW3wFrlP3W", "AF-313aqG4a", "uRCMEcJYT3c", "ux6lTvf0M_m", "8jj7hAlG62", "3gw-UcCztU", "P2PHi1T9fZ2", "3dQNntWqmJ", "xOPSHy5-0V", "d4EXG_JsLTC" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " The response addresses all of the points highlighted in my review for rebuttal, and this response and discussion have altered my view of three weaknesses: novelty, restricting the choice of parameters, and the input transformer. I have accordingly raised my score to 6, on the side of acceptance.\n\n- Novelty: Aft...
[ -1, 6, 6, -1, -1, -1, -1, -1, 5, 6 ]
[ -1, 5, 5, -1, -1, -1, -1, -1, 4, 4 ]
[ "3dQNntWqmJ", "iclr_2022_uVTp9Z-IUOC", "iclr_2022_uVTp9Z-IUOC", "uRCMEcJYT3c", "iclr_2022_uVTp9Z-IUOC", "d4EXG_JsLTC", "xOPSHy5-0V", "AF-313aqG4a", "iclr_2022_uVTp9Z-IUOC", "iclr_2022_uVTp9Z-IUOC" ]
iclr_2022_HFE5P8nhmmL
SVMnet: Non-parametric image classification based on convolutional SVM ensembles for small training sets
Deep convolutional neural networks (DCNNs) have demonstrated superior power in their ability to classify image data. However, one of the downsides of DCNNs for supervised learning of image data is that their training normally requires large sets of labeled "ground truth" images. Since in many real-world problems large sets of pre-labeled images are not always available, DCNNs might not perform in an optimal manner in all real-world cases. Here we propose SVMnet -- a method based on a layered structure of Support Vector Machine (SVM) ensembles for non-parametric image classification. By utilizing the quick learning of SVMs compared to neural networks, the proposed method can reach higher accuracy than DCNNs when the training set is small. Experimental results show that while "conventional" DCNN architectures such as ResNet-50 outperform SVMnet when the size of the training set is large, SVMnet provides a much higher accuracy when the number of "ground truth" training samples is small.
Reject
The paper proposes to overcome the challenge of annotating datasets to train convolutional networks by considering instead an architecture that is composed of stacked support vector machine layers. Each support vector machine is trained on a small patch from the input image. A voting mechanism is used to aggregate the predictions. Results show better performance by the model in the small data regime compared with larger convolutional neural networks trained from scratch. The reviewers appreciated the relevance of the problem and the originality of the approach. The reviewers also appreciated several parts of the experimental evaluation that were carefully conducted, in particular the sensitivity analysis with respect to the patch size and the multiple datasets considered for the experimental evaluation. The reviewers also expressed concerns about the adequacy of the evaluation (unfair comparisons), the completeness of the baselines (missing baselines), and the significance of the improvements. In particular, the experimental evaluation was considered too limited given the problem considered. The authors submitted responses to the reviewers' comments. After reading the response, updating the reviews, and discussion, the reviewers considered that 'the experimental evaluation improved a bit', that several concerns were satisfactorily addressed, and yet that the updated results 'do not show a significant improvement of the proposed method over existing works, simple baselines or pre-trained ResNet'. We encourage the authors to pursue their approach further, taking into account the reviewers' comments, encouragements, and suggestions. The revision of the paper will generate a stronger submission to a future venue. Reject.
train
[ "p4aT37x-jXT", "rFHI8kOUWlR", "F82FThL_KTH", "j0d-riCo2nP", "9H3D5hirN5u", "QSUkS2hDKVI", "wLdrXaY3xs", "pyPEPXiQnEk", "2cCYp2tjrw" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " - I agree with the first point, then maybe the more appropriate name would be SVM-Array or something else than a network which can in fact confuse readers.\n- The experiments improved a bit but still insufficient to show relevant and practical ability of the present method to be scalable beyond what could be achi...
[ -1, -1, -1, -1, -1, 3, 1, 3, 6 ]
[ -1, -1, -1, -1, -1, 4, 5, 4, 4 ]
[ "j0d-riCo2nP", "F82FThL_KTH", "pyPEPXiQnEk", "wLdrXaY3xs", "QSUkS2hDKVI", "iclr_2022_HFE5P8nhmmL", "iclr_2022_HFE5P8nhmmL", "iclr_2022_HFE5P8nhmmL", "iclr_2022_HFE5P8nhmmL" ]
iclr_2022_qynB_fAt5TQ
Centroid Approximation for Bootstrap
Bootstrap is a principled and powerful frequentist statistical tool for uncertainty quantification. Unfortunately, standard bootstrap methods are computationally intensive due to the need of drawing a large i.i.d. bootstrap sample to approximate the ideal bootstrap distribution; this largely hinders their application in large-scale machine learning, especially deep learning problems. In this work, we propose an efficient method to explicitly \emph{optimize} a small set of high quality ``centroid'' points to better approximate the ideal bootstrap distribution. We achieve this by minimizing a simple objective function that is asymptotically equivalent to the Wasserstein distance to the ideal bootstrap distribution. This allows us to provide an accurate estimation of uncertainty with a small number of bootstrap centroids, outperforming the naive i.i.d. sampling approach. Empirically, we show that our method can boost the performance of bootstrap in a variety of applications.
Reject
The submission aims to improve the quality of the bootstrap when the number of bootstrap samples is small. It does so by gradient descent on a small set of centroid points to approximate the ideal bootstrap distribution in Wasserstein distance. The submission combines a nice set of methodologies, and aims to address an interesting statistical problem in a principled way. The reviewers were unanimous in their opinion that the submission falls below the threshold for acceptance to ICLR. It was revealed in post-rebuttal discussion with reviewer y4AP that they wish to retain a reject recommendation due to a lack of clarity in the methodology even after author comments. The review details specific issues that can eventually be clarified in a revision for submission to another venue.
train
[ "_hUgh-TNZm6", "a3nBMEJSFl6", "fvivQo3MIRe", "rXrtGq8w6l", "6bEFJ-oVKm9", "EXXb4BauMp1", "o_R3-09Jrb", "wBblt6GNYNe", "HuGM2M2GFRH", "hU5_mBjScq0", "ctEZciRZW_e", "LuCHoc-A9PY", "btew_HjFlIS", "tXaH-3LIO_k", "5dVUIF7UUsi" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " I will be incorporating feedback also given in private comments during reviewer discussion in the meta-review. Thanks for your active participation in the review and discussion process, and willingness to incorporate reviewer feedback.", " Thanks Reviewer y4ap for all the discussion and comments.\n\nWe agree t...
[ -1, -1, 3, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5 ]
[ -1, -1, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "a3nBMEJSFl6", "fvivQo3MIRe", "iclr_2022_qynB_fAt5TQ", "iclr_2022_qynB_fAt5TQ", "EXXb4BauMp1", "o_R3-09Jrb", "wBblt6GNYNe", "HuGM2M2GFRH", "fvivQo3MIRe", "iclr_2022_qynB_fAt5TQ", "5dVUIF7UUsi", "rXrtGq8w6l", "tXaH-3LIO_k", "iclr_2022_qynB_fAt5TQ", "iclr_2022_qynB_fAt5TQ" ]
iclr_2022_ZUinrZwKnHb
Attend to Who You Are: Supervising Self-Attention for Keypoint Detection and Instance-Aware Association
Bottom-up multi-person pose estimation models need to detect keypoints and learn associative information between keypoints. We argue that these problems can be entirely solved by the Transformer model. Specifically, the self-attention in Transformer measures the pairwise dependencies between locations, which can play a role in providing association information for keypoints grouping. However, the naive attention patterns are still not subjectively controlled, so there is no guarantee that the keypoints will always attend to the instances to which they belong. To address it we propose a novel approach of multi-person keypoint detection and instance association using instance masks to supervise self-attention. By supervising self-attention to be instance-aware, we can assign the detected keypoints to the correct human instances based on the pairwise attention scores, without using pre-defined offset vector fields or embedding like CNN-based bottom-up models. An additional benefit of our method is that the instance segmentation results of any number of people can be directly obtained from the supervised attention matrix, thereby simplifying the pixel assignment pipeline. The experiments on the COCO multi-person keypoint detection challenge and person instance segmentation task demonstrate the effectiveness and simplicity of the proposed method.
Reject
This paper proposes a bottom-up multi-person pose estimation method using a Transformer model. There is consensus among the reviewers that this paper is not ready for acceptance/publication. Although some reviewers find the proposed idea interesting (some find it lacking novelty though), all the reviewers agree that the quantitative experimental results are not promising. Some reviewers explicitly criticized the lack of empirical accuracy compared to the state of the art. The authors provided additional details and results in the rebuttal, but they were not sufficient to change the opinions of the reviewers. We recommend rejecting the paper.
train
[ "SQRzCtmCFvn", "nC2B_jnmnwH", "AHv0lp46p5Y", "9y1dxG-_DY", "kcAZbwb5GOq", "8lVn-RYfnJZ", "CMNA_vLJL4", "Alea0vs9yXe", "Ah2T9zdPy75", "_rrWwVWNxHJ", "s-LD3PLA2Mm", "t5v-AAySvD", "LJhHXEAgqWd", "aLDq9V_Kmnh", "SJiNeElYYl9" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I appreciate all the work the authors have put into their responses to the reviewer's comments. While I do not agree with all of the reviewer comments, there is consensus that this paper is not yet ready for acceptance. \n\n- There are interesting ideas touched on in this work but I don't think a clear case has b...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 5, 3, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4, 4 ]
[ "8lVn-RYfnJZ", "CMNA_vLJL4", "9y1dxG-_DY", "SJiNeElYYl9", "aLDq9V_Kmnh", "LJhHXEAgqWd", "t5v-AAySvD", "s-LD3PLA2Mm", "_rrWwVWNxHJ", "iclr_2022_ZUinrZwKnHb", "iclr_2022_ZUinrZwKnHb", "iclr_2022_ZUinrZwKnHb", "iclr_2022_ZUinrZwKnHb", "iclr_2022_ZUinrZwKnHb", "iclr_2022_ZUinrZwKnHb" ]
iclr_2022_6-lLt2zxbZR
An Application of Pseudo-log-likelihoods to Natural Language Scoring
Language models built using semi-supervised machine learning on large corpora of natural language have very quickly enveloped the fields of natural language generation and understanding. In this paper we apply a zero-shot approach independently developed by several researchers now gaining recognition as a significant alternative to fine-tuning for evaluation on common sense tasks. A language model with relatively few parameters and training steps (albert-xxlarge-v2) compared to a more recent language model (T5) can outperform it on a recent large data set (TimeDial), while displaying robustness in its performance across a similar class of language tasks. Surprisingly, this result is achieved by using a hyperparameter-free zero-shot method with the smaller model, compared to fine-tuning of the larger model. We argue that robustness of the smaller model ought to be understood in terms of compositionality, in a sense that we draw from recent literature on a class of similar models. We identify a practical cost for our method and model: high GPU-time for natural language evaluation. The zero-shot measurement technique that produces remarkable stability, both for ALBERT and other BERT variants, is an application of pseudo-log-likelihoods to masked language models for the relative measurement of probability for substitution alternatives in forced choice language tasks such as the Winograd Schema Challenge, Winogrande, CommonsenseQA, and others. One contribution of this paper is to bring together a number of similar, but independent strands of research. We produce some absolute state-of-the-art (SOTA) results for common sense reasoning in binary choice tasks, performing better than any published result in the literature, including fine-tuned efforts. In others our results are SOTA relative to published methods similar to our own – in some cases by wide margins, but below SOTA absolute for fine-tuned alternatives. In addition, we show a remarkable consistency of the model’s performance under adversarial settings, which we argue is best explained by the model’s compositionality of representations.
Reject
This paper argues several loosely-related points about the evaluation of pretrained models on commonsense reasoning datasets in the Winograd style, and presents experiments with existing models on several datasets, including a novel 20-example benchmark. All four reviewers struggled to find a clear contribution or theme in this paper that is novel and thorough enough to meet the bar for publication at a selective general-ML venue. I'd urge the authors to focus in on just one of these points and expand, and to consider submitting to a venue that more narrowly focuses on methods for commonsense reasoning in NLP.
train
[ "ISXhocpTTkR", "H2m6kqnYpRq", "eOjGUtxaCqC", "AflrEKKEb53", "z_c2KyP0vN8", "EPKUpQZfZZL", "mPWU1UZIbv9", "Cx2-cMQMoz5", "BcqdNME9HRY", "HfzjntfCR60", "4t58U8YwEn", "eatksiMD179" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " In the Appendix, Table 5, there are two column headers that are incorrect.\n\nThe second \"GAP\" should be \"Winobias\", and the \"Winobias\" should instead be \"Winogender\". \n\nIn verifying the correct headers we have discovered an error in our representations of the Winogender data. We have included both OCCU...
[ -1, -1, -1, -1, 5, -1, -1, -1, -1, 1, 3, 3 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, -1, 4, 3, 4 ]
[ "iclr_2022_6-lLt2zxbZR", "eOjGUtxaCqC", "AflrEKKEb53", "mPWU1UZIbv9", "iclr_2022_6-lLt2zxbZR", "eatksiMD179", "z_c2KyP0vN8", "4t58U8YwEn", "HfzjntfCR60", "iclr_2022_6-lLt2zxbZR", "iclr_2022_6-lLt2zxbZR", "iclr_2022_6-lLt2zxbZR" ]
iclr_2022_L2jrxKBloq8
Second-Order Rewards For Successor Features
Current Reinforcement Learning algorithms have reached new heights in performance. However, such algorithms often require hundreds of millions of samples, often resulting in policies that are unable to transfer between tasks without full retraining. Successor features aim to improve this situation by decomposing the policy into two components: one capturing environmental dynamics and the other modelling reward, where the reward function is formulated as the linear combination of learned state features and a learned parameter vector. Under this form, transfer between related tasks now only requires training the reward component. In this paper, we propose a novel extension to the successor feature framework resulting in a natural second-order variant. After derivation of the new state-action value function, a second additive term emerges; this term predicts reward as a non-linear combination of state features while providing additional benefits. Experimentally, we show that this term explicitly models the environment's stochasticity and can also be used in place of $\epsilon$-greedy exploration methods during transfer. The performance of the proposed extension to the successor feature framework is validated empirically on a 2D navigation task, the control of a simulated robotic arm, and the Doom environment.
Reject
The submitted paper considers a form of second-order extension of successor features building on a second-order representation of the reward function in terms of state features. The authors demonstrate that this approach can be useful for transfer learning and also show an application to exploration. All reviewers gave borderline recommendations (2x weak accept, 2x weak reject). While most reviewers agree that the proposed approach can be sensible and that the paper is well written, there are concerns that the experimental results do not fully support all claims and that additional experiments are required to clearly demonstrate advantages over existing baselines. Also, the proposed approach for exploration is rather incomplete and not well studied. The raised concerns were not fully refuted by the authors during the discussion period; rather, the discussion made some reviewers more concerned about the full validity of all claims. Thus, while I think the paper has potential and can be turned into a good paper, I am recommending rejection of the paper in its current form. I would like to encourage the authors to carefully address the reviewers' concerns in future versions of the paper.
train
[ "1visU3qPGY1", "eVsv2JfDPEz", "ofJ2eOL6Z3B", "SpzE0mJc-AO" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors take the successor feature framework that separately models enivronmental dynamics and reward to use a second order reward learning formulation over the standard linear model. This allows a non-linear relationship to more easily form between features and rewards lessening the burden of a the feature en...
[ 6, 5, 5, 6 ]
[ 3, 4, 4, 3 ]
[ "iclr_2022_L2jrxKBloq8", "iclr_2022_L2jrxKBloq8", "iclr_2022_L2jrxKBloq8", "iclr_2022_L2jrxKBloq8" ]
iclr_2022_TLnReGgZEdW
Generalization in Deep RL for TSP Problems via Equivariance and Local Search
Deep reinforcement learning (RL) has proved to be a competitive heuristic for solving small-sized instances of traveling salesman problems (TSP), but its performance on larger-sized instances is insufficient. Since training on large instances is impractical, we design a novel deep RL approach with a focus on generalizability. Our proposition, consisting of a simple deep learning architecture that learns with novel RL training techniques, exploits two main ideas. First, we exploit equivariance to facilitate training. Second, we interleave efficient local search heuristics with the usual RL training to smooth the value landscape. In order to validate the whole approach, we empirically evaluate our proposition on random and realistic TSP problems against relevant state-of-the-art deep RL methods. Moreover, we present an ablation study to understand the contribution of each of its components.
Reject
The paper discusses new RL algorithms for solving large TSP instances. The algorithm is novel and the problem is important; however, certain technical questions regarding the soundness of the algorithm were raised. Furthermore, it seems that despite a much larger computational time, the algorithm provides only very moderate gains over previous baselines. Finally, it is not clear how the proposed methods (e.g. equivariance) can be applied outside of the TSP problem scope. Thus the concern is the limited impact of the method on the field. The authors addressed some of the concerns of the reviewers in the rebuttal; however, it is still not clear: (1) how the presented mechanism can be applied to other combinatorial problems beyond TSP and therefore how useful it can be for the machine learning community, (2) how novel the paper is (the use of equivariance is as direct as in the regular graph neural network setup). Furthermore, the experiments show that the deep learning approach to TSP is still not competitive with standard non-machine-learning baselines. Thus it is not clear whether the proposed algorithm is the right approach to solve this problem, even though it beats other deep learning techniques. The paper is very well written, though, and the presented method is definitely of value to the research community working on the TSP. Therefore it seems that at this point the paper is more suited for one of the mathematical journals on combinatorics and graph theory.
val
[ "jazg81UFZ1", "-JoNb5hWKhJ", "YtiJoj6kXcA", "9mnOyLkp9kf", "URz9gEGrNKi", "_jrnN26kGAt", "9EjY9NLhT1N", "OdtN0mCggn5", "6V2IS8tdZfl", "3ukQGRwOCCJ", "MhG1LWHukqC", "drTS9R2FkX2", "82LUecbqLjS", "Zv7nTMMbbRO", "kHT8tvcM1g-" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We sincerely thank all the authors for pointing out a number of good questions for our paper. Our paper becomes more precise and some mistakes are corrected after the revision. We would appreciate reviewers' replies indicating whether our response and clarifications have addressed the issues raised in the reviews...
[ -1, 6, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, 5, 6 ]
[ -1, 4, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2022_TLnReGgZEdW", "iclr_2022_TLnReGgZEdW", "drTS9R2FkX2", "jazg81UFZ1", "3ukQGRwOCCJ", "iclr_2022_TLnReGgZEdW", "MhG1LWHukqC", "6V2IS8tdZfl", "Zv7nTMMbbRO", "_jrnN26kGAt", "kHT8tvcM1g-", "-JoNb5hWKhJ", "iclr_2022_TLnReGgZEdW", "iclr_2022_TLnReGgZEdW", "iclr_2022_TLnReGgZEdW" ]
iclr_2022__Vn-mKDipa1
Hierarchically Regularized Deep Forecasting
Hierarchical forecasting is a key problem in many practical multivariate forecasting applications - the goal is to simultaneously predict a large number of correlated time series that are arranged in a pre-specified aggregation hierarchy. The main challenge is to exploit the hierarchical correlations to simultaneously obtain good prediction accuracy for time series at different levels of the hierarchy. In this paper, we propose a new approach for hierarchical forecasting which consists of two components. The first decomposes the time series along a global set of basis time series and models hierarchical constraints using the coefficients of the basis decomposition. The second uses a linear autoregressive model with coefficients that vary with time. Unlike past methods, our approach is scalable (inference for a specific time series only needs access to its own history) while also modeling the hierarchical structure via (approximate) coherence constraints among the time series forecasts. We experiment on several public datasets and demonstrate significantly improved overall performance on forecasts at different levels of the hierarchy, compared to existing state-of-the-art hierarchical models.
Reject
The paper proposes a method for time series forecasting based on a hierarchical deep learning approach. Three reviewers submitted reviews, with two marginal accepts and one marginal reject. The paper was therefore borderline, but the issues raised by the marginal-reject reviewer on the justification for the design choice of a deep latent model and on the experimental setup appear worth addressing in a revision resubmitted to another conference.
train
[ "5wo8m3374-", "tTqQJLa9MVK", "5nn5D1llj9P", "99ZrGb2GsaG", "PpDRmWfVipr", "hOhZ3PIqqs", "Fid_7luonYj", "d6hIu4beZW0", "1iL6LE-3FGR", "2FskTIoi5MK", "odZSbvRh9uc" ]
[ "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewers for their valuable comments on the paper. We appreciate that the reviewers found our paper “well written and easy to read” (U9mT), “conveys a good reader journey experience” (UnFM), solves “an important problem” (iRbQ), “scales well” (UnFM), “easy to implement” (UnFM), and “showing good res...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 3 ]
[ "iclr_2022__Vn-mKDipa1", "odZSbvRh9uc", "2FskTIoi5MK", "2FskTIoi5MK", "odZSbvRh9uc", "2FskTIoi5MK", "2FskTIoi5MK", "1iL6LE-3FGR", "iclr_2022__Vn-mKDipa1", "iclr_2022__Vn-mKDipa1", "iclr_2022__Vn-mKDipa1" ]
iclr_2022_gzeruP-0J29
Revisiting and Advancing Fast Adversarial Training Through the lens of Bi-Level Optimization
Adversarial training (AT) has become a widely recognized defense mechanism to improve the robustness of deep neural networks against adversarial attacks. It originates from solving a min-max optimization problem, where the minimizer (i.e., defender) seeks a robust model to minimize the worst-case training loss in the presence of adversarial examples crafted by the maximizer (i.e., attacker). However, the min-max nature makes AT computationally intensive and thus difficult to scale. Thus, the problem of FAST-AT arises. Nearly all the recent progress is achieved based on the following simplification: the iterative attack generation method used in the maximization step of AT is replaced by the simplest one-shot gradient sign-based PGD method. Nevertheless, FAST-AT is far from satisfactory, and it lacks a theoretically grounded design. For example, a FAST-AT method may suffer from robustness catastrophic overfitting when training with strong adversaries. In this paper, we foster a technological breakthrough for designing FAST-AT through the lens of bi-level optimization (BLO) instead of min-max optimization. First, we theoretically show that the most commonly used algorithmic specification of FAST-AT is equivalent to the linearized BLO along the direction given by the sign of the input gradient. Second, with the aid of BLO, we develop a new systematic and effective fast bi-level AT framework, termed FAST-BAT, whose algorithm is rigorously derived by leveraging the theory of implicit gradients. In contrast to FAST-AT, FAST-BAT places the fewest restrictions on the tradeoff between computational efficiency and adversarial robustness. For example, it is capable of defending against sign-based projected gradient descent (PGD) attacks without calling any gradient sign method or explicit robust regularization during training. Furthermore, we empirically show that our method outperforms state-of-the-art FAST-AT baselines. In particular, FAST-BAT can achieve superior model robustness without inducing robustness catastrophic overfitting or losing standard accuracy.
Reject
This paper investigates fast adversarial training methods as a bilevel optimization problem. The proposed algorithm compares well with the existing techniques in overall runtime (obtaining better clean-test accuracy, which is not the goal, and) matching the robust accuracy of existing adversarial training methods. The proposed framework, however, is more general and flexible and is theoretically grounded. The problem studied here is exciting and the approach the authors take is interesting. The current version, unfortunately, has some serious shortcomings. The empirical comparisons are a bit lacking: in general, wall-clock time is not a very good measure; it depends heavily on the implementation and various optimizations therein. A more suitable comparison would be in terms of floating-point operations, or in terms of iteration complexity. The paper reports other interesting findings such as how the proposed method avoids robust overfitting. However, there is little theoretical evidence or insight into how the proposed method avoids it. The writing can be improved with more emphasis on the novelty and significance of the contributions; some of the statements regarding improvements over prior work are somewhat misleading given the incremental gains (e.g., see Table 1). I believe the comments from the reviewers have already helped improve the quality of the paper. I encourage the authors to further incorporate the feedback and work towards a stronger submission.
train
[ "COZUAKEh2_S", "lrN-yFXRTWV", "DT9fJ9Ab8b5", "SzsXTUNJE6", "f1zfY6rmG87", "-lTtFA1FzJP", "Mk14Rc7Udyr", "X9lUaGZVf_G", "zLcoU6lzob", "Tm720SAloJ4", "qTX7OkYLwp", "e3oldTeIs0", "2VBLBTh1aX9", "Uvz2kDYjIsp", "CRz-OCvSkN", "WltkttiWwDd", "i2NOHaeO3Q", "cgTn53aSTJC", "RlLhb9MpRK", ...
[ "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", ...
[ " Dear Reviewer c58a:\n\nAs you recently commented that standard deviations are missing in Tables 3 and 5, we have added these results; see [revision of Table 3](https://ibb.co/PFRjXwQ) and [revision of Table 5](https://ibb.co/n8Cd66s). All the mean values and standard deviations are computed over 10 independent tr...
[ -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5 ]
[ "DT9fJ9Ab8b5", "DT9fJ9Ab8b5", "iclr_2022_gzeruP-0J29", "Mk14Rc7Udyr", "X9lUaGZVf_G", "zLcoU6lzob", "AqdbH2zedCi", "-yZdHvmECHT", "qUCAsw5DgEr", "e3oldTeIs0", "iclr_2022_gzeruP-0J29", "CRz-OCvSkN", "DT9fJ9Ab8b5", "6pEA1oN1eeM", "qTX7OkYLwp", "a3laIqJRpdr", "6pEA1oN1eeM", "qTX7OkYLwp...
iclr_2022_NX0nX7TE4lc
DIVERSIFY to Generalize: Learning Generalized Representations for Time Series Classification
Time series classification is an important problem in the real world. Due to its nonstationarity, i.e., the distribution changing over time, it remains challenging to build models that generalize to unseen distributions. In this paper, we propose to view the time series classification problem from the distribution perspective. We argue that the temporal complexity is attributable to the unknown latent distributions within. To this end, we propose DIVERSIFY to learn generalized representations for time series classification. DIVERSIFY takes an iterative process: it first obtains the worst-case distribution scenario via adversarial training, then matches the distributions between all segments. We also present some theoretical insights. Extensive experiments on gesture recognition, speech commands recognition, and sensor-based human activity recognition demonstrate that DIVERSIFY significantly outperforms other baselines while effectively characterizing the latent distributions, as shown by qualitative and quantitative analysis.
Reject
This paper has been reviewed by three reviewers, two scoring it borderline leaning towards an accept, and one scoring it as accept. The key criticism from reviewers is the lack of novelty (*the technical novelty is limited due to the adoption of existing methods without substantial changes (pseudolabeling for latent distribution, adversarial learning for domain generalization)*) and limited technical analysis (*Theoretical analysis is weak due to the use of existing conclusion from Sicilia et al., 2021*). On the other hand, the authors argue that the problem they address is new (*learn the generalized representation for time series data, which is a new and challenging problem*). As it stands, the AC sees this paper as borderline, leaning towards reject, for the above reasons, even though the evaluations are interesting.
train
[ "R3CHKY-QAdq", "cXxkII3Izu-", "_7XhAmB78VB", "oht40wW6CPt", "ChOxi4-5l30", "bSdkM6VQT-a", "mQcF0Fbi-Nk", "dt6qnQhpxAm", "CyeNkNukkQt", "0sZa9fHYwZ", "W6c-vfkD-g9", "1WsWw0cFwrN", "akXHYjVK_QH", "MQ692XKKRg", "XK2OOQ54ykG", "i4I0mKsfE-6", "tLFN4pCrPn" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper proposed a time series classification method with loss function that simultaneously promotes diverse distribution among different sub-domains within each domain and domain invariant feature representation within the same class. Here the domain is considered in a rather granular manner such as different ...
[ 6, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ 3, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2022_NX0nX7TE4lc", "ChOxi4-5l30", "iclr_2022_NX0nX7TE4lc", "CyeNkNukkQt", "dt6qnQhpxAm", "CyeNkNukkQt", "iclr_2022_NX0nX7TE4lc", "mQcF0Fbi-Nk", "i4I0mKsfE-6", "tLFN4pCrPn", "0sZa9fHYwZ", "mQcF0Fbi-Nk", "mQcF0Fbi-Nk", "mQcF0Fbi-Nk", "R3CHKY-QAdq", "R3CHKY-QAdq", "iclr_2022_NX0nX...
iclr_2022_1DUwCRNAbA
An Investigation into the Role of Author Demographics in ICLR Participation and Review
As machine learning conferences grow rapidly, many are concerned that individuals will be left behind on the basis of traits such as gender and geography. We leverage historic ICLR submissions from 2017 to 2021 to investigate the impact of gender and country of origin both on representation and paper review outcomes at ICLR. We also study various hypotheses that could explain gender representation disparities at ICLR, with a focus on factors that impact the likelihood of an author returning to the conference in consecutive years. Finally, we probe the effects of paper topic on the review process and perform a study on how the inclusion of theorems and the number of co-authors impact the success of papers in the review process.
Reject
In this paper, the authors present an investigation of the impact of demographics on the peer review outcomes of ICLR. This is an important topic, as the demographics of ICLR and similar conferences are seriously skewed and may cause some people to feel excluded. The authors look into this complex problem with extensive manual annotations and analyses. The main weakness of this paper is that it is observational, and while the results are interesting, it is difficult to take away a clear and convincing message for the future. Part of the reason is that the whole problem is quite complex, and the hypotheses that are presented and tested in this paper reveal relatively shallow findings. Compared to the NeurIPS experiments, which are carefully designed, these are not causal (see one of the reviewers' comments), so it is difficult to draw conclusions beyond correlations. In summary, the results are interesting, and despite some of the reviewers' concerns, I would not exclude this paper because of the topic being irrelevant to the CFP, but I think the paper needs a clearer and more convincing message.
train
[ "TnsK-NC8p60", "r0lTbLmkQI7", "XD2hXg_n25", "fqnPrYloXvD", "ey32vD7TqVG", "bjupzDpCdeA", "WFRlZv-qYso", "eouDF4a7b8H", "wCz9YHZk2Nv", "BjSC5UgSor", "vRDSOJbeEqq", "NFu3_i6iutq", "TcojsHF5Gv" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you to the authors for the helpful response, which clarifies some things. I appreciate the changes, which were positive as well. My score remains positive.", " The authors have not responded to my concerns in the \"Rebuttal Response\" post, which are crucial in justifying their methodology and reliabilit...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4, 4 ]
[ "WFRlZv-qYso", "fqnPrYloXvD", "bjupzDpCdeA", "ey32vD7TqVG", "wCz9YHZk2Nv", "TcojsHF5Gv", "NFu3_i6iutq", "vRDSOJbeEqq", "BjSC5UgSor", "iclr_2022_1DUwCRNAbA", "iclr_2022_1DUwCRNAbA", "iclr_2022_1DUwCRNAbA", "iclr_2022_1DUwCRNAbA" ]
iclr_2022_EKjUnoX-7M0
A new look at fairness in stochastic multi-armed bandit problems
We study an important variant of the stochastic multi-armed bandit (MAB) problem, which takes fairness into consideration. Instead of directly maximizing cumulative expected reward, we need to balance the total reward against the fairness level. In this paper, we present a new insight into MAB with fairness and formulate the problem in the penalization framework, where rigorous penalized regret can be well defined and more sophisticated regret analysis is possible. Under such a framework, we propose a hard-threshold UCB-like algorithm, which enjoys many merits including asymptotic fairness, nearly optimal regret, and a better tradeoff between reward and fairness. Both gap-dependent and gap-independent upper bounds have been established. Lower bounds are also given to illustrate the tightness of our theoretical analysis. Numerous experimental results corroborate the theory and show the superiority of our method over other existing methods.
Reject
The paper provides an algorithm for the stochastic multi-armed bandit (MAB) problem in the regime with fairness constraints. It continues a line of work that, at a high level, defines fairness as a requirement to ensure a minimum amount of exploration for every arm. The main concern I found in the reviews regards the definition of fairness in this paper. Although it follows the same high-level narrative as previous works, its exact definition and difference from previous papers are not convincingly motivated, and it seems to be tailored to the proposed algorithm rather than to a real-world fairness constraint. This issue could have been mitigated by a novel or generalizable technique, or insightful experiments, but this does not seem to be the case given the reviewers' comments about the limited novelty and basic experiments.
train
[ "aCgtMqkMAa", "HD1bepxcxRt", "ZBWoemy-W4E", "_agv0ovy8SV", "xCRLRuir7zP", "FFahp7RCIuV", "_vmv3KTYLUV", "YPPFs6ypmip", "qde3-FaNf5P", "vfbC8EFqsVA", "wZJimMRvKie" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer,\n\nThank you for your response to our rebuttal and we are happy to see that there is no other doubt other than the model-independent upper bound, as raised in the original review. \n \nOn the other hand, we can see that you have, after reading other reviews, additional comments regarding our bound ...
[ -1, -1, -1, 5, -1, -1, -1, -1, 5, 5, 6 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, 4, 4, 4 ]
[ "ZBWoemy-W4E", "vfbC8EFqsVA", "YPPFs6ypmip", "iclr_2022_EKjUnoX-7M0", "wZJimMRvKie", "vfbC8EFqsVA", "_agv0ovy8SV", "qde3-FaNf5P", "iclr_2022_EKjUnoX-7M0", "iclr_2022_EKjUnoX-7M0", "iclr_2022_EKjUnoX-7M0" ]
iclr_2022_f-KGT01Qze0
Robustmix: Improving Robustness by Regularizing the Frequency Bias of Deep Nets
Deep networks have achieved impressive results on a range of well curated benchmark datasets. Surprisingly, their performance remains sensitive to perturbations that have little effect on human performance. In this work, we propose a novel extension of Mixup called Robustmix that regularizes networks to classify based on lower-frequency spatial features. We show that this type of regularization improves robustness on a range of benchmarks such as Imagenet-C and Stylized Imagenet. It adds little computational overhead and furthermore does not require a priori knowledge of a large set of image transformations. We find that this approach further complements recent advances in model architecture and data augmentation, attaining a state-of-the-art mCE of 44.8 with an EfficientNet-B8 model and RandAugment, which is a reduction of 16 mCE compared to the baseline.
Reject
The paper proposes a data augmentation approach that extends Mixup with high- and low-pass filtering operations, in order to regularize deep networks towards focusing on low frequency components of the input signal. Reviewers are unconvinced about the significance of the contribution. Reviewer 5zdd notes that the method does not improve over standard Mixup in the absence of corruption error. Reviewer 3E2o notes that "the idea of spectral mixing itself is not particularly novel", and also asks for ablation studies concerning the hyperparameters of the method; the author response unfortunately does not provide enough detail on ablation experiments. The AC agrees with the reviewers and does not believe the author response has addressed weaknesses in a satisfactory manner.
train
[ "WYckuPmhdMk", "1M6od1AvEzN", "7-chtJMkPnx", "2dBJ0RNfQta", "2fu9wywB60J", "koDshO4e1V9", "1zxsnBbxfon" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a new spectral augmentation Robustmix, aiming to improve the robustness of image classifiers. In detail, the presented transformation consists of two preliminary Mixup steps and one final stage mixing low and high frequencies from the images obtained during the two initial steps. The pseudo-lab...
[ 3, -1, -1, -1, -1, 5, 5 ]
[ 5, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2022_f-KGT01Qze0", "7-chtJMkPnx", "WYckuPmhdMk", "koDshO4e1V9", "1zxsnBbxfon", "iclr_2022_f-KGT01Qze0", "iclr_2022_f-KGT01Qze0" ]
iclr_2022_OGbbY4qmir5
Neurally boosted supervised spectral clustering
Network embedding methods compute geometric representations of graphs that render various prediction problems amenable to machine learning techniques. Spectral network embeddings are based on the computation of eigenvectors of a normalized graph Laplacian. When coupled with standard classifiers, spectral embeddings yield strong baseline performance in node classification tasks. Remarkably, it has been recently shown that these `base' classifications followed by a simple `Correction and Smooth' procedure reach state-of-the-art performance on widely used benchmarks. All these recent works employ classifiers that are agnostic to the nature of the underlying embedding. We present simple neural models that leverage fundamental geometric properties of spectral embeddings and obtain significantly improved classification accuracy over commonly used standard classifiers. Our results are based on a specific variant of spectral clustering that is not well known, but it is presently the only variant known to have analyzable theoretical properties. We provide a \texttt{PyTorch} implementation of our classifier along with code for the fast computation of spectral embeddings.
Reject
There was some discussion on this paper, both with the authors and between reviewers. On the one hand, there is general agreement that the empirical results, which suggest that a spectral-clustering-based method can be competitive with SOTA methods on node classification benchmarks, are interesting. On the other hand, reviewers did not find a significantly novel contribution in the methodology proposed, and found that the empirical evaluation lacks the depth and detail needed to be really informative (e.g., to understand why some methods work or not on some benchmarks). There is therefore a consensus that the paper is not ready for ICLR in its current form, but we hope that the reviews and discussion will help the authors prepare a revised version in the future.
train
[ "oTP5nxKDn32Y", "w-1kmZGOE20", "AFiUQf9FTNK", "txDpHXqNf8H", "KPNdS-gafvn", "l4wmbQ1qLBA" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The reviewer raised a concern that the paper cannot be considered a new method paper. But the entire Algorithm 1 is a *new* method that has not been proposed or published before. For example, the Conic and Spherical classification surfaces, the multiple channels, and the specially designed reduce layers, along wi...
[ -1, 8, 5, 3, 3, 5 ]
[ -1, 4, 3, 4, 4, 4 ]
[ "KPNdS-gafvn", "iclr_2022_OGbbY4qmir5", "iclr_2022_OGbbY4qmir5", "iclr_2022_OGbbY4qmir5", "iclr_2022_OGbbY4qmir5", "iclr_2022_OGbbY4qmir5" ]
iclr_2022_sTkY-RVYBz
Counterbalancing Teacher: Regularizing Batch Normalized Models for Robustness
Batch normalization (BN) is a ubiquitous technique for training deep neural networks that accelerates their convergence to reach higher accuracy. However, we demonstrate that BN comes with a fundamental drawback: it incentivizes the model to rely on frequent low-variance features that are highly specific to the training (in-domain) data, and thus fails to generalize to out-of-domain examples. In this work, we investigate this phenomenon by first showing that removing BN layers across a wide range of architectures leads to lower out-of-domain and corruption errors at the cost of higher in-domain error. We then propose the Counterbalancing Teacher (CT) method, which leverages a frozen copy of the same model without BN as a teacher to enforce the student network's learning of robust representations by substantially adapting its weights through a consistency loss function. This regularization signal helps CT perform well under unforeseen data shifts, even without information from the target domain as in prior works. We theoretically show in an overparameterized linear regression setting why normalization leads to a model's reliance on such in-domain features, and empirically demonstrate the efficacy of CT by outperforming several methods on standard robustness benchmark datasets such as CIFAR-10-C, CIFAR-100-C, and VLCS.
Reject
The authors have proposed a new consistency loss for improving model robustness to common corruptions. With a student-teacher training setup, only the student network uses batch normalization at training time. Improvements are shown on small-scale corruption datasets (CIFAR-C), a single-domain generalization dataset (VLCS), and RobustPointSet. Though positive feedback was given on the quality of the storytelling and on an interesting motivation via a few toy examples, some concerns remained among the reviewers. In particular, the applicability of the method as model and data sizes increase, e.g., on ImageNet-C, was questioned. After additional results were provided by the authors, the method seems to break as scale increases. The treatment of relevant baselines from previous work was also judged light and should be improved. Hence, the paper could be improved to include more comparisons and to more convincingly show the advantages of the method.
train
[ "L3S4GVCfxr", "bHNmrNU8Ybl", "rKnDNBkaXyt", "LmyD3YpOmmp", "SzTfaF3weNB", "WeEq7Y6R87R", "5MgTiD9OxQW", "yY3-bHM2W5", "ZaqiExKHfKV", "EV2PDvRF09l", "D_tb2--sYSX", "CfV0qSe4WNa", "wBHMHQYIBRd", "U_x9Yw4t_zZ", "CtIZCLACYMO", "GZIFJvhArRb", "s1Ksm8LuJ-", "BVgnbfVlljX", "7DdMMzL3ADH"...
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", ...
[ " - The modified method section is pasted below. We kept $\\theta$ and $\\zeta$ in the method section for consistency because we used them in section 2.1. Please let us know if you want us to make more changes.\n\n- We did not use any initialization techniques etc. when training the teacher network on ImageNet. We ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 2 ]
[ "bHNmrNU8Ybl", "OmKo0MgUYQJ", "SzTfaF3weNB", "d997G4VBx5L", "5MgTiD9OxQW", "7DdMMzL3ADH", "x9gl4pnJ3D", "d997G4VBx5L", "x9gl4pnJ3D", "XNoNHKp44ZK", "CfV0qSe4WNa", "s1Ksm8LuJ-", "U_x9Yw4t_zZ", "CtIZCLACYMO", "GZIFJvhArRb", "OmKo0MgUYQJ", "Qbn_NekPyfH", "FxZ_qDK1Mus", "x9gl4pnJ3D",...
iclr_2022_-u8EliRNW8k
Speech-MLP: a simple MLP architecture for speech processing
Overparameterized transformer-based architectures have shown remarkable performance in recent years, achieving state-of-the-art results in speech processing tasks such as speech recognition, speech synthesis, keyword spotting, and speech enhancement, among others. The main assumption is that with the underlying self-attention mechanism, transformers can ultimately capture long-range temporal dependencies in speech signals. In this paper, we propose a multi-layer perceptron (MLP) architecture, namely speech-MLP, useful for extracting information from speech signals. The model splits feature channels into non-overlapping chunks and processes each chunk individually. The processed chunks are then merged together and processed to consolidate the output. By setting different numbers of chunks and focusing on different contextual window sizes, speech-MLP learns multiscale local temporal dependencies. The proposed model is successfully evaluated on two tasks: keyword spotting and speech enhancement. In our experiments, we use two benchmark datasets for keyword spotting (Google speech command V2-35 and LibriWords) and the VoiceBank dataset for the speech enhancement task. In all experiments, speech-MLP surpassed transformer-based solutions, achieving state-of-the-art performance with fewer parameters and simpler training schemes. Such results indicate that oftentimes more complex models such as transformers are not necessary for speech processing tasks. Hence, they should not be considered the first option, as simpler and more compact models can offer optimal performance.
Reject
This paper proposes an MLP-based neural network specifically designed for speech processing. The proposed Split & Glue layer is used to capture multi-resolution speech characteristics. The method achieved better performance in both command recognition and speech enhancement tasks. Two major concerns were raised by the reviewers. First, the proposed Split & Glue layer is similar to convolution. Although the authors revised the paper with more clarification on the differences, the operation is equivalent to frame-wise convolution, which has been explored in the speech literature. This limits the novelty of the paper. Second, the experimental justification is relatively simple and limited. On the voice command and speech enhancement tasks presented in the paper, stronger and better baselines would be more convincing in justifying the benefit of the proposed method. Moreover, testing on large-scale ASR tasks instead of the relatively simple voice command task would be more convincing. The decision is mainly based on the limited novelty and experimental justification.
train
[ "L5E73rjr72n", "yIK4C6ajii8", "kfmG0Hstzi", "Wxi1iWSIGmk", "aDUHkU_rh40", "wy-4Ax83_OL", "IRXTmACd155", "2LgzvGSzuaL", "pV8H-_Tj1DM", "tQPONVMTEYP", "nCzqg2hemeJ", "-Lz-IpCfT0-", "ogt8LGxRc8B", "RUHSSCEfUTu", "M6sDPl8vA4Q", "WbbNUTBO24A", "x0lt596G-QC" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewers,\n\nThe authors have made detailed responses to all the reviews. Please take a look and see whether they address your concerns and update the ratings if necessary. Thanks for your help and expertise!", " Thank you for your response. The need to use similar model sizes as other works is very clear...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 8, 5, 3, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 4, 4 ]
[ "iclr_2022_-u8EliRNW8k", "tQPONVMTEYP", "IRXTmACd155", "2LgzvGSzuaL", "pV8H-_Tj1DM", "IRXTmACd155", "x0lt596G-QC", "WbbNUTBO24A", "M6sDPl8vA4Q", "RUHSSCEfUTu", "ogt8LGxRc8B", "iclr_2022_-u8EliRNW8k", "iclr_2022_-u8EliRNW8k", "iclr_2022_-u8EliRNW8k", "iclr_2022_-u8EliRNW8k", "iclr_2022_...
iclr_2022_rF5UoZFrsF4
VUT: Versatile UI Transformer for Multimodal Multi-Task User Interface Modeling
User interface modeling is inherently multimodal, involving several distinct types of data: images, structures and language. The tasks are also diverse, including object detection, language generation and grounding. In this paper, we present VUT, a Versatile UI Transformer that takes multimodal input and simultaneously accomplishes 5 distinct tasks with the same model. Our model consists of a multimodal Transformer encoder that jointly encodes UI images and structures, and performs UI object detection when the UI structures are absent in the input. Our model also includes an auto-regressive Transformer model that encodes the language input and decodes output, for both question-answering and command grounding with respect to the UI. Our experiments show that for most of the tasks, when trained jointly on multiple tasks, VUT achieves accuracy either on par with or exceeding the accuracy when the model is trained for individual tasks separately.
Reject
This paper proposes a neural architecture for tasks involving user interfaces. Tasks involve detecting objects on screen, writing captions about UI components, attribute recognition, etc. The reviewers for this submission found the proposed model to be reasonable and effective. They also found the paper to be well written and easy to understand. However, they did have one major concern, before and after rebuttal: while the model and design choices were reasonable, they questioned whether the insights gained from this paper were of interest and relevance to the broader vision community. They also had other concerns/suggestions, including adding inference costs and adding more detail, which were addressed in the rebuttal. Another concern was the fact that multi-task training did not provide large gains over single-task training. I agree with the authors in this regard. I think the goal here was to produce a multi-task model that attained at least parity with single-task training, because a single model would provide large benefits when running on a device, and hence I think this concern was well addressed. My takeaway from the paper, reviews and discussion, however, continues to focus on the major concern of the reviewers. I think this paper would have benefited from answering at least one (if not more) of the following questions for the reader: (1) Why should the broader community work on this task? (2) If this task is of limited interest, are there, instead, aspects of this task that serve as a useful testbed for multimodal research? (3) If the task and testbed are not directly applicable, are there new techniques developed in this paper that are broadly applicable to other problems or domains? Unfortunately, I think that this paper does not presently address any of these questions strongly for the reader. The paper proposes a method for their task, but readers who aren't directly interested in that end task may find this submission less interesting in terms of insights for their own work. Given the above, I encourage the authors to address this concern and resubmit. I recommend rejection.
train
[ "1E-QONO6Og", "4mAM7Lr2ilf", "Z-psPXn_vXD", "NFReKEKQThf", "7A5_8GHgU0v", "zz3034Bw4H", "hNn9VfKmF9r", "7_iPaRPN95h" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposes an architecture for graphical user interfaces which involve multi-modal inputs (UI screenshots, Hierarchy structures, Natural Language) and multi-task learning (UI Object Detection, Widget Captioning, Screen summarization, Language grounding, and Tappability).The proposed architecture consists o...
[ 5, -1, -1, -1, -1, -1, 5, 5 ]
[ 4, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2022_rF5UoZFrsF4", "Z-psPXn_vXD", "zz3034Bw4H", "1E-QONO6Og", "7_iPaRPN95h", "hNn9VfKmF9r", "iclr_2022_rF5UoZFrsF4", "iclr_2022_rF5UoZFrsF4" ]
iclr_2022_6EVxJKlpGR
Surprise Minimizing Multi-Agent Learning with Energy-based Models
Multi-Agent Reinforcement Learning (MARL) has demonstrated significant success by virtue of collaboration across agents. Recent work, on the other hand, introduces surprise, which quantifies the degree of change in an agent's environment. Surprise-based learning has received significant attention in the case of single-agent entropic settings but remains an open problem for fast-paced dynamics in multi-agent scenarios. A potential alternative to address surprise may be realized through the lens of free-energy minimization. We explore surprise minimization in multi-agent learning by utilizing the free energy across all agents in a multi-agent system. A temporal Energy-Based Model (EBM) represents an estimate of surprise, which is minimized over the joint agent distribution. Our formulation of the EBM is theoretically akin to the minimum conjugate entropy objective and highlights suitable convergence towards minimally surprising states. We further validate our theoretical claims in an empirical study of multi-agent tasks demanding collaboration in the presence of fast-paced dynamics.
Reject
The paper explores surprise minimization in multi-agent learning by using free energy across all agents in a multi-agent system. A temporal EBM represents an estimate of surprise, which is minimized over the joint agent distribution. Empirical studies on the proposed method are conducted. This paper builds on an interesting direction around surprise minimization in multi-agent learning using the energy-based framework, but the presentation of the method needs further improvement to avoid confusion. The discussion between the authors and reviewers is summarized below. The major concerns of Reviewer doix include the following: (i) the empirical results are not compelling, (ii) qualitative results are missing, and (iii) the motivation of surprise minimization in multi-agent RL is unclear. After the rebuttal, the authors addressed the concerns of Reviewer doix, who changed his/her score from 5 to 6. The major concern of Reviewer zFND comes from the understanding and justification of the paper. After the rebuttal, the concerns of Reviewer zFND were partially addressed by clarification on what measures of surprise can be used and how these would be estimated. Reviewer zFND eventually changed his/her score from 5 to 6. Also, most of the concerns about theory and experiments from Reviewer 8Wiu were addressed after the rebuttal. Reviewer 8Wiu accordingly changed the rating from 5 to 6. Reviewer g9cM is still not satisfied with the authors' answers; his/her concerns regarding some technical issues remain, and the reviewer points out that the current paper has many inconsistencies across the writing that make it hard to evaluate the soundness and correctness of the results. After the rebuttal, the authors successfully addressed most of the concerns from 3 of 4 reviewers, but the overall rating of the paper is at a borderline level. Given that the paper still has some unaddressed concerns from Reviewer g9cM and the other reviewers do not champion the paper, the AC tends to recommend rejecting the paper at the current stage. The AC urges the authors to improve their paper by incorporating all the suggestions provided by the reviewers, and then resubmit it to a future venue.
train
[ "z8X_PQ_ZMQ7", "GsZUb1JdWmF", "srvYnHbtUMM", "Gn-ZL2BA913", "ZLBl7HdptYT", "Gsv2tgnUdTY", "JvFefbz2CRl", "3wxjIidP7Je", "DaUrxoQCaAp", "Hq7cW6lNL-D", "49MFWw6P4Bp", "DTnuz0IDVi", "-xlvbMZyTN", "2c349s5jnSW", "nYIVZwwQvoN", "CsMbpQbqtKe", "ZDBpTHGTBHx", "0hoxa17-1oR", "OLt0aVSu2xZ...
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", ...
[ " Thank you! We are happy to know that your concerns were suitably addressed.", "The authors present a method to regularise the learning of Q values within Decentralised partially observable markov decision processes, where the regulariser is one that minimises surprise in some way across the population of agents...
[ -1, 6, -1, -1, -1, 6, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ -1, 1, -1, -1, -1, 3, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "srvYnHbtUMM", "iclr_2022_6EVxJKlpGR", "DIp4OgvuSed", "OLt0aVSu2xZ", "JvFefbz2CRl", "iclr_2022_6EVxJKlpGR", "-xlvbMZyTN", "iclr_2022_6EVxJKlpGR", "iclr_2022_6EVxJKlpGR", "DTnuz0IDVi", "iclr_2022_6EVxJKlpGR", "2w3auRTadtS", "2c349s5jnSW", "SNnyKyiDVSE", "49MFWw6P4Bp", "GsZUb1JdWmF", "...
iclr_2022_3FvF1db-bKT
Local Augmentation for Graph Neural Networks
Data augmentation has been widely used for image data and linguistic data but remains under-explored on graph-structured data. Existing methods focus on augmenting the graph data from a global perspective and largely fall into two genres: structural manipulation and adversarial training with feature noise injection. However, the structural manipulation approach suffers from information loss, while the adversarial training approach may downgrade the feature quality by injecting noise. In this work, we introduce local augmentation, which enhances node features based on their local subgraph structures. Specifically, we model the data augmentation as a feature generation process. Given the central node's feature, our local augmentation approach learns the conditional distribution of its neighbors' features and generates the neighbors' optimal features to boost the performance of downstream tasks. Based on the local augmentation, we further design a novel framework: LA-GNN, which can be applied to any GNN model in a plug-and-play manner. Extensive experiments and analyses show that local augmentation consistently yields performance improvements for various GNN architectures across a diverse set of benchmarks.
Reject
The paper presents a new algorithm for data augmentation in graph neural networks. The algorithm works by learning a conditional model of a node's neighbor features and augmenting the neighborhood representation using the generative model. In response to the reviews, the authors provided long answers and clarified much of the text. Nonetheless, after the discussion, two main concerns remained. First, the presentation still felt subpar, too notationally heavy for what was presented. Second, the gains with respect to the baselines were assessed as not sufficiently significant to justify the approach, which is substantially more complex than a baseline such as GRAND.
train
[ "LbEQymkBtXR", "yqxjR8ct_ug", "cING_qR9a5u", "fGgrMVq2APX", "lEE8KKK0Eh", "Or7O5oqIvqC", "QwocK3S0W9R", "IdnKfAWWtB", "0c9xDezNK69", "Py1xpVJb2yA", "80NurWzJtrf", "j1kKPhuIQMT", "2NhaVxnoAxU", "n2y0PnHpyi", "5t8PXvhhJDt", "wYWJ3vlP7b", "26g2vnE7ZC2", "T89nD0PeV69", "IwkG8p2WN_F",...
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer"...
[ " **After carefully reading all the reviews, the authors' responses, and corresponding revisions, I think the quality of this paper is better than the original submission. However, as their presentation is still not clear enough to me and some points raised by other reviewers, e.g. computation complexity vs perform...
[ -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 3 ]
[ -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 4 ]
[ "cING_qR9a5u", "iclr_2022_3FvF1db-bKT", "5t8PXvhhJDt", "lEE8KKK0Eh", "26g2vnE7ZC2", "QwocK3S0W9R", "2NhaVxnoAxU", "0c9xDezNK69", "n2y0PnHpyi", "iclr_2022_3FvF1db-bKT", "iclr_2022_3FvF1db-bKT", "iclr_2022_3FvF1db-bKT", "VrUCkaAcLs7", "BuTqPGUa9v5", "wYWJ3vlP7b", "yqxjR8ct_ug", "NXTLYR...
iclr_2022_4pijrj4H_B
Fair Node Representation Learning via Adaptive Data Augmentation
Node representation learning has demonstrated its efficacy for various applications on graphs, which leads to increasing attention towards the area. However, fairness is a largely under-explored territory within the field, which may lead to biased results towards underrepresented groups in ensuing tasks. To this end, this work theoretically explains the sources of bias in node representations obtained via Graph Neural Networks (GNNs). Our analysis reveals that both nodal features and graph structure lead to bias in the obtained representations. Building upon the analysis, fairness-aware data augmentation frameworks on nodal features and graph structure are developed to reduce the intrinsic bias. Our analysis and proposed schemes can be readily employed to enhance the fairness of various GNN-based learning mechanisms. Extensive experiments on node classification and link prediction are carried out over real networks in the context of graph contrastive learning. Comparison with multiple benchmarks demonstrates that the proposed augmentation strategies can improve fairness in terms of statistical parity and equal opportunity, while providing comparable utility to state-of-the-art contrastive methods.
Reject
This paper studies augmentation-based methods to improve GNN fairness. Specifically, based on upper bounds, they propose augmentation tricks to reduce such bounds, empirically validated on benchmark datasets. Before the rebuttal, there was a negative consensus that the evaluation results are inconclusive and important state-of-the-art methods are not discussed. The authors significantly revised the paper to address the concerns of some reviewers (gszb and LHQM), though some concerns still remain that the scheme is rather ad-hoc. Meanwhile, reviewer xMz4 did not find the rebuttal sufficient, as some valid comments were not fully discussed. First, datasets where sensitive attributes are inherent attributes of instances are more suitable for evaluation, which has not been properly addressed by the authors. Second, important baselines were mentioned by xMz4, but they were only mentioned briefly in the new related-work section of the revised paper. For example, (Agarwal et al., 2021) is mentioned as dealing with counterfactual fairness only, but the statistical parity studied in this paper is highly related to this concept. Similarly, this work can be viewed as an ad-hoc extension of GCA for fairness, without in-depth discussions to compare/contrast with these works and to better highlight the novelty/distinction. Summing up the reviewer discussions, we conclude this paper is not yet ready for publication at ICLR.
train
[ "qP4TjpISXVN", "GEoUvf20db", "Q4shs9Cxrd8", "1y5Hag0xWea", "_T4smLCO4eF", "PxCH1A9qXK3", "O8uIq4UccM", "NRjyRsTwq8Y", "7IFHlu6Nz2", "x65z_7vxo8N", "E6HDmxw5pD4", "UL8VL9U-onC", "M2N9lmC0mD", "bfnt4nqipLS", "h4XJPQ6XYnC", "d7cLpfLitaF", "Ayrttzl4fr" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the Reviewer for the comments. Below, we responded to the Reviewer’s concerns in our point-to-point replies. \n\n***\n\n__Q-1a:__ In the Introduction, the motivation is not clear. Why the \"augmentation method\" can well address this problem? It seems that the authors find that \"augmentation has not bee...
[ -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 3 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 5 ]
[ "Ayrttzl4fr", "iclr_2022_4pijrj4H_B", "Ayrttzl4fr", "Ayrttzl4fr", "Ayrttzl4fr", "Ayrttzl4fr", "d7cLpfLitaF", "d7cLpfLitaF", "h4XJPQ6XYnC", "GEoUvf20db", "GEoUvf20db", "GEoUvf20db", "GEoUvf20db", "iclr_2022_4pijrj4H_B", "iclr_2022_4pijrj4H_B", "iclr_2022_4pijrj4H_B", "iclr_2022_4pijrj...
iclr_2022_qESp3gXBm2g
TRAKR – A reservoir-based tool for fast and accurate classification of neural time-series patterns
Neuroscience has seen a dramatic increase in the types of recording modalities and complexity of neural time-series data collected from them. The brain is a highly recurrent system producing rich, complex dynamics that result in different behaviors. Correctly distinguishing such nonlinear neural time series in real time, especially those with non-obvious links to behavior, could be useful for a wide variety of applications. These include detecting anomalous clinical events such as seizures in epilepsy, and identifying optimal control spaces for brain-machine interfaces. It remains challenging to correctly distinguish nonlinear time-series patterns because of the high intrinsic dimensionality of such data, making accurate inference of state changes (for intervention or control) difficult. Simple distance metrics, which can be computed quickly, do not yield accurate classifications. On the other end of the spectrum of classification methods, ensembles of classifiers or deep supervised tools offer higher accuracy but are slow, data-intensive, and computationally expensive. We introduce a reservoir-based tool, state tracker (TRAKR), which offers the high accuracy of ensembles or deep supervised methods while preserving the computational benefits of simple distance metrics. After one-shot training, TRAKR can accurately, and in real time, detect deviations in test patterns. By forcing the weighted dynamics of the reservoir to fit a desired pattern directly, we avoid many rounds of expensive optimization. Then, keeping the output weights frozen, we use the error signal generated by the reservoir in response to a particular test pattern as a classification boundary. We show that, using this approach, TRAKR accurately detects changes in synthetic time series. We then compare our tool to several others, showing that it achieves the highest classification performance on a benchmark dataset, sequential MNIST, even when corrupted by noise. Additionally, we apply TRAKR to electrocorticography (ECoG) data from the macaque orbitofrontal cortex (OFC), a higher-order brain region involved in encoding the value of expected outcomes. We show that TRAKR can classify different behaviorally relevant epochs in the neural time series more accurately and efficiently than conventional approaches. Therefore, TRAKR can be used as a fast and accurate tool to distinguish patterns in complex nonlinear time-series data, such as neural recordings.
Reject
The authors propose an autoencoding echo state machine for a one-shot one-class time series classification task. Their approach feeds a (one-dimensional) error signal over time, relative to a reference training datum, to SVMs. Training is very fast by design. OFC signal analysis has practical value in neuroscience. But only one benchmark (seq-MNIST) was used to evaluate their method. While the performance seems impressive, no explanation was provided of why the internal representation learned by the proposed system is superior and robust to noise. No sequential autoencoders or latent neural trajectory inference methods were compared. Although the manuscript has greatly improved through the review-rebuttal process, key details are missing (e.g., the length of E(t) used for classification, which matters for real-time application; the initial state of the reservoir; and the choice of W_in, which matters since the network appears to be chaotic and driven by strong input). While there is novelty in the approach, there is a general lack of enthusiasm among the reviewers for the manuscript as is. The reviewers and AC strongly encourage the authors to further develop these ideas and add thorough analyses for another conference. (BTW, perhaps it's worth citing https://doi.org/10.1109/IJCNN.2016.7727309, since autoencoder combined with reservoir computing has been used for anomaly detection.)
train
[ "lPTpTgNRfmO", "uhNijOVnI74", "9hIU2wVT5TB", "O4W_DLpUbtg", "Mj_aod35W8y", "AbxjhMG_5ax", "ByPz_Jt4Cnd", "IVZOTXk9NH0", "G53A5OnceQ", "BlMdmhtDViA", "1w_PmO85JO", "q36hsxW3S5R", "_IM3qrfu2a-", "O0ZbdACFni5" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author" ]
[ " The authors have clearly addressed all of my concerns, so I am increasing my score from 5 to 6. \n\nAgreed with the other reviewer that the new benchmarks are a nice addition to the paper, and I think the comparisons the authors make are a fair and make a good case for the utility of TRAKR. \n\nLooking forward t...
[ -1, 6, -1, 6, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 3, -1, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "AbxjhMG_5ax", "iclr_2022_qESp3gXBm2g", "q36hsxW3S5R", "iclr_2022_qESp3gXBm2g", "iclr_2022_qESp3gXBm2g", "uhNijOVnI74", "q36hsxW3S5R", "Mj_aod35W8y", "O4W_DLpUbtg", "uhNijOVnI74", "ByPz_Jt4Cnd", "O4W_DLpUbtg", "O0ZbdACFni5", "Mj_aod35W8y" ]
iclr_2022_BKmoW5K4sS
On Adversarial Bias and the Robustness of Fair Machine Learning
Optimizing prediction accuracy can come at the expense of fairness. Towards minimizing discrimination against a group, fair machine learning algorithms strive to equalize the error of a model across different groups, through imposing fairness constraints on the learning algorithm. But, are decisions made by fair models trustworthy? How sensitive are fair models to changes in their training data? By giving equal importance to groups of different sizes and distributions in the training set, we show that fair models become more fragile to outliers. We study the trade-off between fairness and robustness, by analyzing the adversarial (worst-case) bias against group fairness in machine learning and by comparing it with the effect of similar adversarial manipulations on regular models. We show that the adversarial bias introduced in training data, via the sampling or labeling processes, can significantly reduce the test accuracy on fair models, compared with regular models. Our results demonstrate that adversarial bias can also worsen a model's fairness gap on test data, even though the model satisfies the fairness constraint on training data. We analyze the robustness of multiple fair machine learning algorithms that satisfy equalized odds (and equal opportunity) notion of fairness.
Reject
The authors consider the impact of designing fair algorithms on adversarial robustness. The particular focus is on poisoning attacks. They show experimentally that, for some datasets and models/algorithms, using "fair" algorithms increases adversarial vulnerability compared to the standard training procedure (which ignores fairness criteria). The reviews raised questions about whether the experimental results are extensive enough, and I share their concerns. Most importantly, the authors have not addressed the question regarding the quality of the approximation at all.
train
[ "rr4JTNqaJa4", "3uM4pGwow9", "d95K4vfAHwP", "zF4sldtFsRi" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Intuitively, the same amount of data poisoning will have a larger impact when the learner is solving a constrained optimization problem than an unconstrained one. This paper proposed an attack algorithm specifically designed for such fair learners and showed that fair learning algorithms are more vulnerable to ad...
[ 3, 3, 8, 5 ]
[ 4, 5, 4, 3 ]
[ "iclr_2022_BKmoW5K4sS", "iclr_2022_BKmoW5K4sS", "iclr_2022_BKmoW5K4sS", "iclr_2022_BKmoW5K4sS" ]
iclr_2022_CoMOKHYWf2
AdaFocal: Calibration-aware Adaptive Focal Loss
Much recent work has been devoted to the problem of ensuring that a neural network's confidence scores match the true probability of being correct, i.e. the calibration problem. Of note, it was found that training with Focal loss leads to better calibrated deep networks than cross-entropy loss, while achieving the same level of accuracy \cite{mukhoti2020}. This success stems from Focal loss regularizing the entropy of the network's prediction (controlled by the hyper-parameter $\gamma$), thereby reining in the network's overconfidence. Further improvements in calibration can be achieved if $\gamma$ is selected independently for each training sample. However, the proposed strategy (named FLSD-53) is based on simple heuristics which, when selecting the $\gamma$, do not take into account any knowledge of whether the network is under- or over-confident about such samples and by how much. As a result, in most cases, this strategy performs only slightly better. In this paper, we propose a calibration-aware sample-dependent Focal loss called AdaFocal that adaptively modifies $\gamma$ from one training step to the next based on the information about the network's current calibration behaviour. At each training step $t$, AdaFocal adjusts the $\gamma_t$ based on (1) $\gamma_{t-1}$ of the previous training step and (2) the magnitude of the network's under/over-confidence. We evaluate our proposed method on various image recognition and NLP tasks, covering a variety of network architectures, and confirm that AdaFocal consistently achieves significantly better calibration than the competing state-of-the-art methods without loss of accuracy.
Reject
The work AdaFocal proposes an approach to tune Focal Loss' $\gamma$ hyperparameter to improve the model's calibration, particularly to avoid the occasional underconfidence when using focal loss. This tuning is done not as a learned constant hyperparameter across training but as one that evolves over training. The work is both well-motivated and well-written. However, multiple reviewers share the concern (which I agree with) that the method fails on ImageNet experiments, and the method often fails to beat even temperature scaling. The experimental comparison arguing that the approach improves upon many methods pre-temperature scaling is unfair, as no other method leverages the validation set. This makes for a fairly deceptive sleight of hand if not read carefully. When compared to temperature scaling, which does use the validation set, the performance improvement largely disappears. I recommend the authors use the reviewers' feedback to enhance their preprint should they aim to submit to a later venue. In particular, improve the clarity around the experimental validation.
train
[ "jB_QroNPjzY", "ev7Gyc6G-Np", "91oyMH54nrj", "BRNXJ4SHFbZ", "Du5YEOM7rmO", "IKqScRT_Nm5", "zSwJ10Uh5x8", "sPI-2OiccPL", "Sq2iMaP7tPp", "YOreTsQBOWd", "DWBV2qM4erD", "Nlbw_7zzxip", "gZs66UAHARN", "s6tcTh75EV", "rQ2yLbceCka", "AyCq_LSR0H", "D-X2sRVbLav" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for raising the discussion as we would like to take this opportunity to clarify that in the paper we have restrained ourselves from talking about the difference $C_{train,i} - C_{val,i}$ (which would be a strong statement) and have focused only on the somewhat loose correspondence of - if w...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5 ]
[ -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "ev7Gyc6G-Np", "YOreTsQBOWd", "iclr_2022_CoMOKHYWf2", "Du5YEOM7rmO", "IKqScRT_Nm5", "zSwJ10Uh5x8", "sPI-2OiccPL", "91oyMH54nrj", "YOreTsQBOWd", "D-X2sRVbLav", "Nlbw_7zzxip", "AyCq_LSR0H", "rQ2yLbceCka", "iclr_2022_CoMOKHYWf2", "iclr_2022_CoMOKHYWf2", "iclr_2022_CoMOKHYWf2", "iclr_202...
iclr_2022_D9SuLzhgK9
Adam is no better than normalized SGD: Dissecting how adaptivity improves GAN performance
Adaptive methods are widely used for training generative adversarial networks (GAN). While there has been some work to pinpoint the marginal value of adaptive methods in minimization problems, it remains unclear why they are still the methods of choice for GAN training. This paper formally studies how adaptive methods help performance in GANs. First, we dissect Adam---the most popular adaptive method for GAN training---by comparing with SGDA the direction and the norm of its update vector. We empirically show that SGDA with the same vector norm as Adam reaches similar or even better performance than the latter. This empirical study encourages us to consider normalized stochastic gradient descent ascent (nSGDA) as a simpler alternative to Adam. We then propose a synthetic theoretical framework to understand why nSGDA yields better performance than SGDA for GANs. In that situation, we prove that a GAN trained with nSGDA provably recovers all the modes of the true distribution. In contrast, the same networks trained with SGDA (and any learning rate configuration) suffer from mode collapse. The critical insight in our analysis is that normalizing the gradients forces the discriminator and generator to update at the same pace. We empirically show the competitive performance of nSGDA on real-world datasets.
Reject
The paper studies how adaptive methods help train GANs to achieve better FID scores. It empirically shows that the adaptive magnitude in ADAM is the reason for ADAM's wide adoption for GAN training. The paper received three reviews: one ranked it "accept, good paper" and two ranked it "marginally below the acceptance threshold". The supportive reviewer finds the results interesting but notes that the paper does not provide enough explanation of their significance. On the other hand, the negative reviewers raise several concerns, including that the GAN architectures used in the paper are outdated and that the achieved performance gain is not major. As the paper focuses on performance instead of convergence, the meta-reviewer feels it would be better to include results on SOTA GAN architectures. The provided rebuttal does not lead to any review score change. Consolidating the reviews and rebuttal, the meta-reviewer feels the paper needs to be improved to meet the bar and does not recommend its acceptance.
train
[ "ZpNbCBCIgVv", "NrHQ2N4QEfy", "UU_dpyXduAF", "xUUk1wFacQ6", "Ybk9huRYFD5", "ZTnTH903dEk", "gniKIURmJXi", "8nkCw1Sm2rR" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper studies how adaptive methods help performance in GANs. The study empirically finds that SGDA with the same vector norm as Adam reaches similar better performance. Based on this observation, normalized SGDA (nSGDA) is proposed as a simpler alternative to Adam. nSGDA is evaluated on several datasets and t...
[ 5, -1, -1, -1, -1, -1, 5, 8 ]
[ 4, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2022_D9SuLzhgK9", "UU_dpyXduAF", "ZTnTH903dEk", "ZpNbCBCIgVv", "8nkCw1Sm2rR", "gniKIURmJXi", "iclr_2022_D9SuLzhgK9", "iclr_2022_D9SuLzhgK9" ]
iclr_2022_bPadTQyLb2_
Efficient representations for privacy-preserving inference
Deep neural networks have a wide range of applications across multiple domains such as computer vision and medicine. In many cases, the input of a model at inference time can consist of sensitive user data, which raises questions concerning the levels of privacy and trust guaranteed by such services. Much existing work has leveraged homomorphic encryption (HE) schemes that enable computation on encrypted data to achieve private inference for multi-layer perceptrons and CNNs. An early work along this direction was CryptoNets, which takes 250 seconds for one MNIST inference. The main limitation of such approaches is their computational cost, which is due to the costly nature of the NTT (number theoretic transform) operations that constitute HE operations. Others have proposed the use of model pruning and efficient data representations to reduce the number of HE operations required. In this paper, we focus on improving upon existing work by proposing changes to the representations of intermediate tensors during CNN inference. We construct and evaluate private CNNs on the MNIST and CIFAR-10 datasets, and achieve over a two-fold reduction in the number of operations used for inferences of the CryptoNets architecture.
Reject
This paper improves on the efficiency of prior work that uses homomorphic encryption to perform privacy-preserving inference. There are two main concerns raised by the reviewers. First, multiple reviewers (and I) found this paper difficult to read. Multiple pieces of the problem are not clearly presented, especially with respect to the technical contributions. This was fixed in part in the rebuttal but more could still be done here. But more importantly, three reviewers raise concerns about the evaluation methodology, especially with respect to comparisons to prior work. On top of this, there are valid criticisms raised by the reviewers about whether the contribution here is that significant when compared to prior work. (This is something that both clearer writing and more careful experiments could help address.) Taken together, I do not believe this paper is yet ready for publication.
train
[ "RsivDpgt48a", "3u7tuTaOWQV", "uHBsUDbBzAK", "mhclv106jbP", "bzg4Xgj75QB", "X8EBYAzXEgS", "Asb0MbJkOIv", "o1iecjhNZRB", "SBUL7MZ3XZY", "_nrj7rU_yip", "npqW3QvqHF", "FS3kWN7c_zM", "PFknTMObSzH", "VZngbzVju2V" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I have read the authors response and stand by my recommendation.\n\n", " Dear Reviewer,\n\nWe would again like to thankyou for your time in reviewing our manuscript and the useful feedback. We would like to highlight that the discussion period is coming to an end and were hoping you might be able to respond to ...
[ -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, 3, 6, 8 ]
[ -1, -1, -1, -1, 2, -1, -1, -1, -1, -1, -1, 5, 3, 3 ]
[ "SBUL7MZ3XZY", "FS3kWN7c_zM", "iclr_2022_bPadTQyLb2_", "X8EBYAzXEgS", "iclr_2022_bPadTQyLb2_", "npqW3QvqHF", "iclr_2022_bPadTQyLb2_", "VZngbzVju2V", "PFknTMObSzH", "FS3kWN7c_zM", "bzg4Xgj75QB", "iclr_2022_bPadTQyLb2_", "iclr_2022_bPadTQyLb2_", "iclr_2022_bPadTQyLb2_" ]
iclr_2022_EFgzhSJYIj6
RL-DARTS: Differentiable Architecture Search for Reinforcement Learning
Recently, Differentiable Architecture Search (DARTS) has become one of the most popular Neural Architecture Search (NAS) methods successfully applied in supervised learning (SL). However, its applications in other domains, in particular for reinforcement learning (RL), have seldom been studied. This is due in part to RL possessing a significantly different optimization paradigm than SL, especially with regards to the notion of replay data, which is continually generated via inference in RL. In this paper, we introduce RL-DARTS, one of the first applications of end-to-end DARTS in RL to search for convolutional cells, applied to the challenging, infinitely procedurally generated Procgen benchmark. We demonstrate that the benefits of DARTS become amplified when applied to RL, namely search efficiency in terms of time and compute, as well as simplicity in integration with complex preexisting RL code via simply replacing the image encoder with a DARTS supernet, compatible with both off-policy and on-policy RL algorithms. At the same time, however, we provide one of the first extensive studies of DARTS outside of the standard fixed dataset setting in SL via RL-DARTS. We show that throughout training, the supernet gradually learns better cells, leading to alternative architectures that can be highly competitive against manually designed policies, and we also verify previous design choices for RL policies.
Reject
The paper studies network architecture search in the context of reinforcement learning. In particular it applies the DARTS method to the Procgen RL benchmark, and conducts extensive experimental evaluations. It identifies a number of issues that could potentially prevent DARTS from working well in the RL setting (such as nonstationarity and high variance), but in the end shows good performance without needing to modify DARTS substantially. The reviewers agreed that a key strength of the paper is in its experiments. But they also identified a weakness in novelty: if a paper's main contribution is to combine two previously well-explored ideas (in this case, RL and DARTS) then there is a high bar for the quality of exposition and positioning, and the reviewers did not feel that this bar was met. (Though the authors' updates during the rebuttal period did help substantially with clarity and relationship to other methods -- thank you for these!) Recommended decision: while the paper makes a worthwhile contribution, it does not in its current form rise to the level of novelty and general interest that is needed for publication in ICLR.
train
[ "cYhsqrJBq-P", "oMH8mWTZCn", "XarNtbtT0b-", "C3Txb3WloUJ", "D_fe9N5rLZ-", "vGzWytDh1G", "1cOIOOh9sWS", "TJT1cDFkpS8", "8c1sn2WQ_V", "TlEok5ayOKR", "amrPuzHdiLB", "VtKsWzJY2de", "46aN9YgvoB6", "zx5ljceuSTq", "FKuMkn7Seow", "IbYddzbY-2X" ]
[ "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank Reviewer wMrv for their positive recommendation and suggestions for improvement. We've incorporated Points 2+3 into our draft in Appendix A with the RL-DARTS algorithm procedure as well as new positive experiments on DM-Control with the SAC algorithm. We also hope that we've clarified our novel contribut...
[ -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 6 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 5 ]
[ "IbYddzbY-2X", "FKuMkn7Seow", "zx5ljceuSTq", "D_fe9N5rLZ-", "iclr_2022_EFgzhSJYIj6", "FKuMkn7Seow", "D_fe9N5rLZ-", "1cOIOOh9sWS", "iclr_2022_EFgzhSJYIj6", "IbYddzbY-2X", "vGzWytDh1G", "zx5ljceuSTq", "8c1sn2WQ_V", "iclr_2022_EFgzhSJYIj6", "iclr_2022_EFgzhSJYIj6", "iclr_2022_EFgzhSJYIj6"...
iclr_2022_IbyMcLKUCqT
Theoretical Analysis of Consistency Regularization with Limited Augmented Data
Data augmentation is popular in the training of large neural networks; currently, however, there is no clear theoretical comparison between different algorithmic choices on how to use augmented data. In this paper, we take a small step in this direction; we present a simple new statistical framework to analyze data augmentation - specifically, one that captures what it means for one input sample to be an augmentation of another, and also the richness of the augmented set. We use this to interpret consistency regularization as a way to reduce function class complexity, and characterize its generalization performance. Specializing this analysis for linear regression shows that consistency regularization has strictly better sample efficiency as compared to empirical risk minimization on the augmented set. In addition, we also provide generalization bounds under consistency regularization for logistic regression and two-layer neural networks. We perform experiments that make a clean and apples-to-apples comparison (i.e. with no extra modeling or data tweaks) between ERM and consistency regularization using CIFAR-100 and WideResNet; these demonstrate the superior efficacy of consistency regularization.
Reject
This paper shows how constraining the representation to be invariant to augmentation shrinks the hypothesis space to improve generalization more than just introducing additional samples through augmentation. I agree with the reviewers that this is a novel, intuitive, and interesting finding. However, there were many technical and clarity issues with the original submission. These were partially addressed by the authors in the rebuttal. The reviewers appreciated the authors' efforts and commitment in the rebuttal, but my conclusion from our discussion is that this paper requires another round of revisions. I hope the authors will follow the reviewers' comments, improve the paper, and re-submit.
train
[ "tT7wwZ9-Hqz", "hCDgB867Otj", "0z7SwSWluCR", "QMAKTSOeNf", "C2usbbKm7wY", "zOxUuXduUp4", "JGTvRERMbX", "P44CwIodrWR", "bVva7gQdi9a", "c2SMyY2ECVK", "YVcypdQ4qg6", "PvrRzfrSV93", "tq6SZgniLPG", "qWZqHE2YGet", "yUp4uacDEhI", "rKC933plpYy", "kxRcgTIl-fa" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Data augmentation is a common technique to improve generalization, especially when data is scarce. This paper introduces a theoretical framework for analyzing the effectiveness of consistency regularization when data augmentation is employed. In the limit, consistency regularization is akin to solving a constraine...
[ 5, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2022_IbyMcLKUCqT", "iclr_2022_IbyMcLKUCqT", "QMAKTSOeNf", "C2usbbKm7wY", "zOxUuXduUp4", "c2SMyY2ECVK", "hCDgB867Otj", "rKC933plpYy", "iclr_2022_IbyMcLKUCqT", "tq6SZgniLPG", "tT7wwZ9-Hqz", "tT7wwZ9-Hqz", "YVcypdQ4qg6", "kxRcgTIl-fa", "kxRcgTIl-fa", "iclr_2022_IbyMcLKUCqT", "iclr...
iclr_2022_C5Q04gnc4f
An object-centric sensitivity analysis of deep learning based instance segmentation
In this study we establish a comprehensive baseline regarding the object-centric robustness of deep learning models for instance segmentation. Our approach is motivated by the work of Geirhos et al. (2019) on texture bias in CNNs. However, we do not compare against human performance but instead incorporate ideas from object-centric representation learning. In addition, we analyze and control the effect of strong stylization that can lead to disappearing objects. The result is a stylized and object-centric version of MS COCO on which we perform an extensive sensitivity analysis regarding visual feature corruptions. We evaluate a broad range of frameworks including Cascade and Mask R-CNN, Swin Transformer, YOLACT(++), DETR, SOTR and SOLOv2. We find that framework choice, data augmentation and dynamic architectures improve robustness, whereas supervised and self-supervised pre-training surprisingly does not. In summary, we evaluate 63 models on 61 versions of COCO for a total of 3843 evaluations.
Reject
This paper ultimately received divergent and borderline reviews, with two positive (6) and two negative (3) ratings. After our own thorough reading as ACs, we have decided to reject this work at this time, even though the submission has a lot of potential, including intensive analyses of instance segmentation frameworks and architectures. We first would like to thank the authors for their comprehensive responses and additional empirical results, which should be extremely helpful in making this submission stronger. Here are some of our suggested points for improvement: (i) The novelty, significance, and practical implications of this work (compared to previous analysis work) may need to be presented in a more persuasive way. (ii) The nuances of the stylization transformation could be better explained relative to other types of perturbations or transformations. (iii) Empirical fairness could be better justified. (iv) Since the paper is written in a highly condensed way, some reduction may improve readability. (v) Finally, given that this paper focuses on an empirical study of instance segmentation, it may be better appreciated at a computer vision venue.
test
[ "omS01VHOl8", "sSzGXIQcHJN", "Lcg0ptna8LN", "18BE3Z16Nx1", "QciVd8XFdTp", "5Erevp2izP", "z-uTXsyeNG_", "FpHRaNQRyrb", "XL19wApZxKn", "3g0k0EclVYi", "iHR0P-aP3uD", "c-bDsdDYsp", "R9ist2EwoJ3", "W8Fww9t46vZ", "t3j-D5xFmF", "bCbLP5_PCwo", "q7vvrBE8fnn", "VgpiwrWjiET", "MpSb2rH8M7L",...
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", ...
[ " Thank you for the response. \n\nPlease note that we have a valid comparison in the paper regarding the impact of data augmentation for Mask R-CNN. We find that LSJ data augmentation has indeed a positive but in comparison, only minor positive effect on out-of-distribution robustness. These finding are in line wit...
[ -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 3 ]
[ -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 5 ]
[ "sSzGXIQcHJN", "-zZVZZoplY2", "z-uTXsyeNG_", "QciVd8XFdTp", "XL19wApZxKn", "iclr_2022_C5Q04gnc4f", "FpHRaNQRyrb", "c-bDsdDYsp", "bCbLP5_PCwo", "2lWahrSb594", "gYmAAaOwVeb", "PjLFdoRngDf", "iclr_2022_C5Q04gnc4f", "bCbLP5_PCwo", "bCbLP5_PCwo", "q7vvrBE8fnn", "5Erevp2izP", "2lWahrSb59...
iclr_2022_MUpxS9vDbZr
Why Should I Trust You, Bellman? Evaluating the Bellman Objective with Off-Policy Data
In this work, we analyze the effectiveness of the Bellman equation as a proxy objective for value prediction accuracy in off-policy evaluation. While the Bellman equation is uniquely solved by the true value function over all state-action pairs, we show that in the finite data regime, the Bellman equation can be satisfied exactly by infinitely many suboptimal solutions. This eliminates any guarantees relating Bellman error to the accuracy of the value function. We find this observation extends to practical settings; when computed over an off-policy dataset, the Bellman error bears little relationship to the accuracy of the value function. Consequently, we show that the Bellman error is a poor metric for comparing value functions, and therefore, an ineffective objective for off-policy evaluation. Finally, we discuss differences between Bellman error and the non-stationary objective used by iterative methods and deep reinforcement learning, and highlight how the effectiveness of this objective relies on generalization during training.
Reject
This paper studies whether the Bellman error is a good metric to reflect the quality of value function estimation, focusing on finite-sample off-policy data sets. Both theoretical analyses and empirical experiments have been provided, showing that the Bellman error is often not the right metric to consider. However, while I appreciate the authors' theoretical attempts, the current theoretical contributions are not deep/significant enough. As the reviewers mentioned, the failure of the direct use of BRM is not surprising given the insufficiency of data (namely, no algorithm can make predictions on completely unseen regions unless further modeling structure is present). The authors might want to further strengthen their theory along this important direction.
val
[ "76vlMqN9xSh", "Y7GmstBYCXU", "v9IWD8gs9j", "fMFm8w8hVqn", "3KdLJ-C7fUh", "qmhFpyULXbq", "7428Pa40a14", "VP2hFwH36_I", "qML2-Zv3TB", "3a1o4yse4ZP", "SYDAvMxJMOZ" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you again for the reviewers & AC for their insightful comments & time. Since two reviewers have yet to engage in discussion, we would like to highlight how we modified the paper to address their feedback (as well as the feedback of the other two reviewers).\n\n - Added discussion to related work on minimizi...
[ -1, 8, -1, -1, -1, -1, -1, -1, 3, 6, 6 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, 5, 3, 4 ]
[ "iclr_2022_MUpxS9vDbZr", "iclr_2022_MUpxS9vDbZr", "fMFm8w8hVqn", "qmhFpyULXbq", "SYDAvMxJMOZ", "Y7GmstBYCXU", "3a1o4yse4ZP", "qML2-Zv3TB", "iclr_2022_MUpxS9vDbZr", "iclr_2022_MUpxS9vDbZr", "iclr_2022_MUpxS9vDbZr" ]
iclr_2022_IxCAF8IMatf
A Unified Knowledge Distillation Framework for Deep Directed Graphical Models
Knowledge distillation (KD) is a technique that transfers the knowledge from a large teacher network to a small student network. It has been widely applied to many different tasks, such as model compression and federated learning. However, the existing KD methods fail to generalize to general \textit{deep directed graphical models (DGMs)} with arbitrary layers of random variables. By \textit{deep} DGMs, we refer to DGMs whose conditional distributions are parameterized by deep neural networks. In this work, we propose a novel unified knowledge distillation framework for deep DGMs on various applications. Specifically, we leverage the reparameterization trick to hide the intermediate latent variables, resulting in a compact DGM. Then we develop a surrogate distillation loss to reduce error accumulation through multiple layers of random variables. Moreover, we present the connections between our method and some existing knowledge distillation approaches. The proposed framework is evaluated on three applications: deep generative models compression, discriminative deep DGMs compression, and VAE continual learning. The results show that our distillation method outperforms the baselines in data-free compression of deep generative models, including variational autoencoder (VAE), variational recurrent neural networks (VRNN), and Helmholtz Machine (HM). Moreover, our method achieves good performance for discriminative deep DGMs compression. Finally, we also demonstrate that it significantly improves the continual learning performance of VAE.
Reject
The paper proposes a framework for distilling deep directed graphical models where the teacher and student models have the same number of latent variables z. The key idea is to reparameterize both models in terms of standardized random variables epsilon with fixed distributions and train the student to match the conditional distributions of the observed variables/targets given the values of the standardized RVs epsilon. The approach aims to avoid error compounding that affects the local distillation approach, where the student is trained to match conditional distributions of the teacher model (without the above reparameterization). To deal with discrete latent variables and vanishing gradients, the authors augment the target matching loss with the latent distillation loss that matches the local distribution for each z_i given the standardized variables epsilon it depends on. Positives -The paper tackles an important problem. -The idea of using reparameterization for distillation in this way makes a lot of sense for continuous latent variables and could be impactful. -The experiments provide some evidence in support of the idea. Negatives -There are considerable issues with the clarity of writing: for example, it is really not clear how (and why) the method is supposed to work for discrete latent variables. The explanation provided by the authors in their response to the reviewers was helpful but still not clear enough. -The fact that the teacher and student models need to have the same number of latent variables (and perhaps even the same structure) is a big limitation of the method given the claim of its generality, and thus needs to be clearly acknowledged and discussed. For example, the method cannot be used to train a student model with fewer latent variables than the teacher, which seems like a very common use case. -The experimental evaluation is extensive but insufficient, in large part due to the evaluation metrics. Given that VAEs are trained by maximizing the ELBO (and distilled by minimizing a sum of KLs), it makes sense to also evaluate them based on the ELBO rather than solely on the FID, as is done in the paper. The VRNN experiment would be much more informative if it included a quantitative evaluation (e.g. based on ELBO). In summary, the paper has considerable potential but needs to be substantially improved before being published.
val
[ "urNm_lX9j0n", "X8Pfk0E2bgL", "9-4QBqhgg9P", "AfPlSo6eLrr", "59tvKza5pGB", "tiHFvRmuLio", "xuh7YZZqQMx", "p1fFDih6VBz", "-r9-A_xiNSu", "-o70BmJN8Y", "WZcsSAP75c", "pmnS2Z1cTUN", "N3_TOV_e-Kd", "qWpKlRTJ0v1", "7qyarBYZ1Ga", "7Ju5zzYtva8", "COQc5_5tHQQ", "Gv0cEnOxsMP", "VDzfVvMeJgG...
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " **Q**: On experiments on a more complex DGM\n\n**A**: Our method can deal with more complex DGMs than VAE. Fig. 6 (c) in the appendix shows an example of a complicated DGM, called variational recurrent neural networks (VRNN), that contains many latent variables and target variables. We have conducted experiments ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 3 ]
[ "AfPlSo6eLrr", "9-4QBqhgg9P", "N3_TOV_e-Kd", "59tvKza5pGB", "N3_TOV_e-Kd", "WZcsSAP75c", "VDzfVvMeJgG", "Gv0cEnOxsMP", "7qyarBYZ1Ga", "tiHFvRmuLio", "xuh7YZZqQMx", "7Ju5zzYtva8", "qWpKlRTJ0v1", "1AjWjKZ2-S5", "FtndfHh3Nk", "p1fFDih6VBz", "iclr_2022_IxCAF8IMatf", "iclr_2022_IxCAF8IM...
iclr_2022_j30wC0JM39Q
Why do embedding spaces look as they do?
The power of embedding representations is a curious phenomenon. For embeddings to work effectively as feature representations, there must exist substantial latent structure inherent in the domain to be encoded. Language vocabularies and Wikipedia topics are human-generated structures that reflect how people organize their world, and what they find important. The structure of the resulting embedding spaces reflects the human evolution of language formation and the cultural processes shaping our world. This paper studies what the observed structure of embeddings can tell us about the natural processes that generate new knowledge or concepts. We demonstrate that word and graph embeddings trained on standard datasets using several popular algorithms consistently share two distinct properties: (1) a decreasing neighbor frequency concentration with rank, and (2) specific clustering velocities and power-law based community structures. We then assess a variety of generative models of embedding spaces by these criteria, and conclude that incremental insertion processes based on the Barabási-Albert network generation process best model the observed phenomenon on language and network data.
Reject
The authors of this work introduced new metrics for node embeddings that can measure the evolution of the embeddings, compared them with existing graph embedding approaches, and experimented on real datasets. All reviewers agreed that the work addresses an interesting problem and that the proposed measures are novel, but there are too many flaws in the initial version of the paper, and despite the thorough responses of the authors, it is believed that there are still too many open questions for this paper to be accepted at this year's ICLR.
test
[ "5ioBkI0Z0M9", "5Is2m5SWvYq", "o1ZCbNu1S_Q", "00SLqTM1WIY", "MCdB5OC7O4", "15fY6VP7wrp", "xJ4AQyrsma4", "XnV1tmVg9BY", "h0SdsrbNSH4", "citGcoJ7Cpm" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " -----Part Two-----\n-----\n\n\n-----\n\n**Weakness 4**: \"... lack of empirical validation for the proposed measure.\"\n\n**Response**: \n- For the measures we borrow from other fields (such as spatial autocorrelations), empirical validity is a long-settled issue.\n\n- The validity of our newly proposed measures ...
[ -1, -1, -1, -1, -1, -1, 5, 5, 5, 3 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 2, 4 ]
[ "5Is2m5SWvYq", "citGcoJ7Cpm", "h0SdsrbNSH4", "XnV1tmVg9BY", "15fY6VP7wrp", "xJ4AQyrsma4", "iclr_2022_j30wC0JM39Q", "iclr_2022_j30wC0JM39Q", "iclr_2022_j30wC0JM39Q", "iclr_2022_j30wC0JM39Q" ]
iclr_2022_7yuU9VeIpde
Memory-Constrained Policy Optimization
We introduce a new constrained optimization method for policy gradient reinforcement learning, which uses two trust regions to regulate each policy update. In addition to using the proximity of a single old policy as the first trust region, as done by prior works, we propose to form a second trust region through the construction of another virtual policy that represents a wide range of past policies. We then constrain the new policy to stay close to the virtual policy, which is beneficial when the old policy performs badly. More importantly, we propose a mechanism to automatically build the virtual policy from a memory buffer of past policies, providing a new capability for dynamically selecting appropriate trust regions during the optimization process. Our proposed method, dubbed Memory-Constrained Policy Optimization (MCPO), is examined on a diverse suite of environments including robotic locomotion control, navigation with sparse rewards and Atari games, consistently demonstrating competitive performance against recent on-policy constrained policy gradient methods.
Reject
The paper proposes a two-trust-region constrained optimization method for policy gradient RL, where the second trust region is based on a virtual policy built from a memory buffer, using an attention mechanism to combine prior policies. The reviewers agree that the paper is well written, the idea is novel, and the method is extensively evaluated. The authors are commended for running the additional baselines during the rebuttal period. However, the paper still has some shortcomings; specifically, the results remain somewhat inconclusive even after the rebuttal. While the method is not expected to win across the board, it is important to provide an analysis of its limitations. When is the algorithm appropriate to use, and when is it not? To make the paper stronger, the next version should: - move the theory (Appendix C) into the main text; - provide an analysis of the algorithm and its limitations.
train
[ "DTykKIlP1bH", "flf4d4pXw38", "e2zgVsYzYGb", "-qWkHOZD00z", "9g41OZCOxwe", "LFRZOsESqz", "6JfbRwxzWxr", "TQt2IGAAREd", "tZ7D0eVa7BF" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " **\"... two important related works ...\"**\n\nThank you for suggesting two relevant related works. We have mentioned them in this revision's Sec. 5 and will add more discussion (if more space is given) as below.\n\n[1] augments PPO-clip with adaptive clip range. We instead advocate using 2 trust regions and appl...
[ -1, -1, -1, -1, -1, 5, 8, 5, 3 ]
[ -1, -1, -1, -1, -1, 5, 4, 4, 3 ]
[ "LFRZOsESqz", "6JfbRwxzWxr", "TQt2IGAAREd", "tZ7D0eVa7BF", "iclr_2022_7yuU9VeIpde", "iclr_2022_7yuU9VeIpde", "iclr_2022_7yuU9VeIpde", "iclr_2022_7yuU9VeIpde", "iclr_2022_7yuU9VeIpde" ]
iclr_2022_tzO3RXxzuM
Stability based Generalization Bounds for Exponential Family Langevin Dynamics
We study the generalization of noisy stochastic mini-batch based iterative algorithms based on the notion of stability. Recent years have seen key advances in data-dependent generalization bounds for noisy iterative learning algorithms such as stochastic gradient Langevin dynamics (SGLD) based on (Mou et al., 2018; Li et al., 2020) and related approaches (Negrea et al., 2019; Haghifam et al., 2020). In this paper, we unify and substantially generalize stability based generalization bounds and make three technical advances. First, we bound the generalization error of general noisy stochastic iterative algorithms (not necessarily gradient descent) in terms of expected stability, which in turn can be bounded by the expected Le Cam Style Divergence (LSD). Such bounds have a $O(1/n)$ sample dependence unlike many existing bounds with $O(1/\sqrt{n})$ dependence. Second, we introduce Exponential Family Langevin Dynamics (EFLD) which is a substantial generalization of SGLD and which allows exponential family noise to be used with gradient descent. We establish data-dependent expected stability based generalization bounds for general EFLD. Third, we consider an important new special case of EFLD: Noisy Sign-SGD, which extends Sign-SGD by using Bernoulli noise over $\{-1,+1\}$, and we establish optimization guarantees for the algorithm. Further, we present empirical results on benchmark datasets to illustrate that our bounds are non-vacuous and quantitatively much sharper than existing bounds.
Reject
This paper focuses on generalization bounds for exponential family Langevin dynamics, which extends related recent work for stochastic iterative algorithms such as SGLD in several ways. The authors derive expected stability bounds for a more general class of noisy stochastic iterative algorithms, leading to an exponential family variation of Langevin dynamics and a noisy version of the sign-SGD algorithm. The contributions are technical and quite positively received by one reviewer, while the others were not convinced to change their opinions during the author response, as there were concerns about the limitations of the theoretical contributions and the extent to which these contributions have implications for achieving state-of-the-art performance. While I find it valid that the scope of the paper focuses on generalization bounds and provides improvements over the existing literature, rather than on empirical benchmarks or on optimization-related aspects, the overall borderline impression of the reviewers suggests that a refined version of the paper that further clarifies the contributions and makes clear its impacts as well as limitations may make for a stronger and more impactful paper.
train
[ "agoKumG7BuZ", "gK3oNYA_oVK", "JzBTjMlOlQJ", "OqQeSDxlnav", "zPQ1DavBQu0", "XfSWOyoWzj", "4u5IOVJ4ym", "XUoWmOpx8eM", "lqJQP1czbC1", "IWd6p2XDcFj", "MWI834ppxky", "rMjey4IiCtk", "CXZCJlJKm0y" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for their answer. I am still unconfident with my review as this paper is quite technical and I am not familiar with this line of work. However, I would like to keep a grade marginally below the acceptance threshold as, from basic knowledge in theoretical machine learning and a reasonable effor...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 5, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 2 ]
[ "OqQeSDxlnav", "lqJQP1czbC1", "rMjey4IiCtk", "zPQ1DavBQu0", "CXZCJlJKm0y", "MWI834ppxky", "MWI834ppxky", "iclr_2022_tzO3RXxzuM", "IWd6p2XDcFj", "iclr_2022_tzO3RXxzuM", "iclr_2022_tzO3RXxzuM", "iclr_2022_tzO3RXxzuM", "iclr_2022_tzO3RXxzuM" ]
iclr_2022_3Wybo29gGlx
Should we Replace CNNs with Transformers for Medical Images?
Convolutional Neural Networks (CNNs) have reigned for a decade as the de facto approach to automated medical image diagnosis, pushing the state-of-the-art in classification, detection and segmentation tasks. Recently, vision transformers (ViTs) have appeared as a competitive alternative to CNNs, yielding impressive levels of performance in the natural image domain, while possessing several interesting properties that could prove beneficial for medical imaging tasks. In this work, we explore whether it is feasible to switch to transformer-based models in the medical imaging domain as well, or if we should keep working with CNNs - can we trivially replace CNNs with transformers? We consider this question in a series of experiments on several standard medical image benchmark datasets and tasks. Our findings show that, while CNNs perform better if trained from scratch, off-the-shelf vision transformers are on par with CNNs when pretrained on ImageNet in both classification and segmentation tasks. Further, ViTs often outperform their CNN counterparts when pretrained using self-supervision.
Reject
I recommend a rejection of this paper. My overall impression is that this is genuinely an interesting topic and this is a good basis for a solid paper; however, as pointed out by several reviewers, there are multiple unanswered questions due to the very large scope of this work. It might be that the format of a conference paper is not the most appropriate for this work. The authors should consider instead submitting to some of the leading journals on medical image analysis, e.g. IEEE Transactions on Medical Imaging or Medical Image Analysis. I expect this work, as it is mostly empirical, would be appreciated there and could in fact make a much bigger impact if published there.
train
[ "eUm_WmKm9zc", "yolPwi4JzRP", "glNJ1XV9x1T", "KlUIYgkaLWJ", "vQhyw7bcxBB", "Kj4ZVJyfOeq", "estSxXqCRrB", "RvebQjagiZC", "tsS4ETOz8lG", "TA3E-6vUgsJ", "Jmt6sehFhum", "D8QKI807SVO", "bPbKPEkMx1", "D1EZQCF4rT1", "kmSxGaW6mJ", "jCgIRPJ7olz" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the reviewers for their efforts put in the reply. I think the reviewers have addressed my main concerns well. I am increasing my score.", "This paper primarily analyses if we should replace CNNs with transformers for medical imaging. Experiments are conducted on a number of medical image benchmark datas...
[ -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 5, 3 ]
[ -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "vQhyw7bcxBB", "iclr_2022_3Wybo29gGlx", "iclr_2022_3Wybo29gGlx", "jCgIRPJ7olz", "yolPwi4JzRP", "yolPwi4JzRP", "D1EZQCF4rT1", "D1EZQCF4rT1", "D1EZQCF4rT1", "D1EZQCF4rT1", "kmSxGaW6mJ", "bPbKPEkMx1", "iclr_2022_3Wybo29gGlx", "iclr_2022_3Wybo29gGlx", "iclr_2022_3Wybo29gGlx", "iclr_2022_3W...
iclr_2022_OMxLn4t03FG
Training Multi-Layer Over-Parametrized Neural Network in Subquadratic Time
In recent years of development in theoretical machine learning, over-parametrization has been shown to be a powerful tool for resolving many fundamental problems, such as the convergence analysis of deep neural networks. While many works have focused on designing algorithms for over-parametrized networks with one hidden layer, the multiple-hidden-layer setting has received much less attention due to the complexity of the analysis, and even fewer algorithms have been proposed. In this work, we initiate the study of the performance of second-order algorithms on multiple-hidden-layer over-parametrized neural networks. We propose a novel algorithm to train such networks in time subquadratic in the width of the neural network. Our algorithm combines the Gram-Gauss-Newton method, tensor-based sketching techniques, and preconditioning.
Reject
This paper studies the performance of second-order algorithms on training multi-layer over-parameterized neural networks. The authors propose an algorithm based on the Gram-Gauss-Newton method, tensor-based sketching techniques, and preconditioning to train such a network, whose runtime is subquadratic in the width of the neural network. While some reviewers provide weak support, none of them strongly supports the paper, even after the authors' response. I think one of the reasons is the lack of empirical experiments. Since the main claim of this paper is an efficient second-order algorithm, some experiments are necessary to back up this claim. Unfortunately, the authors did not try to add such an experiment during the rebuttal. I would suggest the authors add such experiments in the revision.
train
[ "msMQmPPPWBg", "YeaIVbsFbfX", "cJ0d8ia5zh4", "pQvhEGVe02_", "A2Me-JcyzJu", "Kjl9-GO-ZE7", "7pRxNmsleP6", "rH9iV3TobUe" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "1. This paper proved that the second-order method can minimize the training loss in linea rate on multi-layer over-parameterized neural networks. This analysis relies on the connection between neural tangent kernel and over-parameterized neural networks.\n2. This paper also reduced the per-iteration cost of second...
[ 6, -1, -1, -1, -1, 5, 6, 5 ]
[ 4, -1, -1, -1, -1, 3, 4, 4 ]
[ "iclr_2022_OMxLn4t03FG", "msMQmPPPWBg", "rH9iV3TobUe", "7pRxNmsleP6", "Kjl9-GO-ZE7", "iclr_2022_OMxLn4t03FG", "iclr_2022_OMxLn4t03FG", "iclr_2022_OMxLn4t03FG" ]
iclr_2022_VNXYZjGcsty
Chaining Data - A Novel Paradigm in Artificial Intelligence Exemplified with NMF based Clustering
In the era of artificial intelligence, high-quality inference from the fusion of data is accelerating, and we address the linking challenge associated with higher-order features. We have fundamentally linked together the tables of databases for clustering algorithms and expect this paradigm, and those related to it, to produce many new insights. We propose linked-view clustering, an extension of multi-view clustering that adds complementary and consensus information across linked views of each data point. While there are many methods, we focus on non-negative matrix factorization combined with the fusion of linking data in a manner that corresponds to extracting knowledge from the multiple tables of a relational database. It is commonplace to identify hashtag communities on social media by word usage; however, there exist troves of data that are not included but could be. We can incorporate locations by hashtag to improve community detection (this is multiNMF, or multi-view clustering), but we extend this method beyond the first link. To our knowledge, no general artificial intelligence method has previously been proposed that can incorporate any table that can be chained backwards. We call this linked-view NMF, or chained-view clustering, and give algorithms to perform multiplicative updates as well as a general formulation that can be solved using automatic differentiation tools such as JAX. We demonstrate how the equations can be interpreted on synthetic data, show how information flows through the links, and, as a proof of concept on real data, apply the method with word vectors on an authorship clustering dataset.
Reject
The paper proposes a new approach for linked-view clustering based on chained non-negative matrix factorization. Reviewers highlighted that the paper proposes a novel and interesting approach to an important problem. However, reviewers also raised significant concerns regarding the clarity of the presentation (motivation, general approach, contributions, scope) as well as the experimental evaluation. Reviewers also questioned the justification for calling the approach a novel paradigm. After the author response and discussion, all reviewers and the AC agree that the paper is not yet ready for publication due to the aforementioned issues.
train
[ "Z9CgHNmvvu", "RE6cOXp2W0", "i6vW_APwXHc", "VmtFbTUox_4", "08TPtvlw952", "XG4R1jCMAYd", "M8kSwJf9mZ", "owtrxhJ9ZrI" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper presents a new technique, grounded in non-negative matrix factorization (NMF), for unsupervised, representation learning of linked data. The paper presents a technique that can deal with data scenarios whereby a certain phenomenon of interest has multiple views to characterize that behavior (i.e. traditi...
[ 5, -1, -1, -1, -1, -1, 5, 3 ]
[ 4, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2022_VNXYZjGcsty", "VmtFbTUox_4", "08TPtvlw952", "Z9CgHNmvvu", "owtrxhJ9ZrI", "M8kSwJf9mZ", "iclr_2022_VNXYZjGcsty", "iclr_2022_VNXYZjGcsty" ]
iclr_2022_7Z7u2z1Ornl
Pruning Edges and Gradients to Learn Hypergraphs from Larger Sets
This paper aims for set-to-hypergraph prediction, where the goal is to infer the set of relations for a given set of entities. This is a common abstraction for applications in particle physics, biological systems and combinatorial optimization. We address two common scaling problems encountered in set-to-hypergraph tasks that limit the size of the input set: the exponentially growing number of hyperedges and the run-time complexity, both leading to higher memory requirements. We make three contributions. First, we propose to predict and supervise the \emph{positive} edges only, which changes the asymptotic memory scaling from exponential to linear. Second, we introduce a training method that encourages iterative refinement of the predicted hypergraph, which allows us to skip iterations in the backward pass for improved efficiency and constant memory usage. Third, we combine both contributions in a single set-to-hypergraph model that enables us to address problems with larger input set sizes. We provide ablations for our main technical contributions and show that our model outperforms prior state-of-the-art, especially for larger sets.
Reject
This paper proposes techniques for improving the scalability of set-to-hypergraph models. The main issue with the submission is that all reviewers found the clarity of the paper to be problematic, including the proofs, the experimental conditions, and many other parts. The authors responded, but some reviewers explicitly stated that their questions had only partially been answered, and some reviewers did not respond to the authors. Unfortunately, given the number of clarity issues raised, it makes more sense to re-submit this paper after re-writing it based on all the suggestions from the reviewers.
train
[ "bys4n9PJzP", "_duFUAF2lH", "9UsSblgPd45", "j9IOl_fLqTT", "d9StyTzQYSg", "P994aXgMgG" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " In our second revision to the paper, we addressed the remaining points by the reviewers, as we summarize here. We hope the reviewers will reconsider the scores in light of the improvements made in this and the previous revision. Please let us know if there is anything else that could be improved.\n\n## On the num...
[ -1, -1, 5, 5, 3, 5 ]
[ -1, -1, 3, 3, 2, 3 ]
[ "iclr_2022_7Z7u2z1Ornl", "iclr_2022_7Z7u2z1Ornl", "iclr_2022_7Z7u2z1Ornl", "iclr_2022_7Z7u2z1Ornl", "iclr_2022_7Z7u2z1Ornl", "iclr_2022_7Z7u2z1Ornl" ]
iclr_2022_EJKLVMB_9T
SplitRegex: Faster Regex Synthesis via Neural Example Splitting
Due to the practical importance of regular expressions (regexes, for short), there has been a lot of research on automatically generating regexes from positive and negative string examples. A basic idea of learning a regex is search-and-repair: search for a correct regex and repair it if incorrect. The problem is known to be PSPACE-complete, and the main issue is to obtain a regex quickly, within a time limit. While classical regex learning methods do not perform well, recent approaches using deep neural networks show better performance with respect to the accuracy of the resulting regexes. However, all these approaches, including SOTA models, are often extremely slow because of the slow searching mechanism, and do not produce the desired regexes within a given time limit. We tackle the problem of learning regexes faster from positive and negative strings by relying on a novel approach called `neural example splitting'. Our approach essentially splits up example strings into multiple parts using a neural network trained to group similar substrings from positive strings. This helps to learn a regex faster and, thus, more accurately, since we now learn from several short-length strings. We propose an effective regex synthesis framework called `SplitRegex' that synthesizes subregexes from `split' positive substrings and produces the final regex by concatenating the synthesized subregexes. For negative samples, we exploit pre-generated subregexes during the subregex synthesis process and perform matching against the negative strings. The final regex then becomes consistent with all negative strings. SplitRegex is a divide-and-conquer framework for learning target regexes; split (=divide) the positive strings and infer partial regexes for the multiple parts, which is much more accurate than inferring from whole strings, and concatenate (=conquer) the inferred regexes while satisfying the negative strings. We empirically demonstrate that the proposed SplitRegex framework substantially improves over previous regex synthesis approaches on four benchmark datasets.
Reject
There were genuine differences of opinion here. I saw reviews of 8,6,5,5. In these cases, I do try to check if the 8 has a really compelling argument and err on the side of accepting, but here I think both the positive and negative reviews have fair points, so I am inclined to recommend rejection here. I think the good news is that a lot of the negative stuff was around scoping/writing/related-work, and so it should be (relatively) easy to shore up this submission into something that will get better reviews in the next conference cycle.
train
[ "TQVVMdENIPs", "4I9K93ULBUT", "gogpbgw0sTk", "FdFkLoIlnqb", "-6MStHK2P-p", "_nlrtXpz52", "M4Xmencz-h7", "MSvzwpTgXch", "j-7EgPM6Mny", "ZdTXyiB4vZ_", "AIzM4M5_49v", "rmSFxaoiBFb", "AzrtGh04nj", "Lzb-6qCU_9e", "runU1AR1tL" ]
[ "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " (1) Length of regexes and number of available constructions are two different dimensions to evaluate complexity. It would be nice to somehow deal with both of them.\n-> Now we fully understand your comment. We will add experimental results on the performance of the proposed method for regexes with different compl...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 4 ]
[ "gogpbgw0sTk", "-6MStHK2P-p", "4I9K93ULBUT", "4I9K93ULBUT", "M4Xmencz-h7", "runU1AR1tL", "runU1AR1tL", "Lzb-6qCU_9e", "Lzb-6qCU_9e", "AzrtGh04nj", "rmSFxaoiBFb", "iclr_2022_EJKLVMB_9T", "iclr_2022_EJKLVMB_9T", "iclr_2022_EJKLVMB_9T", "iclr_2022_EJKLVMB_9T" ]
iclr_2022_XpmTU4k-5uf
TIME-LAPSE: Learning to say “I don't know” through spatio-temporal uncertainty scoring
Safe deployment of trained ML models requires determining when input samples go out-of-distribution (OOD) and refraining from making uncertain predictions on them. Existing approaches inspect test samples in isolation to estimate their corresponding predictive uncertainty. However, in the real world, deployed models typically see test inputs consecutively and predict labels continuously over time during inference. In this work, we propose TIME-LAPSE, a spatio-temporal framework for uncertainty scoring that examines the sequence of predictions prior to the current sample to determine its predictive uncertainty. Our key insight is that in-distribution samples will be more “similar” to each other than OOD samples, not just over the encoding latent space but also across time. Specifically, (a) our spatial uncertainty score estimates how different OOD latent-space representations are from those of an in-distribution set using metrics such as Mahalanobis distance and cosine similarity, and (b) our temporal uncertainty score determines deviations in correlations over time using representations of past inputs in a non-parametric, sliding-window based algorithm. We evaluate TIME-LAPSE on both audio and vision tasks using public datasets and further benchmark our approach on a challenging, real-world electroencephalogram (EEG) dataset for seizure detection. We achieve state-of-the-art results for OOD detection in the audio and EEG domains and observe considerable gains on semantically corrected vision benchmarks. We show that TIME-LAPSE is more driven by semantic content than other methods, i.e., it is more robust to dataset statistics. We also propose a sequential OOD detection evaluation framework to emulate real-life drift settings and show that TIME-LAPSE significantly outperforms spatial methods.
Reject
This paper has been reviewed by four experts. Their independent evaluations were all below the acceptance threshold, citing various issues ranging from a disconnect between the stated goals of the presented work and the manner in which the approach was evaluated, to doubts about the scalability of the proposed approach, to a lack of clarity regarding the actual novelty of the approach given some key missed references, to name a few items of criticism. Most reviewers were impressed with the empirical performance achieved in the conducted experiments, and one of the reviewers raised their mark in response to the authors' rebuttal. Yet, the overall evaluation places this work, as it stands now, below the threshold for ICLR acceptance. I would like to encourage the authors to continue pushing their promising endeavor and to systematically incorporate the feedback received here to improve the overall quality of this work.
val
[ "icfCXfbIqmH", "rn3QgM_KsDR", "euUy3dewilK", "-f4Q7rDiDhK", "IVTJ1Gr6i6K", "G69Bl5hQSj7", "OjUJ5o8qZEs", "CdLmNGFXwRm", "tc2EffKp-2B", "EcY7IVxVpgG", "yRyU0UPvrq-", "fkC3vWJ7c-", "74ejGf_Dc0P", "R4lvm7Xr0y", "7YD-7mvixS" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the positive comments and for updating their score. \n\nWe’d like to address their concern about our work’s technical novelty below. Though the literature on outlier detection, uncertainty estimation and calibration is vast (we have summarized them in the paper, Section 2, page 3) and t...
[ -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 3 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "rn3QgM_KsDR", "yRyU0UPvrq-", "iclr_2022_XpmTU4k-5uf", "7YD-7mvixS", "R4lvm7Xr0y", "IVTJ1Gr6i6K", "iclr_2022_XpmTU4k-5uf", "OjUJ5o8qZEs", "74ejGf_Dc0P", "tc2EffKp-2B", "euUy3dewilK", "-f4Q7rDiDhK", "iclr_2022_XpmTU4k-5uf", "iclr_2022_XpmTU4k-5uf", "iclr_2022_XpmTU4k-5uf" ]
iclr_2022_DrpKmCmPMSC
Meta-free few-shot learning via representation learning with weight averaging
Recent studies on few-shot classification using transfer learning pose challenges to the effectiveness and efficiency of episodic meta-learning algorithms. Transfer learning approaches are a natural alternative, but they are restricted to few-shot classification. Moreover, little attention has been paid to the development of probabilistic models with well-calibrated uncertainty from few-shot samples, except for some Bayesian episodic learning algorithms. To tackle the aforementioned issues, we propose a new transfer learning method to obtain accurate and reliable models for few-shot regression and classification. The resulting method does not require episodic meta-learning and is called meta-free representation learning (MFRL). MFRL first finds a low-rank representation that generalizes well on meta-test tasks. Given the learned representation, probabilistic linear models are fine-tuned with few-shot samples to obtain models with well-calibrated uncertainty. The proposed method not only achieves the highest accuracy on a wide range of few-shot learning benchmark datasets but also correctly quantifies the prediction uncertainty. In addition, weight averaging and temperature scaling are effective in improving the accuracy and reliability of few-shot learning in existing meta-learning algorithms across a wide range of learning paradigms and model architectures.
Reject
By the scores, this submission is quite borderline. This paper introduces stochastic weight averaging into a few-shot learning setting. The reviewers all agreed the work was sound; discussion after the author response focused on the theoretical justifications, the degree of novelty and potential impact, and the empirical support. The primary concerns were that the work was slightly too incremental to obviously merit publication at this stage: though the empirical results were sound, they mostly follow the existing observation that SWA tends to be beneficial for generalization in other settings, and apparently in few-shot learning as well. The positives would be that this is simple enough that it could become a general "best practice" in few-shot learning baselines, and as such communicating this is important. The other discussion focused on the theoretical justification relating SWA to low-rank solutions. While empirically it does seem that the solutions found by SWA lead to low-rank representations, this is not really adequately explored, and it's not clear enough why this should be expected to happen. I think if this relationship between SWA and low-rank representations were more clearly explored, the paper would be a strong accept. As it stands, it is quite borderline. Based on the scores (5, 5, 6), the recommendation is to reject, but it certainly could be included as well, as it has solid execution and is a clear topical fit for ICLR.
train
[ "HuAC0n5AQhz", "jziwLwj2Hh-", "SmoE3bG9WJl", "dwHzn2joNEA", "NfrPzLeQTdg", "IM5ZFi10AnN", "qKovuf-4uZp", "aSYblduQseIC", "fvqUzhi-bg", "LKH-RXmkDO0", "nW0IUFSu45k", "SfPBppg5OYRO", "UdIIDLqlv4", "ZV5zSsLS22S" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We appreciate all reviewers for the insightful comments to improve the quality of the paper.\n\nWe upload the revised paper to address reviewers’ questions and concerns. We also made point-to-point responses to each reviewer. We rewrite part of the introduction to avoid confusion. We are happy to make further res...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2022_DrpKmCmPMSC", "dwHzn2joNEA", "iclr_2022_DrpKmCmPMSC", "NfrPzLeQTdg", "IM5ZFi10AnN", "SmoE3bG9WJl", "aSYblduQseIC", "fvqUzhi-bg", "ZV5zSsLS22S", "nW0IUFSu45k", "SfPBppg5OYRO", "UdIIDLqlv4", "iclr_2022_DrpKmCmPMSC", "iclr_2022_DrpKmCmPMSC" ]
iclr_2022_a43otnDilz2
KNIFE: Kernelized-Neural Differential Entropy Estimation
Estimation of (differential) entropy and the related mutual information has been pursued with significant efforts by the machine learning community. To address shortcomings in previously proposed estimators for differential entropy, here we introduce KNIFE, a fully parameterized, differentiable kernel-based estimator of differential entropy. The flexibility of our approach also allows us to construct KNIFE-based estimators for conditional (on either discrete or continuous variables) differential entropy, as well as mutual information. We empirically validate our method on high-dimensional synthetic data and further apply it to guide the training of neural networks for real-world tasks. Our experiments on a large variety of tasks, including visual domain adaptation, textual fair classification, and textual fine-tuning demonstrate the effectiveness of KNIFE-based estimation.
Reject
The focus of the submission is the estimation of the Shannon differential entropy (DE). The authors propose a differentiable DE estimator referred to as KNIFE (Kernelized Neural diFFerential Estimator): it is a plug-in method (5) using KDE (kernel density estimation; (4)). KNIFE has parameters including the locations (a), weights (w), and covariances (A) in KDE, which are tuned according to the upper bound heuristic in (6). The approach is illustrated on toy examples and in the context of training neural networks. Estimating information-theoretic quantities is a topic of current interest in machine learning. Unfortunately, as assessed by the reviewers, 1) the submission lacks context and comparison to available entropy estimators, and 2) the estimator closely follows Schraudolph (2004); the technical novelty is quite limited. More work and a major revision are required.
train
[ "d4jRbhZv8b", "clnSMGY2_8A", "LPAAWKsqGb0", "iemgWDZ88bk", "Mn4fFLQEkkY", "3VJCP5CCMfS", "pL-WBjQOfdT", "vp9SxSfv6C2", "rdNhKS5xu", "lXVYMdWSjoc", "QqQJJ7as6r-", "KQD9OCi5mL", "1abSgAmFRzz", "9v_-6p3yCMd" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper provides a new approach to estimating differential entropy called KNIFE that is also applied to mutual information estimation. The authors define their estimator using a parametric model based on estimating a KDE. The authors provide some theoretical analyses of the estimator and multiple empirical expe...
[ 5, 6, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6 ]
[ 5, 3, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2022_a43otnDilz2", "iclr_2022_a43otnDilz2", "pL-WBjQOfdT", "iclr_2022_a43otnDilz2", "3VJCP5CCMfS", "vp9SxSfv6C2", "clnSMGY2_8A", "iemgWDZ88bk", "d4jRbhZv8b", "9v_-6p3yCMd", "1abSgAmFRzz", "iclr_2022_a43otnDilz2", "iclr_2022_a43otnDilz2", "iclr_2022_a43otnDilz2" ]
iclr_2022_rqcLsG8Kme9
rQdia: Regularizing Q-Value Distributions With Image Augmentation
rQdia (pronounced “Arcadia”) regularizes Q-value distributions with augmented images in pixel-based deep reinforcement learning. With a simple auxiliary loss that equalizes these distributions via MSE, rQdia boosts DrQ and SAC on 9/12 and 10/12 tasks, respectively, in the MuJoCo Continuous Control Suite from pixels, and Data-Efficient Rainbow on 18/26 Atari Arcade environments. Gains are measured in both sample efficiency and longer-term training. Moreover, the addition of rQdia finally propels model-free continuous control from pixels over the state encoding baseline. Additional results, namely more random seeds, are pending.
Reject
The paper proposes a simple modification to how data augmentation is done in image-based RL. This results in some improvements on benchmark tasks. The change essentially amounts to adapting data-augmentation strategies that are already understood in other fields to deep RL. However, the effect of data augmentation in simple image-based deep RL tasks is already known. As such, I think the contribution in this paper is quite incremental -- the notion that data augmentation in deep RL helps is already known, and the particular augmentation strategy proposed here is not especially novel. So while it's good in terms of producing improved results on some benchmark tasks, it doesn't seem to be of high significance to the study of reinforcement learning or machine learning more broadly. As such, I think it could be a valuable contribution to a narrower venue, or as a technical report, but is too incremental and narrow in scope for ICLR. A note to the authors (this did not impact the paper decision): due to the unfortunately lackluster quality of the reviews, I read and reviewed the paper myself as well to be able to produce a more accurate meta-review. On balance, I see the point the authors make in the response that some of the results in prior work (e.g., CURL) are unfortunately unreliable. That's not the fault of the authors; it's the fault of the prior works. I took this into account in my assessment. In this sense, I do think the comparison to prior work is sensible. On the other hand, I think the practice of reporting only very specific checkpoints (e.g., 100k and 500k), though borrowed from prior work, is not a good way to report results, as it hides the real performance of the methods.
train
[ "RdrYhmP8vW5", "pa1Air8HqeF", "7O_7dXpLigL", "8bsXHGnmsO", "Re3U7xprwO_", "lHaT3witmRr", "-VT2PzQeCbZ", "CgpIqptpASz", "SVAYv60fjEK" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We released our revision a week ago and have yet to hear back from any of the reviewers. We include a wide suite of additional analyses that corroborate the significance of the paper and we are confident that we addressed the reviewer concerns. We do believe this submission belongs in ICLR’s proceedings, and we w...
[ -1, -1, -1, -1, -1, 3, 6, 3, 6 ]
[ -1, -1, -1, -1, -1, 5, 5, 4, 3 ]
[ "iclr_2022_rqcLsG8Kme9", "CgpIqptpASz", "-VT2PzQeCbZ", "SVAYv60fjEK", "lHaT3witmRr", "iclr_2022_rqcLsG8Kme9", "iclr_2022_rqcLsG8Kme9", "iclr_2022_rqcLsG8Kme9", "iclr_2022_rqcLsG8Kme9" ]