Dataset schema (column name, type, length range / class count):

paper_id            stringlengths   19–21 characters
paper_title         stringlengths   8–170 characters
paper_abstract      stringlengths   8–5.01k characters
paper_acceptance    stringclasses   18 values
meta_review         stringlengths   29–10k characters
label               stringclasses   3 values
review_ids          list
review_writers      list
review_contents     list
review_ratings      list
review_confidences  list
review_reply_tos    list
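To make the schema concrete, here is a hypothetical record laid out as a Python dict; the field names come from the schema above, while the values shown are illustrative placeholders drawn from the first record below:

```python
# Hypothetical record illustrating the schema above; values are placeholders.
record = {
    "paper_id": "iclr_2022_rdBuE6EigGl",           # string, 19-21 chars
    "paper_title": "The Importance of ...",        # string, 8-170 chars
    "paper_abstract": "The latest advances ...",   # string, up to ~5.01k chars
    "paper_acceptance": "Reject",                  # one of 18 decision strings
    "meta_review": "This paper considers ...",     # string, up to ~10k chars
    "label": "train",                              # one of {"train", "val", "test"}
    "review_ids": ["ROFrgQl7q0g"],                 # parallel lists, one entry per post
    "review_writers": ["official_reviewer"],       # "official_reviewer" / "author" / "public"
    "review_contents": ["Thanks for the response!"],
    "review_ratings": [-1],                        # -1 where the post carries no rating
    "review_confidences": [-1],
    "review_reply_tos": ["XPnfz9pwiNf"],           # parent id, or the paper_id for top-level
}
```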
iclr_2022_rdBuE6EigGl
The Importance of the Current Input in Sequence Modeling
The latest advances in sequence modeling are mainly based on deep learning approaches. The current state of the art involves the use of variations of the standard LSTM architecture, combined with several tricks that improve the final prediction rates of the trained neural networks. However, in some cases, these adaptations might be too closely tuned to the particular problems being addressed. In this article, we show that a very simple idea, adding a direct connection between the input and the output that skips the recurrent module, leads to an increase in prediction accuracy in sequence modeling problems related to natural language processing. Experiments carried out on different problems show that adding this kind of connection to a recurrent network always improves the results, regardless of the architecture and training-specific details. When this idea is introduced into the models that lead the field, the resulting networks achieve a new state-of-the-art perplexity in language modeling problems.
Reject
This paper considers augmenting LSTM language models with a form of residual connection that adds an additional feed-forward layer before the softmax, integrating the output of the recurrent cell with the input embedding. This architectural variation is evaluated on the standard Penn Treebank and Wikitext-2 language modelling tasks and shown to lead to lower perplexities on the test sets, particularly when dynamic evaluation is used. The reviewers agree that the proposed addition is well motivated; however, they also observe that there has been substantial work in language modelling on various forms of residual and skip connections, and it is not clear how this work relates to that body of work. The authors provided some additional comparisons during the discussion; however, the reviewers feel that further evaluation and analysis is needed. There was also some additional confusion about the varying hyperparameter tuning protocols employed in the different evaluations. The authors have clarified this in their response so that it is clearer how the different results were obtained. Overall this paper presents a promising initial result, but it would benefit from more complete evaluation, analysis, and hyperparameter tuning. This could include ablation studies and analysis to shed more light on what the proposed architectural addition is contributing, how it relates to other varieties of residual connection, and its positive interaction with dynamic evaluation. It would also be useful to include a tuned model with a comparison to previously reported Wikitext-2 results.
train
[ "ROFrgQl7q0g", "YK2CsfuhOvM", "qE6H0FeFzae", "_x3ZIhZbtQK", "Nr73LJIwN58", "saZV-WjWdT", "Y_-S0FZQag", "WQuKzwMFRwu", "t1Y1xFmN3U4", "XPnfz9pwiNf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ " Thanks for the response!", "This paper proposes to add an extra residual connection between a transformed word embedding and the final output layer, which easily generalizes over different recurrent architectures. The language model experiments show that their proposed model performs better than the same recurr...
[ -1, 3, -1, 5, -1, -1, -1, -1, -1, 3 ]
[ -1, 5, -1, 5, -1, -1, -1, -1, -1, 3 ]
[ "XPnfz9pwiNf", "iclr_2022_rdBuE6EigGl", "WQuKzwMFRwu", "iclr_2022_rdBuE6EigGl", "XPnfz9pwiNf", "_x3ZIhZbtQK", "_x3ZIhZbtQK", "YK2CsfuhOvM", "YK2CsfuhOvM", "iclr_2022_rdBuE6EigGl" ]
iclr_2022_qDx6DXD3Fzt
Provably Robust Detection of Out-of-distribution Data (almost) for free
The application of machine learning in safety-critical systems requires a reliable assessment of uncertainty. However, deep neural networks are known to produce highly overconfident predictions on out-of-distribution (OOD) data. Even if trained to be non-confident on OOD data, one can still adversarially manipulate OOD data so that the classifier again assigns high confidence to the manipulated samples. In this paper we propose a novel method that combines a certifiable OOD detector with a standard classifier from first principles into an OOD-aware classifier. This way we achieve the best of two worlds: certifiably adversarially robust OOD detection, even for OOD samples close to the in-distribution, without loss in either prediction accuracy or detection performance for non-manipulated OOD data. Moreover, due to the particular construction, our classifier provably avoids the asymptotic overconfidence problem of standard neural networks.
Reject
This paper aims at detecting not only clean OOD data, but also adversarially manipulated versions of it. The authors propose a method for this goal, with no/marginal loss in clean test accuracy (say, Acc) and clean OOD detection accuracy (say, AUC), while existing methods targeting the same goal suffer from low Acc and AUC. 3 reviewers are positive and 2 reviewers are negative. Reviewers and AC think that the proposed idea of merging a certified binary classifier for in- versus out-distribution with a classifier for the in-distribution task is interesting. However, AC thinks that the experimental results are arguable, as pointed out by reviewers. For example, on CIFAR-10, the proposed method outperforms the baseline (GOOD) with respect to Acc and AUC, but often significantly underperforms it with respect to GAUC (guaranteed AUC) or AAUC (adversarial AUC). The question, then, is which metric is more important? It is arguable whether Acc is more important than GAUC or AAUC. But, at least, AC thinks that AUC and AAUC (or GAUC) are equally important, as adversarially manipulated OOD data is nothing but more OOD data made from the original clean OOD data. Hence, the superiority of the proposed method over the baseline is arguable in the experiments, and AC tends to suggest rejection. P.S. AC is also a bit skeptical about the motivation of this paper. What is the value of obtaining "guaranteed AUC"? It is not the "real/true" worst-case OOD performance, as it varies with respect to the tested clean OOD data. Namely, it is the worst-case OOD performance just on a certain "subset" of OOD data, i.e., adversarially manipulated OOD data made from certain clean OOD data. Hence, AC is curious about the value of establishing such a "partial" lower bound (rather than a "true" lower bound considering all possible OOD data). AC thinks that the problem setup studied in this paper (and some previous papers) looks interesting/reasonable at first glance, but feels somewhat artificial after a deeper look.
train
[ "7TNsUSFnHkw", "TBecWzJYyu", "Z4o8yaLiIpO", "F9m1TPv8lo", "KBuNgoPJH6", "LQayN0Y9-HG", "4yR7bfS3teK", "OzbaXmgyYOT", "eUnFwP_eTu", "6slG28fDwWX", "AYhYlHNYFVc", "GKx-lnAAmM-", "IJ0b2FBeYn", "GpvO86mOr9Z", "fMiiKghaes5", "Zk3tFPa5w3I", "uxdFE_Rrpt2", "M072JP4-VH", "gorNdft24jR", ...
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "au...
[ " My comments are based on the empirical results provided by the authors. If we look at results in the paper (e.g. Table 2), the performance of the proposed method ProoD is quite mixed across in-distribution and OOD datasets (that is what I mean by \"stable\"). For example, in Table 2, on CIFAR-10 vs. Smooth, the p...
[ -1, -1, 3, -1, 6, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6 ]
[ -1, -1, 4, -1, 4, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "TBecWzJYyu", "F9m1TPv8lo", "iclr_2022_qDx6DXD3Fzt", "Z4o8yaLiIpO", "iclr_2022_qDx6DXD3Fzt", "6slG28fDwWX", "OzbaXmgyYOT", "iclr_2022_qDx6DXD3Fzt", "GKx-lnAAmM-", "AYhYlHNYFVc", "G4myqgYlt74", "IJ0b2FBeYn", "fMiiKghaes5", "Zk3tFPa5w3I", "-ObAM-L2oM", "gorNdft24jR", "iclr_2022_qDx6DXD...
iclr_2022_HY6i9FYBeFG
S3: Supervised Self-supervised Learning under Label Noise
Despite the large progress in supervised learning with Neural Networks, there are significant challenges in obtaining high-quality, large-scale and accurately labeled datasets. In this context, in this paper we address the problem of classification in the presence of noisy labels, and more specifically both closed-set and open-set label noise, that is, when the true label of a sample may or may not belong to the set of the given labels. At the heart of our method are a sample selection mechanism that relies on the consistency between the annotated label of a sample and the distribution of the labels in its neighborhood in the feature space, a relabeling mechanism that relies on the confidence of the classifier across subsequent iterations, and a training strategy that trains the encoder with a self-consistency loss and the classifier-encoder pair with a cross-entropy loss on the selected samples alone. Without bells and whistles, such as co-training to reduce the self-confirmation bias, our method significantly surpasses previous methods on both CIFAR10/CIFAR100 with artificial noise and real-world noisy datasets such as WebVision and ANIMAL-10N.
Reject
This work describes a two-stage method for learning with noisy labels. The crux of the reviews, discussions with the authors, and post-rebuttal discussions between reviewers (and myself) was the novelty of this work. The main concern is that while this body of work presents a relatively solid method (from an empirical point of view), the underlying components are not altogether that novel, and have been used in the context of learning with noisy labels before. Fundamentally, the proposed S3 method did not feel *convincingly* better, given its relative lack of novel technical insights. I appreciate that this is a frustrating line of reasoning to receive -- after all, much of what we do in empirical ML is combinations of existing things. Ultimately, there was consensus amongst the reviewers that the work did not have sufficient insights or such outstanding empirical results as to overcome this relative lack of technical novelty. All the reviewers have engaged meaningfully in discussions and provided constructive feedback, and I hope that this will make subsequent iterations of this work better in many dimensions.
train
[ "haQMNaY9Ajk", "n6Pdp2Rrpgz", "nksTX4wLYcE", "le_UmWJGerR", "P5nk9qojMia", "B3btF0q1vXo", "oWwKFXWNLdq", "4Nsj5ZPnTo9", "7nomRzpQV9X", "Fw7jnachHWe", "oL9zZjTNNMt", "yk0g4GQJaSv", "6TtaYuG9UKi", "jzLq7FBBWQS", "jycTgVRLaJ6", "OREpEkdItNH", "Oi6fMXI086", "0fZKKc_sTGW", "5d5UeVi-nX...
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", ...
[ " C2D includes the results for webvision (table 4 in the paper).\nThe revised paper improved clarity, though I suggest to proofread the newly-written parts for grammar and typos.\nI've updated my score accordingly.", "The paper proposes a two-stage approach to learning with noisy labels (LNL).\n1. a. Clean sample...
[ -1, 6, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ -1, 4, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "yk0g4GQJaSv", "iclr_2022_HY6i9FYBeFG", "le_UmWJGerR", "P5nk9qojMia", "iclr_2022_HY6i9FYBeFG", "oWwKFXWNLdq", "4Nsj5ZPnTo9", "7nomRzpQV9X", "Fw7jnachHWe", "0fZKKc_sTGW", "iclr_2022_HY6i9FYBeFG", "6TtaYuG9UKi", "uxTcwWpDDyM", "6NHfTdLZMhk", "iclr_2022_HY6i9FYBeFG", "YFf5VqIqENs", "YFf...
iclr_2022_uF_Wl0xSA7O
Independent Component Alignment for Multi-task Learning
We present a novel gradient-based multi-task learning (MTL) approach that balances training in multi-task systems by aligning the independent components of the training objective. In contrast to state-of-the-art MTL approaches, our method is stable and preserves the ratio of the gradients of highly correlated tasks. The method is scalable, reduces overfitting, and can seamlessly handle multi-task objectives with a large difference in gradient magnitudes. We demonstrate the effectiveness of the proposed approach on a variety of MTL problems including digit classification, multi-label image classification, camera relocalization, and scene understanding. Our approach performs favourably compared to other gradient-based adaptive balancing methods, and its performance is backed up by theoretical analysis.
Reject
This paper presents work on multi-task learning. The reviewers appreciated the method based on SVD of loss gradients. However, concerns were raised regarding empirical effectiveness and overall impact. The reviewers considered the authors' response in their subsequent discussions. While the methods are interesting, the concerns over their effectiveness would need to be more thoroughly addressed in order to improve the impact of the paper. As such, it is encouraged that the authors take these suggestions into account in preparing a new version of the paper for a future submission.
train
[ "AgtM3lbzZzQ", "utsVytrCmAs", "kubQUffVgW9", "9b0AKnBhL_6", "AGW01A-_fHa", "_FPCLpT7b4m", "9s25o33Y4D", "HzNBjimE1Wn", "XiAISQ1STl", "pxphCK1IFz", "ypCGZW5AmHv", "CwCX2f3W5Mj" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for going through our response and getting back to us. Regarding your remaining concern: We agree with you that the statement of surpassing baselines on page 7 in the MultiMNIST paragraph should be tuned down. In its current form, it also does not keep the focus on the relevant message of the experiment...
[ -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, 5, 6 ]
[ -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, 5, 3 ]
[ "utsVytrCmAs", "9s25o33Y4D", "iclr_2022_uF_Wl0xSA7O", "AGW01A-_fHa", "HzNBjimE1Wn", "iclr_2022_uF_Wl0xSA7O", "ypCGZW5AmHv", "XiAISQ1STl", "kubQUffVgW9", "CwCX2f3W5Mj", "iclr_2022_uF_Wl0xSA7O", "iclr_2022_uF_Wl0xSA7O" ]
iclr_2022_bmGLlsX_iJl
EMFlow: Data Imputation in Latent Space via EM and Deep Flow Models
The presence of missing values within high-dimensional data is a ubiquitous problem for many applied sciences. A serious limitation of many available data mining and machine learning methods is their inability to handle partially missing values, so an integrated approach that combines imputation and model estimation is vital for downstream analysis. A computationally fast algorithm, called EMFlow, is introduced that performs imputation in a latent space via an online version of the Expectation-Maximization (EM) algorithm, using a normalizing flow (NF) model which maps the data space to a latent space. The proposed EMFlow algorithm is iterative, alternately updating the parameters of the online EM and the NF. Extensive experimental results for high-dimensional multivariate and image datasets are presented to illustrate the superior performance of EMFlow compared to a couple of recently available methods in terms of both predictive accuracy and speed of algorithmic convergence.
Reject
This paper proposes a data imputation method for MCAR and MAR data by combining EM and normalizing flows. The paper is clearly written. The idea is interesting, and the authors show better performance compared to MCFlow and competing methods on ten multivariate UCI datasets and the MNIST and CIFAR10 image datasets. Issues regarding limited novelty compared to MCFlow were raised. Issues regarding the validity of Assumption 2 on the dependencies in the latent space and observation space were also raised.
val
[ "11SUrc0VF-e", "YwuzdRW-2Ds", "wTQapKwa4sE", "pjoIZ2kG9jS", "1l78YK1VJ4T", "kCiL8rm3zBz", "7QMhBH83_Bq", "CLYnDJXAEwm", "rUetbqYQl0", "zD1g9lJPSIl", "LiYGAQYepqF", "CFYFwWY_OqB", "22dnWja_Rf", "8LLp1OznxG", "P0QsASnSkq", "WvlKhCyqo-e" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer again for detailed read and constructive comment, and hope that you find our response satisfying. If you have any further questions or concerns, we are willing to provide more explanations and experiments. Meanwhile, if our previous response addresses your concerns, we sincerely hope that yo...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 5 ]
[ "8LLp1OznxG", "LiYGAQYepqF", "pjoIZ2kG9jS", "1l78YK1VJ4T", "kCiL8rm3zBz", "7QMhBH83_Bq", "CLYnDJXAEwm", "P0QsASnSkq", "WvlKhCyqo-e", "8LLp1OznxG", "CFYFwWY_OqB", "22dnWja_Rf", "iclr_2022_bmGLlsX_iJl", "iclr_2022_bmGLlsX_iJl", "iclr_2022_bmGLlsX_iJl", "iclr_2022_bmGLlsX_iJl" ]
iclr_2022_s2UpjzX82FS
Privacy-preserving Task-Agnostic Vision Transformer for Image Processing
Distributed collaborative learning approaches such as federated and split learning have attracted significant attention lately due to their ability to train neural networks using data from multiple sources without sharing data. However, they are not usually suitable in applications where each client carries out different tasks with its own data. Inspired by the recent success of Vision Transformer (ViT), here we present a new distributed learning framework for image processing applications, allowing clients to learn multiple tasks with their private data. The key idea arises from a novel task-agnostic Vision Transformer that is introduced to learn the global attention independent of specific tasks. Specifically, by connecting task-specific heads and tails at client sides to a task-agnostic Transformer body at a server side, each client learns a translation from its own task to a common representation, while the Transformer body learns global attention between the features embedded in the common representation. To enable decomposition between the task-specific and common representation, we propose an alternating training strategy in which task-specific learning for the heads and tails is run on the clients by fixing the Transformer, which alternates with task-agnostic learning for the Transformer on the server by freezing the heads and tails. Experimental results on multi-task learning for various image processing show that our method synergistically improves the performance of the task-specific network of each client while maintaining privacy.
Reject
The paper aims to devise a distributed, multi-task, privacy-preserving framework for image processing. In this regard, the authors propose partitioning neural network models into task-specific heads/tails and a common task-agnostic feature backbone (body). A training procedure is designed which is claimed to be privacy preserving, wherein the head and tail are trained locally on the client, or using federated learning when multiple clients share a task, while the main backbone/body is trained in a centralized manner by collecting appropriate gradients from the clients. Making easy-to-follow code available is also highly appreciated. We thank the reviewers and authors for engaging in an active discussion and also updating the paper. While the new version definitely resolves some of the concerns of the reviewers, some still remain. The privacy-preserving claim in the title and in the main body of the paper seems misleading: the proposed method doesn't provide any guarantees for privacy (as pointed out by many reviewers). The author response doesn't seem convincing, and other federated learning papers do not claim privacy unless they include some specific mechanism like adding noise, secure aggregation, etc. Also, the reviewers are in consensus that the novelty as well as the large-scale empirical evaluation is limited.
train
[ "JLj3giD2xMw", "FEanhDHz9u1", "0KJNjiMzCV6", "ecV6zBzXH", "CtnOb-viAzp", "ssjiOWti4r", "KaVTOfhxlB", "4W6zOcK04GE", "OOEaa6uHhDA", "YzMV0WDgEEV", "eBxBfQ3UzHC", "gSovst_c9Fz", "A16nvTlZ58", "3kk_ndk_GH5", "1-R0kmE6-Ff", "mxtaT0ZcDg0", "XzxR6BJPxyr", "EyDpO3J0Lpe", "xe9gqFy6vQX", ...
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", ...
[ " Thank you for the additional comment. Most of the existing federated / split learning frameworks focus on the confidentiality of the raw data. This is especially true for image processing, where the key point of privacy risk is the raw data itself rather than the extracted features or statistics. This is why we r...
[ -1, -1, -1, -1, 5, -1, -1, 6, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ -1, -1, -1, -1, 2, -1, -1, 3, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "FEanhDHz9u1", "ecV6zBzXH", "KaVTOfhxlB", "ssjiOWti4r", "iclr_2022_s2UpjzX82FS", "T4IVPzFmNYV", "ZSKHVTxG8d", "iclr_2022_s2UpjzX82FS", "YzMV0WDgEEV", "EyDpO3J0Lpe", "iclr_2022_s2UpjzX82FS", "iclr_2022_s2UpjzX82FS", "-5SuxV4JhMC", "4W6zOcK04GE", "4W6zOcK04GE", "4W6zOcK04GE", "4W6zOcK0...
iclr_2022_olQbo52II9
Learning to Solve Combinatorial Problems via Efficient Exploration
From logistics to the natural sciences, combinatorial optimisation on graphs underpins numerous real-world applications. Reinforcement learning (RL) has shown particular promise in this setting as it can adapt to specific problem structures and does not require pre-solved instances for these, often NP-hard, problems. However, state-of-the-art (SOTA) approaches typically suffer from severe scalability issues, primarily due to their reliance on expensive graph neural networks (GNNs) at each decision step. We introduce ECORD, a novel RL algorithm that alleviates this expense by restricting the GNN to a single pre-processing step, before entering a fast-acting exploratory phase directed by a recurrent unit. Experimentally, we demonstrate that ECORD achieves a new SOTA for RL algorithms on the Maximum Cut problem, whilst also providing orders of magnitude improvement in speed and scalability. Compared to the nearest competitor, ECORD reduces the optimality gap by up to 73% on 500-vertex graphs with decreased wall-clock time. Moreover, ECORD retains strong performance when generalising to larger graphs with up to 10000 vertices.
Reject
The paper proposes an efficient RL-based approach for solving the weighted maximum cut problem. The proposed approach shares high-level insights with prior work such as ECO-DQN (Barrett et al.) and S2V-DQN; the key contribution is to demonstrate that the proposed cheap action decoding and stochastic policy strategy can improve scalability without sacrificing much solution quality on the tasks considered in this paper. The reviewers in general find the paper well presented, and especially note the clear motivation for improving the efficiency of current GNN-based RL baselines, particularly represented by ECO-DQN. A common concern among the reviewers is that the original title is misleading; the authors acknowledge that they should properly position the paper to avoid the impression that they address general combinatorial optimization problems (as the current title suggests). Notably, many combinatorial optimization problems can be reduced to max-cut, as suggested in the authors' responses; demonstrating the performance on (some of) these problems via a max-cut reduction would help support the significance of this work. Beyond the title and positioning of this work, there was also initial confusion among the committee in terms of the choice of both (RL or supervised) learning-based and heuristic-based baselines. The authors did an excellent job of clarifying many of the questions in terms of related work and baselines (the clarity of the work has improved over the rebuttal phase). However, despite the additional ablation study and newly added baselines, there remain concerns/questions about the choice of task domains (a lack of hard problem instances where existing solvers, learning- or heuristics-based, may fail due to (possibly higher) computational complexity). Given the empirical focus of the paper, this appears to be an important concern, and not all reviewers are convinced the current empirical results are significant enough to warrant acceptance of this work.
train
[ "QRuKZ1tZQh", "C0oCYaOe0l4", "48cGfVxuuDF", "iyQ6zt0iSaR", "MmPT3qxBGFI", "yAM5e-LvTNM", "GSH5byJ_t0P", "Uh8nJAd3cvG", "BrCOyPNXx9p", "rFy6V7j-tz", "ZiIGML6I1l", "mX-bTR11JxX", "UzZ4CLpRNlH", "lbBW9C9Pv1a", "TkoBqHJr-fX", "Gmsiy4Qy53", "b4nPGN-PBki", "VETF-muAPP", "VMOb6EM6ImH", ...
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_...
[ " I thank the authors for the response. I agree that ECORD has clear improvement over previous RL-based methods in terms of scalability (but for raw performance, I am not convinced since the improvement is marginal to ECO-DQN).\n\nSince the main contribution is scalability, it is natural to demonstrate the advantag...
[ -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 3 ]
[ -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "BrCOyPNXx9p", "MmPT3qxBGFI", "iclr_2022_olQbo52II9", "GSH5byJ_t0P", "Uh8nJAd3cvG", "iclr_2022_olQbo52II9", "Gmsiy4Qy53", "ZiIGML6I1l", "rFy6V7j-tz", "lbBW9C9Pv1a", "mX-bTR11JxX", "UzZ4CLpRNlH", "qBix5QCmLp", "TkoBqHJr-fX", "T9qhV2GnXVb", "b4nPGN-PBki", "yAM5e-LvTNM", "VMOb6EM6ImH"...
iclr_2022_OBwsUF4nFye
Private Multi-Task Learning: Formulation and Applications to Federated Learning
Many problems in machine learning rely on multi-task learning (MTL), in which the goal is to solve multiple related machine learning tasks simultaneously. MTL is particularly relevant for privacy-sensitive applications in areas such as healthcare, finance, and IoT computing, where sensitive data from multiple, varied sources are shared for the purpose of learning. In this work, we formalize notions of task-level privacy for MTL via joint differential privacy (JDP), a relaxation of differential privacy for mechanism design and distributed optimization. We then propose an algorithm for mean-regularized MTL, an objective commonly used for applications in personalized federated learning, subject to JDP. We analyze our objective and solver, providing certifiable guarantees on both privacy and utility. Empirically, we find that our method allows for improved privacy/utility trade-offs relative to global baselines across common federated learning benchmarks.
Reject
Many problems in machine learning rely on multi-task learning (MTL), in which the goal is to solve multiple related machine learning tasks simultaneously. In this work, the authors formalize notions of task-level privacy for MTL via joint differential privacy (JDP). They propose an algorithm for mean-regularized MTL, an objective commonly used for applications in personalized federated learning, subject to JDP. They then analyze the objective and solver, providing certifiable guarantees on both privacy and utility. The main results, namely the convergence rate results, are hard to parse and hard to interpret. For example, as one reviewer pointed out, the rate is bounded below by a constant that is not properly explained. Further, comparisons to the literature on user-level privacy (which is equivalent to task-level privacy) are not sufficiently provided. Significant improvement in the presentation of the main results, along with an interpretable explanation of the contribution, is necessary for this manuscript.
test
[ "64kpoCDUg9j", "uTwrEjLgnOu", "IKGbCQh8Ntz", "nAufw5h_TsG", "qu3rBpY6zgb", "6HXXcUWNet", "p_69oIn-VJz", "Dw7vM1_VS8J", "7w978-3lRtD", "Zu9ihIC4zjx", "DvpsRW3Y45", "S3JXtrP0orc", "nVTNTIrqwU5", "MPl8EbcMby0", "_H8jbk7UuJd", "PIwwUkaZdyL", "NogzuELiSwI" ]
[ "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer:\n\nThanks again for your detailed comments and suggestions. We would like to check to see whether our response has adequately resolved your concerns? If not, we would appreciate it if you could let us know whether there are any further concerns that we can discuss. In particular, we hope that we ha...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 3 ]
[ "IKGbCQh8Ntz", "p_69oIn-VJz", "nAufw5h_TsG", "qu3rBpY6zgb", "6HXXcUWNet", "Zu9ihIC4zjx", "Dw7vM1_VS8J", "7w978-3lRtD", "DvpsRW3Y45", "nVTNTIrqwU5", "NogzuELiSwI", "PIwwUkaZdyL", "_H8jbk7UuJd", "iclr_2022_OBwsUF4nFye", "iclr_2022_OBwsUF4nFye", "iclr_2022_OBwsUF4nFye", "iclr_2022_OBwsU...
iclr_2022_RB_2cor6d-w
Towards Physical, Imperceptible Adversarial Attacks via Adversarial Programs
Adversarial examples were originally defined as imperceptible perturbations which cause a deep neural network to misclassify. However, the majority of imperceptible perturbation attacks require perturbing a large number of pixels across the image and are thus hard to execute in the physical world. Existing physical attacks rely on physical objects, such as patches/stickers or 3D-printed objects. Producing adversarial patches is arguably easier than 3D-printing, but these attacks incur highly visible perturbations. This raises the question: is it possible to generate adversarial examples with imperceptible patches? In this work, we consider adversarial multi-patch attacks, where the goal is to compute a targeted attack consisting of up to K patches with minimal L2 distortion. Each patch is associated with dimensions, position, and perturbation parameters. We leverage ideas from program synthesis and numerical optimization to search in this large, discrete space and obtain attacks that are competitive with the C&W attack but have at least 3x and up to 10x fewer perturbed pixels. We evaluate our approach on MNIST, Fashion-MNIST, CIFAR-10, and ImageNet and obtain a success rate of at least 92% and up to 100% with at most ten patches. For MNIST, Fashion-MNIST, and CIFAR-10, the average L2 distortion is greater than the average L2 distortion of the C&W attack by up to 1.2.
Reject
This paper studies physical "adversarial programs" that allow an attacker to control a machine learning model by placing transparent patches on top of an image. The reviewers are split on this paper: while some reviewers like the work, others are concerned about the practicality, novelty, or utility of the attack. Starting with novelty, reviewers raise valid concerns about how this approach is similar to prior attacks that generate programs. The authors respond here, but the overall question remains unanswered, and it is not clear which of the new pieces this paper introduces are responsible for the success. (Would prior techniques have sufficed? If not, what part of prior methods makes this not the case?) For utility, the paper does not make a clear case for why it would be easier for an adversary to place N~=5 patches on top of an image as compared to other physical attacks (see especially Li et al. 2019 as a paper that deserves more than a sentence of comparison---why is this approach easier?). One final comment raised by many reviewers is that the title and setup of this paper heavily lean on the "physical" component of the evaluation, and yet the paper does not demonstrate anything physical. The authors' rebuttal that the word "towards" absolves them of responsibility for trying an attack in the real world does not convince me; either the paper should attempt this attack in the physical world (and say whether or not it works), or make it clear from the start that the attack is digital, but motivated by the physical world. Prior accepted papers that include "physical world" in the title (e.g., Kurakin et al., Athalye et al., Li et al.) don't solve the problem completely, but at least run experiments in the physical world.
train
[ "18Sd-0qG5MZ", "BBiFG2VZLKR", "TOF-XQo-y2S", "Ga5-no4V_4I", "u9YyOlUVqsH", "mg0y50kI7W", "9Q822F2f8pM", "HHpu0yEIydE", "FeYh-vG0TbX", "GPZdr4ismOT", "nksmSx0Zg5c", "WyPz-d9xLl", "kkoia4OZG7W", "BMehO96Hb6u", "FOrSvods4a", "uJIDV4cL-L9", "teZR7MgJiP", "bAA_O3hnDx7" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your feedback. We apologize for not addressing your concerns about our program synthesis formulation and scalability in our previous response. We thought the numbered points detail all the reviewer’s concerns. Here is our response:\n- Program synthesis formulation: Our algorithm draws inspiration fr...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 3, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5, 5, 4 ]
[ "TOF-XQo-y2S", "Ga5-no4V_4I", "HHpu0yEIydE", "kkoia4OZG7W", "iclr_2022_RB_2cor6d-w", "9Q822F2f8pM", "uJIDV4cL-L9", "bAA_O3hnDx7", "teZR7MgJiP", "nksmSx0Zg5c", "uJIDV4cL-L9", "FOrSvods4a", "BMehO96Hb6u", "iclr_2022_RB_2cor6d-w", "iclr_2022_RB_2cor6d-w", "iclr_2022_RB_2cor6d-w", "iclr_...
iclr_2022_8KD0wdSF2NE
A composable autoencoder-based algorithm for accelerating numerical simulations
Numerical simulations for engineering applications solve partial differential equations (PDE) to model various physical processes. Traditional PDE solvers are very accurate but computationally costly. On the other hand, Machine Learning (ML) methods offer a significant computational speedup but face challenges with accuracy and generalization to different PDE conditions, such as geometry, boundary conditions, initial conditions and PDE source terms. In this work, we propose a novel ML-based approach, CoAE-MLSim, which is an unsupervised, lower-dimensional, local method motivated by key ideas used in commercial PDE solvers. This allows our approach to learn better with relatively fewer samples of PDE solutions. The proposed ML approach is compared against commercial solvers for better benchmarks, as well as the latest ML approaches for solving PDEs. It is tested on a variety of complex engineering cases to demonstrate its computational speed, accuracy, scalability, and generalization across different PDE conditions. The results show that our approach captures physics accurately across all metrics of comparison (including measures such as results on section cuts and lines).
Reject
This paper proposes a new approach called CoAE-MLSim as a faster alternative to PDE solvers. According to the authors, the main strength of the method is that it needs fewer training data, however, as pointed out by the reviewers, the factor is not significant enough in the current results. Another concern raised by the reviewers is that the iterative algorithm is much more expensive than only one forward pass of other ML methods. Furthermore, the reviewers criticized that the comparisons with baselines are not sufficient, and some claims in the paper were not backed up. The authors provided their rebuttals and had a long discussion with most of the reviewers. Some clarifications have been made, and some reviewers increased their scores accordingly. However, overall speaking, the major problems with the paper still remain, and the rebuttal has not successfully convinced the reviewers to turn to the positive side. Therefore, we cannot give a green light to this paper yet.
train
[ "sr7vXPajAaG", "VuaNPH7Aq72", "FJOnf-Sh0aB", "rAcGd8qyZPU", "Qbn33981-WT", "wnki-L6sS1A", "chlsul0erzR", "lnoE4VUAXUM", "qXa08surtWE", "mMzQZIbMdh", "9hi-dngVFM", "TnERVmldVxR", "l08xYPZJFyO", "OEEQF3jw0dM", "tEgSDBfptFA", "xJTELtKedQp", "rU-GUW50J5H", "uTJKGbIgdhH", "992qoOSTaAN...
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", ...
[ " We were wondering if you had any other questions or comments about the recent revision we submitted. Thanks.", "The paper proposed CoAE-MLSim to learn with relatively fewer samples of PDE solutions and solve PDEs. CoAE-MLSim uses the idea of domain decomposition: first learn the solution on local subdomains usi...
[ -1, 5, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5 ]
[ -1, 5, -1, -1, -1, -1, -1, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "wnki-L6sS1A", "iclr_2022_8KD0wdSF2NE", "OEEQF3jw0dM", "XJuQ33BjmYc", "VuaNPH7Aq72", "chlsul0erzR", "rU-GUW50J5H", "iclr_2022_8KD0wdSF2NE", "mMzQZIbMdh", "xJTELtKedQp", "TnERVmldVxR", "l08xYPZJFyO", "hKu5xnUeiga", "tEgSDBfptFA", "VuaNPH7Aq72", "lnoE4VUAXUM", "uTJKGbIgdhH", "992qoOS...
iclr_2022_Jep2ykGUdS
DEUP: Direct Epistemic Uncertainty Prediction
Epistemic uncertainty is the part of out-of-sample prediction error due to the lack of knowledge of the learner. Whereas previous work focused on model variance, we propose a principled approach for directly estimating epistemic uncertainty by learning to predict generalization error and subtracting an estimate of aleatoric uncertainty, i.e., intrinsic unpredictability. This estimator of epistemic uncertainty includes the effect of model bias (or misspecification) and is useful in interactive learning environments arising in active learning or reinforcement learning. In addition to discussing these properties of Direct Epistemic Uncertainty Prediction (DEUP), we illustrate its advantage against existing methods for uncertainty estimation on downstream tasks including sequential model optimization and reinforcement learning. We also evaluate the quality of uncertainty estimates from DEUP for probabilistic classification of images and for estimating uncertainty about synergistic drug combinations.
Reject
The manuscript develops a novel method for uncertainty prediction that can be used in the context of active or reinforcement learning problems. The authors consider experiments such as an OOD detection task wherein a ResNet is trained on CIFAR10 and predictions must subsequently be made for in- versus out-of-distribution (SVHN) data. The work develops an approach based on directly estimating epistemic (as opposed to aleatoric) uncertainty by learning to predict generalization error and then subtracting estimated aleatoric uncertainty. Reviewers found the essential approach to be novel and creative. However, there were several issues raised by reviewers that are not well addressed by the authors' responses. For example, reviewer Zaec worries about the dependence of the approach on an oracle for estimating aleatoric uncertainty. Multiple reviewers were concerned that this would make the approach unsuitable for many situations and thus limit the applicability of the ideas. Multiple reviewers also found the manuscript difficult to understand. I agree with the sentiment. While there may indeed be an interesting and important idea here, the text and explication of the algorithm and approach are complicated and leave the reader unsure about the contribution. I would recommend that the authors invest time and effort in simplifying and streamlining the narrative and presenting the technical innovation so that it is easier to judge. In its current form, the manuscript is premature for publication.
train
[ "BUN2NKIKsrC", "1OuiEY-lBna", "YIU0OgZvRuQ", "sPNQdWe4qTv", "FItABC9ipX_", "d3Jnt-MA1Ya", "qBK4ZNFIS2q", "LJCjcrY3wft", "7VQk7K_Gc66", "iUQhZxnRuff", "OVRyarydYcJ", "Jw1zDsaBzXO", "JuvKxV7wYl", "1aS6WA7vcVH", "ZjqKaO-arSO", "p4kQTFll5MU", "j7dh47e5Zf", "sMbKY7o-hpZ", "xxDCMzhkeT-...
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "public", "author", "author", "author", ...
[ " With the discussion period about to end, we hope the rebuttal and updated draft addressed all your questions and concerns, we would be happy to clarify further if you have any remaining questions or clarifications about the rebuttal itself or the updated draft!", " With the discussion period about to end, we ho...
[ -1, -1, -1, 5, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ -1, -1, -1, 3, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "-LOIr9T5p6d", "32MHuDVxdOV", "FItABC9ipX_", "iclr_2022_Jep2ykGUdS", "Lfe4cSibWM", "sPNQdWe4qTv", "32MHuDVxdOV", "-LOIr9T5p6d", "OVRyarydYcJ", "iclr_2022_Jep2ykGUdS", "Jw1zDsaBzXO", "JuvKxV7wYl", "1aS6WA7vcVH", "ZjqKaO-arSO", "j7dh47e5Zf", "sMbKY7o-hpZ", "iclr_2022_Jep2ykGUdS", "ic...
iclr_2022_2p_5F9sHN9
The Geometry of Adversarial Subspaces
Artificial neural networks (ANNs) are constructed using well-understood mathematical operations, and yet their high-dimensional, non-linear, and compositional nature has hindered our ability to provide an intuitive description of how and why they produce any particular output. A striking example of this lack of understanding is our inability to design networks that are robust to adversarial input perturbations, which are often imperceptible to a human observer but cause significant undesirable changes in the network’s response. The primary contribution of this work is to further our understanding of the decision boundary geometry of ANN classifiers by utilizing such adversarial perturbations. For this purpose, we define adversarial subspaces, which are spanned by orthogonal directions of minimal perturbation to the decision boundary from any given input sample. We find that the decision boundary lies close to input samples in a large subspace, where the distance to the boundary grows smoothly and sub-linearly as one increases the dimensionality of the subspace. We undertake analysis to characterize the geometry of the boundary, which is more curved within the adversarial subspace than within a random subspace of equal dimensionality. To date, the most widely used defense against test-time adversarial attacks is adversarial training, where one incorporates adversarial attacks into the training procedure. Using our analysis, we provide new insight into the consequences of adversarial training by quantifying the increase in boundary distance within adversarial subspaces, the redistribution of proximal class labels, and the decrease in boundary curvature.
Reject
The work investigates the decision boundary of neural networks by quantifying in various ways the shape and curvature of the error set local to correctly classified inputs, dubbed the "adversarial subspace". First, a method is introduced which seeks to find the largest set of orthogonal directions starting at an input x which will all intersect the error set local to an image. This is motivated as a certain geometric measure of the error set: large, sprawling error sets may have many orthogonal directions which intersect them local to the given input, while small, narrow error sets may have relatively few. Using this geometric measure, the authors compare the shape of adversarial subspaces of various image models both with and without adversarial training. After the rebuttal period, reviewers all felt that the work was borderline, with no one strongly advocating for it. As noted by some reviewers, while some experiments may be interesting, it was unclear what new insights the work contributes. For example, the authors argue that the change in the geometry of the error set explains why adversarial training works. It is unclear how this is an explanation more than it is simply an observation that the error set geometry has changed. An analogy would be trying to explain why ResNet-50 performs better than AlexNet by showing that it has higher test accuracy---this only shows that it is better, but doesn't explain why. During the discussion period the AC raised additional concerns regarding a sanity check that the authors' main algorithm should pass. In particular, consider an error set x_1 >= K(x_2^2 + ... + x_n^2) + C, parameterized by constants K and C > 0. For all choices of K and C and starting point x = (0, ..., 0), the authors' main algorithm will always return 1 as the dimensionality of this error set. It will find the vector (1, 0, ..., 0) and then terminate. However, this is problematic because we can choose K and C to make this error set either very narrow (e.g. K=C=100) or very wide (K = .000001, C = .00001)---the proposed algorithm will be unable to distinguish between these two extremes. Given this, it seems that greedily selecting the set of orthogonal directions starting at x can be very suboptimal if the intent is to find a maximum-sized set of orthogonal directions. To conclude, the work would be substantially improved if it addressed two major weaknesses. First, there needs to be a clearer motivation for studying this notion of geometry of the error set: what new insights can the authors provide other than that adversarial training changes the shape of the error set? Second, the method doesn't seem principled, given that it is unable to distinguish between the two extreme cases discussed above.
train
[ "Z2tmCQ1_9at", "b2vOOXLs7a", "-rVtwC4Q6FC", "i67AvL0w9yi", "i6jPsQLfExD", "QDDT0vZeOa6", "e7pK-C9-18K", "d6iy3OSd7Xs", "LiP09SqMX3-", "QYUt8LCKAZ", "cw-UB0A1mcW", "Ft-0gapaal", "nxkliQXVbj3", "kcPwablw401", "X57IjU616Bu", "buvn8qQKDLf", "Nv-DsrjGNW1", "NNfw60DyW1", "QmpglwetF1", ...
[ "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_r...
[ " We propose the following modifications to our manuscript to clarify the distinctions made in the earlier reply.\n\nWe see that the term adversarial subspaces might be misleading and we would be willing to change it to provide more clarity (this was also suggested by reviewer Pjpz). For example, we could change th...
[ -1, -1, -1, -1, -1, -1, 6, -1, 6, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, 4 ]
[ "-rVtwC4Q6FC", "-rVtwC4Q6FC", "iclr_2022_2p_5F9sHN9", "QYUt8LCKAZ", "LiP09SqMX3-", "QmpglwetF1", "iclr_2022_2p_5F9sHN9", "QmpglwetF1", "iclr_2022_2p_5F9sHN9", "iclr_2022_2p_5F9sHN9", "NW-w4BJviGt", "LiP09SqMX3-", "LiP09SqMX3-", "e7pK-C9-18K", "Nv-DsrjGNW1", "Nv-DsrjGNW1", "iclr_2022_...
iclr_2022_pC00NfsvnSK
Improving zero-shot generalization in offline reinforcement learning using generalized similarity functions
Reinforcement learning (RL) agents are widely used for solving complex sequential decision making tasks, but still exhibit difficulty in generalizing to scenarios not seen during training. While prior online approaches demonstrated that using additional signals beyond the reward function can lead to better generalization capabilities in RL agents, e.g. via self-supervised learning (SSL), they struggle in the offline RL setting, i.e. learning from a static dataset. We show that the performance of online algorithms for generalization in RL can be hindered in the offline setting due to poor estimation of similarity between observations. We propose a new theoretically-motivated framework called Generalized Similarity Functions (GSF), which uses contrastive learning to train an offline RL agent to aggregate observations based on the similarity of their expected future behavior, where we quantify this similarity using generalized value functions. We show that GSF is general enough to recover existing SSL objectives while also improving zero-shot generalization performance on a complex offline RL benchmark, offline Procgen.
Reject
The paper proposes a new offline RL technique to generalize across domains. The paper was initially confusing (i.e., MDP vs POMDP) and weak empirically. The authors greatly improved the paper. However, at the end of the day, it is still not clear why the proposed approach performs better than existing techniques. We can think of the cumulant function with the discrete labels as essentially computing some statistics of future actions, observations and rewards. This is what every self-supervised technique does. They differ in terms of their particular choice of statistics and architecture. The paper does not sufficiently motivate the particular architecture. Interestingly, in the experiments, the best statistics are cumulative rewards, which are closely related to the Q-values. In that case, it is even less clear why the approach should be beneficial, since RL techniques that generalize across domains by learning state representations to predict Q-values seem very closely related. Despite the updates to the paper, the POMDP references are still confusing. The issue is that the paper embeds observations as if they were sufficient to predict future observations and rewards. This corresponds to the memoryless approach, where a policy is optimized based on the last observation instead of the history of past actions and observations. Memoryless strategies are effective only when the last observation is a sufficient statistic, meaning that we really have a (near) fully observable MDP. The paper should discuss this and acknowledge that the approach will suffer in domains where memory of past actions and observations is critical.
train
[ "ajVZZdrPb7j", "_GhDTKCkgjk", "xTRt1Ii2Umf", "mwVJtLBBzrW", "z2MQtIavqfH", "8t1ismfV9Kx", "dqORxYoToqy", "BUyQFSCKiXc", "yG9tvcPOg5Y9", "e9W70Zq4LgUq", "d6uDEXgrHDQ", "h5oinElEZz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes improving the generalization performance offline RL algorithms by using a new approach to aggregate observations based on the similarity of their expected future behavior. The idea of aggregating observations based on behavior similarity has already been explored in some papers, but the approac...
[ 5, -1, 6, -1, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ 3, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, 2 ]
[ "iclr_2022_pC00NfsvnSK", "BUyQFSCKiXc", "iclr_2022_pC00NfsvnSK", "iclr_2022_pC00NfsvnSK", "ajVZZdrPb7j", "h5oinElEZz", "d6uDEXgrHDQ", "xTRt1Ii2Umf", "ajVZZdrPb7j", "ajVZZdrPb7j", "iclr_2022_pC00NfsvnSK", "iclr_2022_pC00NfsvnSK" ]
iclr_2022_gI7KCy4UDN9
Post-Training Quantization Is All You Need to Perform Cross-Platform Learned Image Compression
Learned image compression has outperformed conventional image coding techniques and is becoming practical for industrial applications. One of the most critical issues preventing it from being practical is non-deterministic calculation, which makes the probability prediction inconsistent across platforms and frustrates successful decoding. We propose to solve this problem by introducing well-developed post-training quantization and making the model inference integer-arithmetic-only, which is much simpler than existing training- and fine-tuning-based approaches yet still keeps the superior rate-distortion performance of learned image compression. Based on that, we further improve the discretization of the entropy parameters and extend the deterministic inference to fit Gaussian mixture models. With our proposed methods, the current state-of-the-art image compression models can infer in a cross-platform consistent manner, which makes the further development and practice of learned image compression more promising.
Reject
The paper considers quantization issues for learned neural-network-based image compression methods. Many works on the topic incorporate quantization into the training of the method. The paper provides evidence that post-training quantization is effective. Specifically, the paper demonstrates that state-of-the-art learned image compression methods can be quantized post-training and retain a very similar level of compression performance. The paper argues that this is important in particular for cross-platform applications, where an image is decompressed on different architectures. Finally, the paper proposes an approach to discretize entropy parameters. The reviewers raised the following concerns. - Reviewer 2XDr is concerned about the application of post-training quantization being a contribution, since post-training quantization has already been studied in [1] (and in the recent paper [2], which can be considered concurrent work). The authors' response is that the method in [1] has extra overhead, and they clarify how the prior work is in fact different. This addressed the reviewer's concern, and the reviewer raised their score. - Reviewer eyVf finds the comparisons with previous methods insufficient, and in general finds the value of the research unclear, as the goals are not sufficiently specified. The authors clarified, and the reviewer was satisfied with the response and raised the rating to marginally above the threshold. - Reviewer L7dn tends towards acceptance, but has concerns about the technical novelty that are, unfortunately, unspecified. - Reviewer oV3R argues that the solution is marginal relative to prior work, and votes to reject the paper. The authors responded why they think it isn't, and also wrote a private letter to the area chair in which they explain why they think reviewer oV3R's assessment should not be taken into account. I agree with the authors that the paper under review provides a step relative to Balle et al. (2019), and that the writing of the paper is not an issue; however, the reviewer's overarching point is that the overall contribution is marginally significant when taking the prior work by Balle et al. (2019) into account, and this is the sentiment of other reviewers as well. - Reviewer GrpS, an expert on image compression, leans towards acceptance and argues that the results are strong as they show little to no loss due to the quantization technique, but also rates the contribution to be only marginally significant and novel, and raises a few questions and issues, to which the authors responded. This paper is really borderline. Four out of five reviewers rate this paper as marginally above the acceptance threshold. The consensus is that while the experiments and claims are correct, the contribution is only marginally significant or novel, in particular relative to prior work, and therefore I recommend rejecting the paper. I would, however, not be upset if it were accepted.
test
[ "HingLVqXJAg", "UdfQBJvrYw9", "xNoPs-18dH_", "aSXDS7ZI49", "p1nWo_TdNZ", "oUhfnPYMtcD", "h849YQGek8I", "NGh6GEA9wSp", "7MSziQ-JaCY", "o16v6EE2mvE", "fb4OqQM1p7I", "_VKVDiwIo0O", "g1TaluRrAMi", "qGPBUShMkN", "Oi5-tHwSUj7", "MokNGxXfno-", "tZNEPQekyYL", "_-Zprn8U9N", "5asiGmIOVS6" ...
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_re...
[ " **Good performances on the other context-modeling-based models.** Note that we also present the good performance on Cheng et al. (2020), as shown in Fig.5(a). Furthermore, we present the performance of the two models using parallel checkerboard adaption (He et al., 2021) in the Appendix. Various recent approaches...
[ -1, -1, -1, -1, -1, -1, -1, 6, -1, 6, -1, -1, -1, -1, -1, -1, 6, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, -1, 3, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "UdfQBJvrYw9", "h849YQGek8I", "iclr_2022_gI7KCy4UDN9", "h849YQGek8I", "h849YQGek8I", "fb4OqQM1p7I", "g1TaluRrAMi", "iclr_2022_gI7KCy4UDN9", "Oi5-tHwSUj7", "iclr_2022_gI7KCy4UDN9", "MokNGxXfno-", "5asiGmIOVS6", "_-Zprn8U9N", "tZNEPQekyYL", "NGh6GEA9wSp", "o16v6EE2mvE", "iclr_2022_gI7K...
iclr_2022_AEa_UepnMDX
Resolving label uncertainty with implicit generative models
In prediction problems, coarse and imprecise sources of input can provide rich information about labels, but are not readily used by discriminative learners. In this work, we propose a method for jointly inferring labels across a collection of data samples, where each sample consists of an observation and a prior belief about the label. By implicitly assuming the existence of a generative model for which a differentiable predictor is the posterior, we derive a training objective that allows learning under weak beliefs. This formulation unifies various machine learning settings: the weak beliefs can come in the form of noisy or incomplete labels, likelihoods given by a different prediction mechanism on auxiliary input, or common-sense priors reflecting knowledge about the structure of the problem at hand. We demonstrate the proposed algorithms on diverse problems: classification with negative training examples, learning from rankings, weakly and self-supervised aerial imagery segmentation, co-segmentation of video frames, and coarsely supervised text classification.
Reject
The authors have addressed several of the issues raised by the reviewers, and they are strongly encouraged to include the additional experiments and sections that they propose in a revised submission. The reviewers also recognized the novelty and the extent of applications of the proposed methodology. Nevertheless, the paper would significantly benefit from a rigorous and thorough comparison to related work, placing it well within the context of the literature brought up by the reviewers. Experimental comparisons to competitors, even if the latter address more restrictive settings, would strengthen the paper. Most importantly, the authors should consider including a comprehensive related work section that convincingly discusses and compares to related/adjacent methods.
train
[ "EGlY_NyN7I3", "9n8h2Rd12y", "YNoD63_Z9li", "fHssF6yWHkc", "bWaHY1hDGM5", "hSyhaNDJL8o", "npwUH7LZ-Gl", "FB7U_4GO3Ra", "SadbukLGEH5", "xH5aiMamQSY", "AfIOOUfgHN9", "WaIu_Opa56x" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " 1. We respectfully disagree with the claim that context-dependent distributions cannot be called \"priors\". In fact, a number of standard examples of Bayesian statistics involve distributions termed \"priors\" that depend on auxiliary information or earlier inferences. For instance, in analysis of time series, i...
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "9n8h2Rd12y", "npwUH7LZ-Gl", "iclr_2022_AEa_UepnMDX", "WaIu_Opa56x", "AfIOOUfgHN9", "xH5aiMamQSY", "SadbukLGEH5", "iclr_2022_AEa_UepnMDX", "iclr_2022_AEa_UepnMDX", "iclr_2022_AEa_UepnMDX", "iclr_2022_AEa_UepnMDX", "iclr_2022_AEa_UepnMDX" ]
iclr_2022_size4UxXVCY
Graph Tree Neural Networks
In the field of deep learning, various architectures have been developed. However, most studies are limited to specific tasks or datasets due to their fixed layer structure. In this paper, we do not express the structure delivering information as a network model but as a data structure called a graph tree. We propose two association models of graph tree neural networks (GTNNs) designed to solve the problems of existing networks by analyzing the structure of human neural networks. Defining the starting and ending points in a single graph is difficult, and a tree cannot express the relationship among sibling nodes. On the contrary, a graph tree (GT) can express leaf and root nodes as its starting and ending points and the relationship among sibling nodes. Instead of using fixed sequence layers, we create a GT for each data sample and train the GTNN according to the tree's structure. GTNNs perform data-driven learning in which the number of convolutions varies according to the depth of the tree. Moreover, these models can simultaneously learn various types of datasets through the recursive learning method. Depth-first convolution (DFC) encodes the interaction result from leaf nodes to the root node in a bottom-up approach, and depth-first deconvolution (DFD) decodes the interaction result from the root node to the leaf nodes in a top-down approach. To demonstrate the performance of these networks, we conducted two experiments. The first experiment tests whether various datasets can be processed by combining a GTNN with feature extraction networks (processing various datasets). The second experiment tests whether the output of a GTNN can embed information on all data contained in the GT (association). We compared the performance of existing networks that separately learned image, sound, and natural language datasets with the performance when these networks are connected and learned simultaneously. As a result, these models learned without significant performance degradation, and the output vector contained all the information in the GT.
Reject
First, I would like to thank all the reviewers for their efforts in reading and understanding this paper. I tried to read the paper as well, and I also found it really difficult (if possible at all) to understand the ideas presented here. The most important task in writing a paper in the field of machine learning (as Reviewer Svha also suggested in his/her review) is to explain to your peers what problem you are trying to solve and how you solve (or partially solve) that problem. I think there is a consensus among the reviewers that the paper did not do a great job of that. I am not questioning the quality of the idea or the research here, but I think the paper will need to do a significantly better job of explaining the idea before it can be a good ICLR publication.
train
[ "06ZHog2rf-s", "Xy5-AtjHVqG", "dw7-PN8YOnl", "DGF7Z1kRquV", "d6Rmwlq9dFz", "RDzJWSEbfBs", "KUvMl_MKV4D" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for taking the time to review!\n\nIf possible, may I know which section was not easy to read?\nI want to complete this study, and I will revise it by referring to the comments.", " Thank you for taking the time to review!\n\n1. I added figures to the appendix, but I'm trying to revise the article.\n\n...
[ -1, -1, -1, -1, 1, 1, 1 ]
[ -1, -1, -1, -1, 4, 3, 3 ]
[ "d6Rmwlq9dFz", "KUvMl_MKV4D", "iclr_2022_size4UxXVCY", "RDzJWSEbfBs", "iclr_2022_size4UxXVCY", "iclr_2022_size4UxXVCY", "iclr_2022_size4UxXVCY" ]
iclr_2022_M_o5E088xO5
PROMISSING: Pruning Missing Values in Neural Networks
While data are the primary fuel for machine learning models, they often suffer from missing values, especially when collected in real-world scenarios. However, many off-the-shelf machine learning models, including artificial neural network models, are unable to handle these missing values directly. Therefore, extra data preprocessing and curation steps, such as data imputation, are inevitable before learning and prediction processes. In this study, we propose a simple and intuitive yet effective method for pruning missing values (PROMISSING) during learning and inference steps in neural networks. In this method, there is no need to remove or impute the missing values; instead, the missing values are treated as a new source of information (representing what we do not know). Our experiments on simulated data, several classification and regression benchmarks, and a multi-modal clinical dataset show that PROMISSING results in similar classification performance compared to various imputation techniques. In addition, our experiments show that models trained using PROMISSING techniques become less decisive in their predictions when facing incomplete samples with many unknowns. This finding hopefully advances machine learning models from being pure predicting machines to more realistic thinkers that can also say "I do not know" when facing incomplete sources of information.
Reject
The paper presents a simple and intuitive method to prune missing values in the learning and inference steps of neural networks, leading to prediction performance similar to other methods that impute missing values. It has some really useful insights, but could benefit from one more round of revision for a strong publication: 1. improving the writing so that it sets up the right expectations about the contributions of the paper; 2. providing discussions on its connections (and differences) with zero-imputation and missing-indicator methods; 3. thoroughly investigating the experiment results to illustrate the advantages of the proposed method. The recommendation of reject is made based on the technical aspect of the paper. ----------------------------- During the rebuttal phase, the authors misused the interactive and transparent (for better or worse) openreview system by writing inappropriate comments with personal accusations toward the reviewers who wrote negative reviews. We would like to extend our apologies to the reviewers for this unpleasant experience and thank the reviewers for their engagement and work, as well as their fair assessment of the paper.
train
[ "Kj6u4i6DMn8", "4bRToIZGHmv", "RdRKWllUYnv", "XtkTcDDK3OO", "Ig_ByeFHfdU", "CQAhlDthtf6", "-Hqi0zW6rYy", "9jsn2vERT99", "qptkyCOnn90", "TcbkBC0i6Ha", "thCpvLEV6vq", "jZe3a5Anm9h", "U8cG8aHqC_Z", "Jj1ybZJ_tDl", "4Jg33TpF7Rp", "pLDU2IOqu8T", "ZMsw_aBQ50u", "2KcJ69CGED0", "AZp5HgLHi...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank again the reviewers for engaging in the second phase discussion. Hereby, we would like to summarize the moments of the discussion:\n\n- We appreciate DWUQ's feedback on the revised version. The reviewer commends the additional experiments added to the revised version and realizes the potentials of the pr...
[ -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 6, 6 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 4, 5 ]
[ "iclr_2022_M_o5E088xO5", "iclr_2022_M_o5E088xO5", "XtkTcDDK3OO", "9jsn2vERT99", "2KcJ69CGED0", "2KcJ69CGED0", "2KcJ69CGED0", "33SpNTYou5A", "33SpNTYou5A", "4bRToIZGHmv", "ZMsw_aBQ50u", "iclr_2022_M_o5E088xO5", "AZp5HgLHi-z", "AZp5HgLHi-z", "ZMsw_aBQ50u", "ZMsw_aBQ50u", "iclr_2022_M_o...
iclr_2022_yK_jcv_aLX
Action-Sufficient State Representation Learning for Control with Structural Constraints
Perceived signals in real-world scenarios are usually high-dimensional and noisy, and finding and using a representation that contains the essential and sufficient information required by downstream decision-making tasks helps improve computational efficiency and generalization ability in those tasks. In this paper, we focus on partially observable environments and propose to learn a minimal set of state representations that capture sufficient information for decision-making, termed \textit{Action-Sufficient state Representations} (ASRs). We build a generative environment model for the structural relationships among variables in the system and present a principled way to characterize ASRs based on structural constraints and the goal of maximizing cumulative reward in policy learning. We then develop a structured sequential Variational Auto-Encoder to estimate the environment model and extract ASRs. Our empirical results on CarRacing and VizDoom demonstrate a clear advantage of learning and using ASRs for policy learning. Moreover, the estimated environment model and ASRs allow learning behaviors from imagined outcomes in the compact latent space to improve sample efficiency.
Reject
This paper presents a state representation learning technique aiming at extracting only the state features that are relevant to solving a given task. It combines several ideas, in particular (1) keeping only features that are relevant to taking actions, from an information-theoretic point of view, (2) model-based learning, and (3) sparsity-inducing constraints. Experiments on CarRacing and VizDoom show that the proposed method outperforms existing baselines. Although the authors did a thorough job trying to address the reviewers' comments during the discussion period, in the end most reviewers remained unconvinced by the submission in its current state, the main remaining concerns being:
* Unclear description of the methodology and informal maths
* A somewhat complex optimization objective that may require tuning many hyper-parameters, and whose entire relevance isn't clearly demonstrated empirically
Overall I agree with these concerns, in particular the general feeling that the theoretical part is difficult to follow, with some apparent typos/mistakes (clearly the original submission had a lot of issues, given the original reviews that required the authors to fix several points). To give a concrete example, while reading the final revision I first ran into potential issues with the definition of the objective $H(a_t | ...)$ on p.4:
* Although the left-hand side is associated with a single timestep $t$, the right-hand side is a sum over all values of $t$
* The definition of $q_{\phi}$ seems weird to me, in particular the fact that it takes $o_t$ as input (through $y_{1:t}$) => this seems like a typo (?), and otherwise I don't really understand how this defines a proper distribution over states
* As at least one reviewer pointed out, the authors start from the mutual information $I$ but drop the entropy term $H(a_t | R_{t+1})$ by claiming it doesn't matter since the goal is to learn the ASR. However, in that objective the distribution over actions $p_{\alpha}$ seems to be learnable (through the $\alpha$ parameters), so if we try to minimize the mutual information $I$ including over $\alpha$, it would have been important to retain the entropy over actions as well.
In terms of the relevance of the results, they look pretty good, but:
* The proposed algorithm ends up being somewhat complex, with a lot of terms in the loss (eq. 4), and a lack of empirical validation of what actually matters. I see a single ablation study in the Appendix (Fig. 10), and possibly also the comparison to VRL (but it isn't entirely clear to me what this baseline is implementing as it lacks details). I would have appreciated a more thorough empirical analysis of how each term in eq. 4 matters.
* CarRacing experiments consistently use 21 dimensions "for a fair comparison", but this dimensionality was chosen specifically for and by ASR. As a result, it doesn't really look "fair" to me: a fairer comparison would have either selected the optimal dimensionality for each method, or shown results across a range of different dimensionalities.
I also have some concerns regarding the applicability of the algorithm:
1. Relying on random actions to build a world model only works if random actions allow sufficient exploration of the state space. There are many situations where this isn't a realistic assumption (also alluded to by at least one reviewer).
2. Minor: in the setup of eq. 1 the reward $r_t$ doesn't directly depend on $s_t$.
I'm not sure to what extent this matters for the proposed algorithm, but if this is a necessary condition for it to work properly, it may cause issues in many stochastic environments. As a result, I am recommending rejection, as I believe the paper is not quite ready for publication. I would encourage the authors to try and simplify the presentation (the paper is very notation-heavy and not an easy read), focusing on showing convincing theoretical and empirical justification for all components of the proposed technique.
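The entropy-term concern above can be made explicit with the standard decomposition of mutual information; this is a generic identity supplied for illustration, with conditioning on the history left implicit rather than copied from the paper:

```latex
% I(a_t; R_{t+1}) splits into a marginal and a conditional entropy term:
\begin{align*}
I(a_t;\, R_{t+1}) \;=\; H(a_t) \;-\; H(a_t \mid R_{t+1}).
\end{align*}
```

If the action distribution $p_{\alpha}$ is learnable, both terms on the right depend on $\alpha$, so dropping $H(a_t \mid R_{t+1})$ changes the optimum of the objective rather than merely shifting it by a constant; that is the crux of the concern.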
train
[ "gXV7KJoBWmL", "q7Rpp0wK3Mp", "jCkYOod6HX-", "puK2CDL4eib" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes to find action-sufficient representation which might be helpful for downstream tasks.\nThe idea is that states and/or observations often include irrelevant for actions information which can be removed by \nextracting only reverent information. The method utilizes the concepts of Identifiability...
[ 5, 5, 5, 8 ]
[ 5, 2, 3, 4 ]
[ "iclr_2022_yK_jcv_aLX", "iclr_2022_yK_jcv_aLX", "iclr_2022_yK_jcv_aLX", "iclr_2022_yK_jcv_aLX" ]
iclr_2022_hUr6K4D9f7P
Adversarial Weight Perturbation Improves Generalization in Graph Neural Networks
There is growing theoretical and empirical evidence that flatter local minima tend to improve generalization. An efficient and effective technique for finding such minima is Adversarial Weight Perturbation (AWP). The main idea is to minimize the loss w.r.t. a bounded worst-case perturbation of the model parameters by (approximately) solving an associated min-max problem. Intuitively, we favor local minima with a small loss in a neighborhood around them. The benefits of AWP, and more generally the connections between flatness and generalization, have been extensively studied for i.i.d. data such as images. In this paper we initiate the first study of this phenomenon for graph data. Along the way, we identify a vanishing-gradient issue with all existing formulations of AWP and we propose Weighted Truncated AWP (WT-AWP) to alleviate this issue. We show that regularizing graph neural networks with WT-AWP consistently improves both natural and robust generalization across many different graph learning tasks and models.
Reject
This paper considers a variant of adversarial weight perturbations / sharpness-aware minimization for graph (convolutional) neural networks for node and graph classification. In particular, they make two adjustments: “truncating”, i.e., limiting the weight perturbation to specific layers, and weighting the sharpness-aware loss with the regular loss during training. The reviewers found the theoretical justifications (the characterization of vanishing gradients and the understanding of the non-iid setting, which was added during the rebuttal) interesting, but several reviewers also found the solution/empirical results not convincing enough. I recommend that the authors either shift the focus to the theoretical results or strengthen the empirical results (and their connections with theory) following the comments of the reviewers.
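For readers unfamiliar with the two adjustments, the sketch below shows one training step in PyTorch style. It assumes a SAM-like single ascent step for the inner maximization; the layer prefix, `rho`, and `lam` are placeholder names, and this is our reading of the described method rather than the authors' code.

```python
import torch

def wt_awp_step(model, loss_fn, batch, opt, rho=5e-3, lam=0.5, layers=("conv1",)):
    """One weighted, truncated AWP step (hedged sketch, not the paper's code)."""
    x, y = batch
    # "Truncation": only parameters under the named layers get perturbed.
    params = [p for n, p in model.named_parameters()
              if any(n.startswith(l) for l in layers)]
    clean = loss_fn(model(x), y)
    opt.zero_grad()
    ((1.0 - lam) * clean).backward()                      # clean-loss gradient
    grads = torch.autograd.grad(loss_fn(model(x), y), params)
    norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12
    with torch.no_grad():
        for p, g in zip(params, grads):
            p.add_(rho * g / norm)                        # worst-case perturbation
    (lam * loss_fn(model(x), y)).backward()               # sharpness-aware gradient
    with torch.no_grad():
        for p, g in zip(params, grads):
            p.sub_(rho * g / norm)                        # restore weights
    opt.step()
    return clean.item()
```

The `lam` weight mixes the regular and sharpness-aware losses, and restricting `params` to a prefix of the network implements the truncation that the meta-review describes.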
train
[ "RBI8ChwkKd", "fsbtFyxX0XW", "G_WD7UDzyGX", "acLbiKVRWen", "2bFG_s_3w_J", "G7CNVPts7UL", "tTZrF60EJCi", "Kz1jD35UdDd", "gkHybJcv4NX", "RI2I8L-IUr", "RWgiUhPcUSR", "-YzVlROXSg_" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thanks the reviewers for answers and comments on the raised points. All my questions were addressed appropriately.\n\nI decided to keep my score at 6.", " Thanks for your updates \n\n*1. Our contributions*\n\nWe respectively disagree that \"the **only** difference between this work and previous work using AWP...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 3, 4 ]
[ "G_WD7UDzyGX", "2bFG_s_3w_J", "RI2I8L-IUr", "gkHybJcv4NX", "tTZrF60EJCi", "iclr_2022_hUr6K4D9f7P", "-YzVlROXSg_", "RWgiUhPcUSR", "iclr_2022_hUr6K4D9f7P", "iclr_2022_hUr6K4D9f7P", "iclr_2022_hUr6K4D9f7P", "iclr_2022_hUr6K4D9f7P" ]
iclr_2022_dKLoUvtnq0C
Semi-supervised learning of partial differential operators and dynamical flows
The evolution of dynamical systems is generically governed by nonlinear partial differential equations (PDEs), whose solution, in a simulation framework, requires vast amounts of computational resources. For a growing number of specific cases, neural network-based solvers have been shown to provide comparable results to other numerical methods while utilizing fewer resources. In this work, we present a novel method that combines a hyper-network solver with a Fourier Neural Operator architecture. Our method treats time and space separately. As a result, it successfully propagates initial conditions in discrete time steps by employing the general composition properties of the partial differential operators. Following previous work, supervision is provided at a specific time point. We test our method on various time evolution PDEs, including nonlinear fluid flows in one, two, and three spatial dimensions. The results show that the new method improves the learning accuracy at the supervision time point and is also able to interpolate and extrapolate the solutions to arbitrary times.
Reject
This paper presents an approach to learn the solution operator of Markovian partial differential equations (PDEs) by combining the Fourier Neural Operator (FNO) with a hyper-network. In short, the hyper-network g_\theta(t) is trained to output the weights of a FNO f_w(x), which (given an initial condition) outputs the PDE solution at the time given to the hyper-network. The main claimed contributions of the proposed approach (as compared to, e.g., the original FNO architecture) are that the obtained solutions improve the learning accuracy at the supervision time points and that the solutions are able to interpolate and extrapolate to arbitrary times. The reviewers seemed to like the idea of using hyper-networks for modelling continuous-time FNOs. Several issues were raised by the reviewers, e.g., with respect to related work by Li et al. (2021, https://arxiv.org/abs/2106.06898), which I believe were addressed by the authors satisfactorily. However, it is still concerning that the reported performance for FNO on, e.g., 1d-Burgers does not quite match that recently published (Kovachki et al., 2021, https://arxiv.org/abs/2108.08481, Table 2). Although the authors did report additional results using GeLU, these results are still very different from those in Kovachki et al. (2021), and it is unclear whether the improved performance is due to a lack of tuning of the baseline FNO. I commend the authors for, as suggested by the reviewers, running more extrapolation experiments. However, I believe the reviewers also made a point about considering (training) times much longer than 1, as even the original FNO paper did this for Navier Stokes (NS) with T=50. Overall, the paper provides a modest improvement w.r.t. FNOs, although it does extend their capabilities to interpolation and extrapolation. The paper would also benefit from providing a brief overview of FNOs.
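As a rough illustration of the hyper-network construction described above, a plain linear layer stands in for the FNO target network f_w in the sketch below; all sizes and the hyper-network architecture are assumptions made for illustration, not the paper's design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperSolver(nn.Module):
    """g_theta(t) emits the weights of a small target network f_w, so a
    single trained model gives a solution operator at any continuous t.
    (Hedged sketch: the real target would be an FNO, not a linear map.)"""
    def __init__(self, dim=64):
        super().__init__()
        self.dim = dim
        n_w = dim * dim + dim                      # weight matrix + bias
        self.g = nn.Sequential(nn.Linear(1, 128), nn.GELU(), nn.Linear(128, n_w))

    def forward(self, u0, t):
        w = self.g(t.view(1, 1)).squeeze(0)        # weights for time t
        W, b = w[:-self.dim].view(self.dim, self.dim), w[-self.dim:]
        return F.linear(u0, W, b)                  # f_{w(t)}(u0)

# Example: u_half = HyperSolver()(torch.randn(8, 64), torch.tensor(0.5))
```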
train
[ "-3vr_u_dnkO", "KpQxidZwo-F", "NpoQ9L3ff5O", "OJobtkxd9HA", "t1yJ64kwPO", "VOmzs8UdN1", "LYIA3Ntgf12", "fp89h3Kl07", "BumtsMCpoNV", "_6_xEATrXue", "WwuwEHfbUuR", "iG7K6Rs4E0Y" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer" ]
[ "In this paper, the authors propose to use hyper-network to learn a continuous time-dependent evolution operator based on Fourier neural operator. The paper is quite interesting to me. It provides a continuous time representation of FNO using hyper-networks.\n\nComments:\n1. In the original FNO paper, there are two...
[ 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2022_dKLoUvtnq0C", "OJobtkxd9HA", "_6_xEATrXue", "t1yJ64kwPO", "-3vr_u_dnkO", "LYIA3Ntgf12", "iG7K6Rs4E0Y", "BumtsMCpoNV", "WwuwEHfbUuR", "WwuwEHfbUuR", "iclr_2022_dKLoUvtnq0C", "iclr_2022_dKLoUvtnq0C" ]
iclr_2022_yuv0mwPOlz3
Active Learning over Multiple Domains in Natural Language Tasks
Studies of active learning traditionally assume the target and source data stem from a single domain. However, in realistic applications, practitioners often require active learning with multiple sources of out-of-distribution data, where it is unclear a priori which data sources will help or hurt the target domain. We survey a wide variety of techniques in active learning (AL), domain shift detection (DS), and multi-domain sampling to examine this challenging setting for question answering and sentiment analysis. We ask (1) what family of methods are effective for this task? And, (2) what properties of selected examples and domains achieve strong results? Among 18 acquisition functions from 4 families of methods, we find H-Divergence methods, and particularly our proposed variant DAL-E, yield effective results, averaging 2-3% improvements over the random baseline. We also show the importance of a diverse allocation of domains, as well as the room for improvement of existing methods on both domain and example selection. Our findings yield the first comprehensive analysis of both existing and novel methods for practitioners faced with multi-domain active learning for natural language tasks.
Reject
This paper considers the problem of active learning (AL) with data drawn from multiple domains. This framing motivates integrating work on domain shift detection and adaptation into standard AL approaches. The reviewers agreed that the work reports a robust set of experiments, which is a clear strength. However, they also raised key concerns, namely: (i) The heterogeneous setting considered is not particularly well motivated; (ii) The technical contributions of this work are limited. The latter would not be a major issue if the empirical evaluation addressed a clear open question (since this would constitute a useful contribution in and of itself), but the empirical contribution is somewhat limited given the unique setting considered and the relevant prior work (some of which seems to have been overlooked by the authors).
train
[ "ZD_cY7lmiZy", "ILdbDr9rQX-", "pFEzXRKtru", "FuJaD4ZRhWm", "gum2w4pPF5q", "60quAWoFWgy", "SZ4Lt_p0xKi", "f2vxgV_0Oy", "vduomrBKjp", "ejEBAcrezv", "5vj6M5jcn0T", "3BFF41AI5jc", "p2c4AQ7fNVP", "m1H1fmwFja2" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response. I appreciate the clarifications.", " We thank the reviewer one last time for the very relevant citation, as well as their positive feedback.\n\nWe have updated the paper accordingly. There are also several other significant additions (main paper and appendix) to address all reviewer'...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 3, 2 ]
[ "ILdbDr9rQX-", "vduomrBKjp", "f2vxgV_0Oy", "gum2w4pPF5q", "60quAWoFWgy", "SZ4Lt_p0xKi", "m1H1fmwFja2", "p2c4AQ7fNVP", "3BFF41AI5jc", "5vj6M5jcn0T", "iclr_2022_yuv0mwPOlz3", "iclr_2022_yuv0mwPOlz3", "iclr_2022_yuv0mwPOlz3", "iclr_2022_yuv0mwPOlz3" ]
iclr_2022_-geBFMKGlkq
Density-based Clustering with Kernel Diffusion
Finding a suitable density function is essential for density-based clustering algorithms such as DBSCAN and DPC. A naive density corresponding to the indicator function of a unit $d$-dimensional Euclidean ball is commonly used in these algorithms. Such a density falls short of capturing local features in complex datasets. To tackle this issue, we propose a new kernel diffusion density function, which is adaptive to data of varying local distributional characteristics and smoothness. Furthermore, we develop a surrogate that can be efficiently computed in linear time and space and prove that it is asymptotically equivalent to the kernel diffusion density function. Extensive empirical experiments on benchmark and large-scale face image datasets show that the proposed approach not only achieves a significant improvement over classic density-based clustering algorithms but also outperforms the state-of-the-art face clustering methods by a large margin.
Reject
This paper proposes a kernel diffusion method to improve upon density-based clustering methods. The reviewers found the empirical results quite promising, and there is consensus that there are some good ideas in this work. However, their criticisms are strikingly consistent: the technical details are lacking and some of the claims are not fully supported, and these criticisms were not found to be fully addressed in the author responses. I agree with the assessment that this work is promising and, after a major revision, could make a strong future submission, but it is currently not complete, especially in the theoretical and technical details.
train
[ "-hZF1thsrif", "cRQi0LaPMOg", "8v6rRLSnnpa", "HUojLsMH9RJ", "Mf4OFO66ER1", "1eAay9rKfza", "nARO9Vy1XIv", "PoilDoOMX-m", "pLYIWhXPyby", "UfY8nsp3AG", "XStdNEfFNDA", "D5YK4LDJVep", "6f1qLEz5Pgx", "cZdYqtLhOXt", "jqVDo6JRfDM", "722_TQXHAA2", "kofXn8G-54U", "_nnZ_ZO3wLZ", "M2Bgk9Vrzn...
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "...
[ "The authors propose a new density function for density clustering models such as DBSCAN and Density Peaks Clustering (DPC). Given a kernel $K$ and the corresponding normalized random walk transition matrix $P = diag(K\\mathbf{1})^{-1}K$, the authors propose to use the density function which corresponds to the stat...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 5 ]
[ "iclr_2022_-geBFMKGlkq", "722_TQXHAA2", "D5YK4LDJVep", "UfY8nsp3AG", "XStdNEfFNDA", "PiaGw1aj-MM", "PoilDoOMX-m", "pLYIWhXPyby", "cZdYqtLhOXt", "OP3sO7amzPQ", "QS-hgKn1ooV", "6f1qLEz5Pgx", "jqVDo6JRfDM", "M2Bgk9Vrzn6", "fKNdh_hSOW", "QS-hgKn1ooV", "wOpXzF8UOo", "378A3Ib4MRe", "8q...
iclr_2022_ptxGmKMLH_
A Closer Look at Prototype Classifier for Few-shot Image Classification
The prototypical network is a prototype classifier based on meta-learning and is widely used for few-shot learning because it classifies unseen examples by constructing class-specific prototypes without adjusting hyper-parameters during meta-testing. Interestingly, recent research has attracted a lot of attention by showing that a linear classifier with fine-tuning, which does not use a meta-learning algorithm, performs comparably with the prototypical network. However, fine-tuning requires additional hyper-parameters when adapting a model to a new environment. In addition, although the purpose of few-shot learning is to enable the model to quickly adapt to a new environment, fine-tuning needs to be applied every time a new class appears, making fast adaptation difficult. In this paper, we analyze how a prototype classifier can work equally well without fine-tuning and meta-learning. We experimentally found that directly using the feature vectors extracted by standard pre-trained models to construct a prototype classifier in meta-testing does not perform as well as the prototypical network or linear classifiers fine-tuned on the feature vectors of pre-trained models. Thus, we derive a novel generalization bound for the prototypical network and show that focusing on the variance of the norm of a feature vector can improve performance. We experimentally investigated several normalization methods for minimizing the variance of the norm and found that the same performance can be obtained by using L2 normalization and embedding space transformation without fine-tuning or meta-learning.
Reject
This paper explores prototype vs linear classifiers for few-shot learning. It has been found that pre-training a classifier network, followed by training a linear head can produce competitive results to meta-trained prototypical networks. A natural question therefore is whether one can directly derive prototypical classifiers from pre-trained classifiers. Naively applying this idea doesn’t work well in practice though, and this paper performs a theoretical investigation to determine why. The theory is a generalization of Cao et al., 2020, that doesn’t require assumptions on the class-conditional distributions. The theory suggests that the variance of the norm of the feature vectors plays a role, so the paper explores a few feature transformations to reduce this. It demonstrates on a few benchmark datasets that transforming the feature vectors can indeed allow us to create prototypical classifiers from pre-trained networks. As a minor quibble, the paper twice refers to “direction of the norm of class mean vectors” - This should just be the direction of the class mean vectors, right? Norm is not a direction, it’s a magnitude. During the discussion phase, a number of questions arose, mainly around the clarity of the presentation and a request for a few additional baselines (e.g., L2 normalization combined with LDA/V-N). These points were resolved by the authors. The main outstanding issue is whether there is enough novelty/significance in the paper to merit acceptance, and on that point, the reviewers felt this is borderline. On the one hand, the theory is more general and does directly point to aspects of the feature space that can yield better generalization results. On the other hand, tricks like L2 normalization are already known, and the utility of prototype classifiers over linear classifiers like SVMs is unclear. After careful consideration, further discussion with the reviewers, as well as the program committee, it was generally agreed that this paper does not quite meet the bar in terms of the novelty or significance of its contribution. The authors mentioned time and space benefits relative to fine-tuned classifiers in their response, and I think one way to improve the paper would be to demonstrate this advantage in a real-world application or challenging scenario.
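A minimal sketch of the recipe the meta-review describes (L2-normalize pre-trained features, then classify by the nearest class-mean prototype); the function below is hypothetical and omits the paper's embedding-space transformation.

```python
import numpy as np

def prototype_predict(support_x, support_y, query_x):
    """Nearest-prototype classification on L2-normalized features.
    No fine-tuning or meta-learning is involved (hedged sketch)."""
    def l2(v):
        return v / (np.linalg.norm(v, axis=-1, keepdims=True) + 1e-12)
    s, q = l2(support_x), l2(query_x)
    classes = np.unique(support_y)
    protos = np.stack([s[support_y == c].mean(0) for c in classes])
    dists = ((q[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return classes[dists.argmin(1)]
```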
train
[ "AQytOuFVVrT", "B5AI4aXStA", "6QmaJPbz6da", "Hb54ABzR1SD", "AVzl_6md4v", "5ZqDnEqh0_", "GnfK8aOwFM1", "XJ0usZOzkm", "neF3WkSgX4z", "gwyxr9I2Gt", "VG3l2_uKwSe", "1dBcJwpIAR", "xho5UKdcF82", "yGPVK-EL8eY", "zfUs8x8Naud", "oLRqembxyrg", "cMsLjaM22F" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ " Thank you for your feedback. When evaluating the impact potential I see for the submission, I'm still hesitant. In some ways, work like Dvornik et al.'s SUR and Chen et al.'s New Meta-Baseline already propose solutions to the problem of training good nearest-centroid classifiers. The former does so using a cosine...
[ -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, 6 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, 4 ]
[ "B5AI4aXStA", "neF3WkSgX4z", "AVzl_6md4v", "iclr_2022_ptxGmKMLH_", "GnfK8aOwFM1", "GnfK8aOwFM1", "VG3l2_uKwSe", "yGPVK-EL8eY", "xho5UKdcF82", "xho5UKdcF82", "Hb54ABzR1SD", "iclr_2022_ptxGmKMLH_", "zfUs8x8Naud", "cMsLjaM22F", "1dBcJwpIAR", "1dBcJwpIAR", "iclr_2022_ptxGmKMLH_" ]
iclr_2022_Ti2i204vZON
Learning Representations for Pixel-based Control: What Matters and Why?
Learning representations for pixel-based control has garnered significant attention recently in reinforcement learning. A wide range of methods have been proposed to enable efficient learning, leading to sample complexities similar to those in the full state setting. However, moving beyond carefully curated pixel data sets (centered crop, appropriate lighting, clear background, etc.) remains challenging. In this paper, we adopt a more difficult setting, incorporating background distractors, as a first step towards addressing this challenge. We present a simple baseline approach that can learn meaningful representations with no metric-based learning, no data augmentations, no world-model learning, and no contrastive learning. We then analyze when and why previously proposed methods are likely to fail or reduce to the same performance as the baseline in this harder setting and why we should think carefully about extending such methods beyond the well-curated environments. Our results show that finer categorization of benchmarks on the basis of characteristics like the density of reward, planning horizon of the problem, presence of task-irrelevant components, etc., is crucial in evaluating algorithms. Based on these observations, we propose different metrics to consider when evaluating an algorithm on benchmark tasks. We hope such a data-centric view can motivate researchers to rethink representation learning when investigating how to best apply RL to real-world tasks.
Reject
Meta Review for Learning Representations for Pixel-based Control: What Matters and Why? In this work, the authors presented large-scale empirical evaluations and ablation studies to analyze various components (e.g. contrastive objectives, model-based approaches, data augmentation) for pixel-based control with distractors. Reviewer 7euW wrote a great summary for this paper: This paper presents an approach for learning representations from pixel data that are amenable to control tasks. The proposed approach is a simple baseline that does not require data augmentation, world models, contrastive losses, etc., but only contains two simple sub-tasks that are supposed to contribute heavily towards an effective representation: reward prediction and state transition prediction. Along with evaluating this proposed baseline, the paper also compares it to several prior works on representation learning: i.e., several approaches such as data augmentation, distance metric losses, contrastive losses, relevant reconstruction, etc. It is shown that the proposed simple baseline either outperforms several of these methods or at least comes very close in performance. Finally, the paper presents an interesting discussion about how evaluating an algorithm is not just about the dataset and the chosen benchmark task, but requires a more nuanced point of view of several factors such as reward sparsity, action continuity/discreteness, relevance and irrelevance of features to the task, and so on. The findings of the paper are not just about the effectiveness of the proposed method, but offer a more overarching view of which types of representation learning methods work in what conditions. Along with myself, most reviewers (including the critical 61FY) agree that there is great value in the large-scale studies presented in this paper. Furthermore, I personally like how it links a large body of recent work on this topic together in one study. The reviews were mixed (6, 6, 3, 3), and the negative reviews (the 3's) generally had issues not with the study or experiments, but with the conclusions the authors drew from them. In the words of 61FY (who managed to have a good discussion with the authors): *I'm not convinced by conclusions as the authors try to generalize behavior of specific implementation to a family of methods. If I were to implement a new agent, I don't feel like I can believe these conclusions so that makes me question what knowledge this paper can add to the community. Furthermore, many details are either missing or not made clear, and the main story isn't very strong. Therefore, I don't think this paper is ready for publication in the current status.* Although I really appreciate the effort and detail that went into this nice work, based on the current assessments from the 4 reviewers, I can't recommend it for acceptance in its current state. I feel, though, that with a change of narrative, or even with a re-examination of the experimental results, the authors could turn the paper around into a highly impactful paper. The description of all of the methods explored and experiments performed alone makes a wonderful survey of the field with sufficient impact, so I think the authors are *almost* there in publishing a highly impactful work that can make the community look deeper into pixel-based control methods (with distractors). I hope to read an updated version of this work in the future, published at a journal or presented at a future conference. Good luck!
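For concreteness, the two auxiliary objectives attributed to the baseline (reward prediction and latent transition prediction) can be sketched as below; the encoder, layer sizes, and the stop-gradient on the transition target are our assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SimpleRepBaseline(nn.Module):
    """Hedged sketch of a representation learned from two heads only:
    predict the reward and the next latent state from (state, action)."""
    def __init__(self, z=64, a=4):
        super().__init__()
        self.enc = nn.Sequential(nn.Flatten(), nn.LazyLinear(z))
        self.rew = nn.Linear(z + a, 1)
        self.dyn = nn.Linear(z + a, z)

    def loss(self, obs, act, next_obs, reward):
        s = self.enc(obs)
        s_next = self.enc(next_obs).detach()       # assumed stop-gradient target
        sa = torch.cat([s, act], dim=-1)
        return (nn.functional.mse_loss(self.rew(sa).squeeze(-1), reward)
                + nn.functional.mse_loss(self.dyn(sa), s_next))
```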
train
[ "RF-sNYIRKC", "NCfbS75MNUM", "QU8E6gbWDHj", "P-rhxbJ329z", "tp55_WgMma", "mPpwYGYEkJ", "gylK1xmLGIb", "_U4Lo9ZsrZ", "GWZjf1JiUCD", "QvKS2Gcp-KT", "67q-9Ey6LCC", "fjSVLTtT56C", "-fi8m07gAFy", "1WQmlOh_d9-", "KHLSgGFw2EX" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you so much for your reply! We are glad that your concern on the system-wise comparisons has been resolved.\n\n**Main Story**: We strongly think that the main story is informative, while the obvious part might be subjective. There are certain findings that indeed are surprising. As pointed by us to the repl...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 3 ]
[ "NCfbS75MNUM", "tp55_WgMma", "-fi8m07gAFy", "67q-9Ey6LCC", "mPpwYGYEkJ", "QvKS2Gcp-KT", "iclr_2022_Ti2i204vZON", "GWZjf1JiUCD", "KHLSgGFw2EX", "fjSVLTtT56C", "1WQmlOh_d9-", "iclr_2022_Ti2i204vZON", "iclr_2022_Ti2i204vZON", "iclr_2022_Ti2i204vZON", "iclr_2022_Ti2i204vZON" ]
iclr_2022_1T5FmILBsq2
SGORNN: Combining Scalar Gates and Orthogonal Constraints in Recurrent Networks
Recurrent Neural Network (RNN) models have been applied in different domains, producing high accuracies on time-dependent data. However, RNNs have long suffered from exploding gradients during training, mainly due to their recurrent process. In this context, we propose a variant of the scalar gated FastRNN architecture, called Scalar Gated Orthogonal Recurrent Neural Networks (SGORNN). SGORNN utilizes orthogonal linear transformations at the recurrent step. In our experiments, SGORNN forms its recurrent weights through a strategy inspired by Volume Preserving RNNs (VPRNN), though our architecture allows the use of any orthogonal constraint mechanism. We present a simple constraint on the scalar gates of SGORNN, which is easily enforced at training time to provide a theoretical generalization ability for SGORNN similar to that of FastRNN. Our constraint is further motivated by success in experimental settings. Next, we provide bounds on the gradients of SGORNN, to show the impossibility of (exponentially) exploding gradients. Our experimental results on the addition problem confirm that our combination of orthogonal and scalar gated RNNs are able to outperform both predecessor models on long sequences using only a single RNN cell. We further evaluate SGORNN on the HAR-2 classification task, where it improves slightly upon the accuracy of both FastRNN and VPRNN using far fewer parameters than FastRNN. Finally, we evaluate SGORNN on the Penn Treebank word-level language modelling task, where it again outperforms its predecessor architectures. Overall, this architecture shows higher representation capacity than VPRNN, suffers from less overfitting than the other two models in our experiments, benefits from a decrease in parameter count, and alleviates exploding gradients when compared with FastRNN on the addition problem.
Reject
This paper considers the exploding gradient problem in RNNs. The proposed network, SGORNN, can be seen as an extension of the FastRNN model that adds orthogonal weight matrices. I recommend rejection for this paper mainly for two reasons. First, as mentioned in the reviews of Reviewer 815o and Reviewer W7nS, adding orthogonal constraints to FastRNN should not be considered a significant technical contribution. Second, and more importantly, the experiments in the paper are not that convincing. All reviewers raise concerns about this issue. I also do not see the point of comparing the proposed model with a baseline LSTM model of much larger parameter size; I can't think of a reason to do so. Also, I think the small datasets will not give a lot of meaningful insight when comparing the models – PTB, for example, is a rather small dataset for language modeling, and the results presented there are far from good. The numbers look really bad, reflecting the quality of how these experiments were done ( https://arxiv.org/pdf/1707.05589.pdf ).
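Our reading of an SGORNN-style recurrent step, combining FastRNN-like scalar gates with an orthogonal transition matrix, is sketched below; the exact gating, constraint mechanism, and parameterization in the paper may differ.

```python
import numpy as np
from scipy.linalg import expm

def make_orthogonal(A):
    """expm of a skew-symmetric matrix is exactly orthogonal."""
    return expm(A - A.T)

def sgornn_step(h, x, W, Q, alpha=0.9, beta=0.1):
    # Scalar gates blend the orthogonal recurrent update with the previous
    # state; Q = make_orthogonal(A) keeps the recurrent Jacobian's spectral
    # norm at 1, which is what rules out exponentially exploding gradients.
    return alpha * np.tanh(W @ x + Q @ h) + beta * h
```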
train
[ "xstT0NPWB4g", "FaoLsyDFeX1", "vPGcbi5QJFB", "jaxHbSpjO_g", "ZJjB8E3pC2G", "MdTJ-GFk_Fz", "zTy8uu-8blp", "hbB_3m3iIhl", "zME7iL4EzsJ", "uPXZ7Epgg0H", "m8PVLyPwxE" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I appreciate the detailed response from the authors. However, I am still not convinced that the level of novelty meets the acceptance threshold. Therefore, I would like to keep my score.", "Authors demonstrate a new RNN-based architecture that avoids exploding gradients via careful selection of primitives and h...
[ -1, 6, -1, -1, -1, -1, -1, -1, 3, 3, 5 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "hbB_3m3iIhl", "iclr_2022_1T5FmILBsq2", "MdTJ-GFk_Fz", "ZJjB8E3pC2G", "m8PVLyPwxE", "uPXZ7Epgg0H", "FaoLsyDFeX1", "zME7iL4EzsJ", "iclr_2022_1T5FmILBsq2", "iclr_2022_1T5FmILBsq2", "iclr_2022_1T5FmILBsq2" ]
iclr_2022_Qu_XudmGajz
Structured Uncertainty in the Observation Space of Variational Autoencoders
Variational autoencoders (VAEs) are a popular class of deep generative models with many variants and a wide range of applications. Improvements upon the standard VAE mostly focus on the modelling of the posterior distribution over the latent space and the properties of the neural network decoder. In contrast, improving the model for the observational distribution is rarely considered and typically defaults to a pixel-wise independent categorical or normal distribution. In image synthesis, sampling from such distributions produces spatially-incoherent results with uncorrelated pixel noise, resulting in only the sample mean being somewhat useful as an output prediction. In this paper, we aim to stay true to VAE theory by improving the samples from the observational distribution. We propose an alternative model for the observation space, encoding spatial dependencies via a low-rank parameterization. We demonstrate that this new observational distribution has the ability to capture relevant covariance between pixels, resulting in spatially-coherent samples. In contrast to pixel-wise independent distributions, our samples seem to contain semantically meaningful variations from the mean allowing the prediction of multiple plausible outputs with a single forward pass.
Reject
In order to evaluate the evidence lower bound (ELBO), VAEs typically use a parametric distribution-based decoder $p(x|z)$. If the data is continuous, one often considers a Gaussian VAE, where the canonical setting is to assume a diagonal covariance matrix $p(x|z) = N(x; \mu(z), \sigma^2 \mathbf{I})$. In this paper, the authors suggest replacing the diagonal covariance matrix with a structured covariance matrix (low-rank + diagonal). As this only amounts to a minor change to a canonical Gaussian VAE, strong empirical results are expected to justify its acceptance. However, the image generation results presented in the paper are not comparable to the state-of-the-art VAE results (e.g., Arash Vahdat, and Jan Kautz. "NVAE: A Deep Hierarchical Variational Autoencoder." Neural Information Processing Systems (NeurIPS), 2020).
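The structured observation model is easy to state concretely: the decoder outputs a mean, a low-rank factor, and a diagonal term, and sampling never materializes the full covariance. A hedged NumPy sketch (shapes and names are ours, not the paper's):

```python
import numpy as np

def sample_lowrank_gaussian(mu, U, d, n=1, rng=np.random):
    """Draw x ~ N(mu, U U^T + diag(d)) in O(p * r) per sample.

    mu: (p,) mean; U: (p, r) low-rank factor; d: (p,) positive diagonal."""
    e1 = rng.standard_normal((n, U.shape[1]))
    e2 = rng.standard_normal((n, mu.shape[0]))
    return mu + e1 @ U.T + np.sqrt(d) * e2         # correlated + pixel-wise noise
```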
test
[ "XW5XCEEyZyM", "UMy_U9P493R", "Sm8K1wFRzi", "ZqtnVdzy3Eg", "ktagnSpAYMc", "w_ksSzjLXAZ", "j9Dw7644Sfm", "gETWq1_zYS", "FrPpjkArUz", "vu2plr2fbB", "3ee6BuZ4R6H", "hWDB_cBAADh", "kTetnkfPMOP", "B5NOFD2Xtdw", "k6xyJgxhRwg", "kg9ToyatH5k", "6vGJ_pQcgah", "NunukOho74", "Uolbam2Ryfw", ...
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "officia...
[ "This paper proposes to use a low-rank plus diagonal covariance matrix, rather than the usual diagonal ones, in the decoder of Gaussian VAEs. The authors empirically show the advantages of using more expressive covariance matrices in the decoder. This paper is well-written, easy to follow, and well motivated. I agr...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 3 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2022_Qu_XudmGajz", "vu2plr2fbB", "3ee6BuZ4R6H", "FrPpjkArUz", "kTetnkfPMOP", "iclr_2022_Qu_XudmGajz", "FrPpjkArUz", "iclr_2022_Qu_XudmGajz", "Uolbam2Ryfw", "B5NOFD2Xtdw", "NunukOho74", "Uolbam2Ryfw", "MJAJ-Xrj2IK", "kg9ToyatH5k", "iclr_2022_Qu_XudmGajz", "6vGJ_pQcgah", "XW5XCEE...
iclr_2022_aYSlxlHKEA
Fully Decentralized Model-based Policy Optimization with Networked Agents
Model-based RL is an effective approach for reducing sample complexity. However, when it comes to the multi-agent setting, where the number of agents is large, model estimation can be problematic due to the exponentially increased number of interactions. In this paper, we propose a decentralized model-based reinforcement learning algorithm for networked multi-agent systems, where agents are cooperative and communicate locally with their neighbors. We analyze our algorithm theoretically, derive an upper bound on the performance discrepancy caused by model usage, and provide a sufficient condition for monotonic policy improvement. In our experiments, we compare our algorithm against other strong multi-agent baselines and demonstrate that it not only matches the asymptotic performance of model-free methods but also achieves much better sample efficiency.
Reject
As pointed out by reviewers, the presentation needs to be improved to clarify the algorithmic and theoretical contributions.
train
[ "9JsjKVTg0d", "FsRqaBLlqQ9", "OSt91g_kU_x", "TRvPRB4I5N", "f_BGV7vcKTG", "WuRpFsRe66", "-PkAIWhwol", "bl9xdFgWweN" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a decentralized model-based reinforcement learning algorithm for networked multi-agent systems where cooperative agents communicate locally with their neighbors. Q1. The current study assumes that the target system is an independent networked system or \\ksi-dependent system. First of all, the...
[ 5, -1, -1, -1, -1, -1, 5, 5 ]
[ 3, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2022_aYSlxlHKEA", "OSt91g_kU_x", "TRvPRB4I5N", "9JsjKVTg0d", "bl9xdFgWweN", "-PkAIWhwol", "iclr_2022_aYSlxlHKEA", "iclr_2022_aYSlxlHKEA" ]
iclr_2022_psNSQsmd4JI
Containerized Distributed Value-Based Multi-Agent Reinforcement Learning
Multi-agent reinforcement learning tasks put a high demand on the volume of training samples. Different from its single-agent counterpart, distributed value-based multi-agent reinforcement learning faces the unique challenges of demanding data transfer, inter-process communication management, and a high requirement for exploration. We propose a containerized learning framework to solve these problems. We pack into a container several environment instances, a local learner and buffer, and a carefully designed multi-queue manager that avoids blocking. Local policies of each container are encouraged to be as diverse as possible, and only trajectories with the highest priority are sent to a global learner. In this way, we achieve a scalable, time-efficient, and diverse distributed MARL learning framework with high system throughput. To our knowledge, our method is the first to solve the challenging Google Research Football full game $\mathtt{5\_v\_5}$. On the StarCraft II micromanagement benchmark, our method gets 4-18$\times$ better results compared to state-of-the-art non-distributed MARL algorithms.
Reject
This paper proposes a distributed containerized multi-agent reinforcement learning (CMARL) framework that addresses three challenges in MARL: 1) demanding data transfer, 2) inter-process communication, and 3) effective exploration. Using a container that collects environment experiences from parallel actors into buffers and learns local policies, CMARL demonstrates notable performance improvements with respect to time as compared to state-of-the-art benchmarks. Although the reviewers acknowledge that the paper addresses a relevant topic, proposes an effective method, and is well written, after reading the authors' feedback and discussing their concerns, the reviewers reached a consensus to reject this paper in its current form. They feel that the contribution is too incremental and that the experimental comparisons are somewhat unfair. I suggest the authors take the reviewers' suggestions into consideration while preparing an updated version of their paper for one of the forthcoming machine learning conferences.
test
[ "uhCmppcy93", "WQNaPXCdvS", "aqXh51Ey7Q", "ngkwIm_a27", "DDhYw_K5fRz", "ZVzJIO791EF", "7WoWRYYPW-t", "KSWittlgfgK", "4BrbPrpuyD7", "9JQGNikZu0W" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper introduces a new distributed value-based multi-agent reinforcement learning framework to solve problems faced by multi-agent tasks. It divides the system into two parts, multiple containers, and one centralizer. Containers are trained with trajectories generated by their own actors interacting with the ...
[ 3, -1, -1, -1, -1, -1, -1, 5, 5, 3 ]
[ 3, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "iclr_2022_psNSQsmd4JI", "7WoWRYYPW-t", "DDhYw_K5fRz", "4BrbPrpuyD7", "9JQGNikZu0W", "uhCmppcy93", "KSWittlgfgK", "iclr_2022_psNSQsmd4JI", "iclr_2022_psNSQsmd4JI", "iclr_2022_psNSQsmd4JI" ]
iclr_2022_F0v5uBM-q5K
Beyond Quantization: Power aware neural networks
Power consumption is a major obstacle in the deployment of deep neural networks (DNNs) on end devices. Existing approaches for reducing power consumption rely on quite general principles, including avoidance of multiplication operations and aggressive quantization of weights and activations. However, these methods do not take into account the precise power consumed by each module in the network, and are therefore far from optimal. In this paper we develop accurate power consumption models for all arithmetic operations in the DNN, under various working conditions. Surprisingly, we reveal several important factors that have been overlooked to date. Based on our analysis, we present PANN (power-aware neural network), a simple approach for approximating any full-precision network by a low-power fixed-precision variant. Our method can be applied to a pre-trained network, and can also be used during training to achieve improved performance. In contrast to previous approaches, our method incurs only a minor degradation in accuracy w.r.t. the full-precision version of the network, even when working at the power budget of a 2-bit quantized variant. In addition, our scheme enables seamless traversal of the power-accuracy tradeoff at deployment time, which is a major advantage over existing quantization methods that are constrained to specific bit widths.
Reject
This paper argues that the existing approaches for reducing power consumption do not model the precise power usage of each module. To remedy this, an approximate power usage model is proposed based on bit flips, and a simple approach called PANN is introduced that relies on tricks such as unsigned arithmetic and the implementation of multiplications with additions. The reviewers have found the overall direction of this paper in modeling power consumption important and have acknowledged the clarity of presentation. However, they have also raised serious concerns regarding (i) the efficacy of modeling power consumption with bit flips while ignoring memory power, (ii) its relevance to modern hardware, and (iii) the efficacy of replacing multipliers with repeated additions. Unfortunately, the paper in its current form does not provide a compelling answer to these concerns. Given these criticisms, we don't believe that the paper is ready for publication at ICLR.
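As a toy illustration of the bit-flip power modeling the reviewers debated (a simplified assumption on our part, not the paper's calibrated model): the dynamic power of an arithmetic unit is often approximated by the number of bit transitions between consecutive operand values on its inputs.

```python
def bitflips(values, width=8):
    """Count bit transitions between consecutive integer operands."""
    mask = (1 << width) - 1
    return sum(bin((a ^ b) & mask).count("1")
               for a, b in zip(values, values[1:]))

# Example: a smoothly varying operand stream toggles fewer bits (and so
# costs less under this proxy) than a noisy stream of the same length.
smooth = list(range(0, 64, 2))
noisy = [(v * 97 + 13) % 256 for v in range(32)]
assert bitflips(smooth) < bitflips(noisy)
```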
train
[ "Q8RLxTll5Re", "-EZdTEGa1ks", "op7hP90Q-1k", "fUu4BPGKKwN", "eb73U5txUM", "mxMZGBwspR1", "DSFFehO1TL", "gVi0qTzHwXj", "ND6_i23qz7T", "EtCH_RDOKV", "CRQodZOm97", "rp0Y3MOf0F1", "9rUO5R4eeCo", "h77-whKPpEW", "-RKrvlVkROu", "GqmpSKxpS0X", "hbgaFAUpqXq", "GBEANEXBjjJ", "l-6uj0jMwI", ...
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", ...
[ " Thanks a lot for your response.\n\nBy “modern” we mean 5nm or even 15nm. We are not aware of any publication that provides concrete numbers which show that memory energy is larger than compute energy in 5nm/15nm technologies. Can you please point us to such a reference?\nThe numbers for NVIDIA’s GPUs, Google’s TP...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 2 ]
[ "-EZdTEGa1ks", "op7hP90Q-1k", "eb73U5txUM", "mxMZGBwspR1", "DSFFehO1TL", "gVi0qTzHwXj", "rp0Y3MOf0F1", "ND6_i23qz7T", "CRQodZOm97", "iclr_2022_F0v5uBM-q5K", "9rUO5R4eeCo", "mxLAt4VXXl3", "hbgaFAUpqXq", "Soqp7nIjRIA", "iclr_2022_F0v5uBM-q5K", "59dFBEprtw", "EtCH_RDOKV", "l-6uj0jMwI"...
iclr_2022_mZsZy481_F
FROB: Few-shot ROBust Model for Classification with Out-of-Distribution Detection
Nowadays, classification and Out-of-Distribution (OoD) detection in the few-shot setting remain challenging aims, mainly due to the rarity and limited number of samples in the few-shot setting, and because of adversarial attacks. Accomplishing these aims is important for critical systems in safety, security, and defence. In parallel, OoD detection is challenging since deep neural network classifiers assign high confidence to OoD samples far from the training data. To address such limitations, we propose the Few-shot ROBust (FROB) model for classification and few-shot OoD detection. We devise a methodology for improved robustness and reliable confidence prediction for few-shot OoD detection. We generate the support boundary of the normal class distribution and combine it with few-shot Outlier Exposure (OE). We propose a self-supervised learning few-shot confidence boundary methodology based on generative and discriminative models, including classification. The main contribution of FROB is the combination of the generated boundary in a self-supervised learning manner and the imposition of low confidence at this learned boundary. FROB implicitly generates strong adversarial samples on the boundary and forces samples from OoD, including our boundary, to be assigned lower confidence by the classifier. FROB achieves generalization to unseen anomalies and adversarial attacks, with applicability to unknown, in-the-wild test sets that do not correlate with the training datasets. To improve robustness, FROB redesigns and streamlines OE to work even for zero-shots. By including our learned boundary, FROB effectively reduces the threshold linked to the model's few-shot robustness, and keeps the OoD performance approximately constant and independent of the number of few-shot samples. The few-shot robustness analysis and evaluation of FROB on different image sets and on One-Class Classification (OCC) data show that FROB achieves competitive state-of-the-art performance and outperforms benchmarks in terms of robustness to the outlier OoD few-shot sample population and variability.
Reject
The paper proposes the Few-shot ROBust (FROB) model for classification and few-shot OOD detection. While the paper has some interesting contributions, all the reviewers felt that the current version falls short of the ICLR acceptance threshold, and the consensus decision was to reject. I encourage the authors to revise the paper based on the reviewers' feedback and resubmit to a different venue. As Reviewer r838 pointed out, this paper uses the TinyImages dataset, which has since been retracted. I appreciate that prior work used TinyImages, but please see "Why it is important to withdraw the dataset" https://groups.csail.mit.edu/vision/TinyImages/ and consider not using the TinyImages dataset for future revisions.
train
[ "11bRVTx0vE_", "TBvxvjMoSCc", "ql4HrYu7sgk", "9cNzWaCAuv7", "RZzejz1uiEF", "8hf73a63UMQ", "hMdov74PBIM", "O3NQjIQWUnB", "lO7iltXMcZ_", "-jHoij5Wr4_", "se8yyBPQb_M", "-iJjv3Rb8ry", "42jA3O2Z5VY", "9Gww1a-lFZ", "UTfKswe9pVl" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for the elaborate answers here. I think my main concerns are still here which is the lack of deeper discussion on related work, the lack of comparisons and modern datasets still causes me to keep my initial score.", "The paper addresses an important issue of Out-of-Distribution detection in a few-shot...
[ -1, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5 ]
[ -1, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2 ]
[ "RZzejz1uiEF", "iclr_2022_mZsZy481_F", "iclr_2022_mZsZy481_F", "UTfKswe9pVl", "8hf73a63UMQ", "hMdov74PBIM", "TBvxvjMoSCc", "lO7iltXMcZ_", "-jHoij5Wr4_", "9Gww1a-lFZ", "-iJjv3Rb8ry", "42jA3O2Z5VY", "ql4HrYu7sgk", "iclr_2022_mZsZy481_F", "iclr_2022_mZsZy481_F" ]
iclr_2022_Ctjb37IOldV
A Variance Principle Explains why Dropout Finds Flatter Minima
Although dropout has achieved great success in deep learning, little is known about how it helps the training find a good generalization solution in the high-dimensional parameter space. In this work, we show that training with dropout finds a neural network with a flatter minimum compared with standard gradient descent training. We further study, through experiments, the underlying mechanism of why dropout finds flatter minima. We propose a Variance Principle: the variance of a noise is larger along the sharper directions of the loss landscape. Existing works show that SGD satisfies the variance principle, which leads the training to flatter minima. Our work shows that the noise induced by dropout also satisfies the variance principle, which explains why dropout finds flatter minima. In general, our work points out that the variance principle is an important similarity between dropout and SGD that leads the training to find flatter minima and obtain good generalization.
Reject
The authors make an experimental case that dropout aids generalization by promoting "flatter minima". The reviewers felt that the work reported in this paper makes a useful step forward on a question of central interest. The consensus view was that the total weight of evidence presented was not sufficient for publication in ICLR. The paper could be strengthened with more extensive and varied experiments and/or theoretical analysis.
test
[ "p49C2H3xNOX", "jbPDO-AuR_E", "lzXtKZHNwWl", "lrMoQ744F2R", "DHdgJiiRv3m", "LZhutfsNoYf", "YO1NDwggiLC", "PYxmLVZsi6", "JMYrOUgH0gq", "oK7PBJHHbtT", "I18yUIRNiec4", "HVcZJ03Ylh", "sBqjQsAvliK", "yh2ujTRcB7h", "eWby_y1ganzK", "nTfErB7Z2L", "fYBoyN5OFNp", "jyi0TNOvONv", "itxUS5RkeC...
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", ...
[ " Dear reviewers,\n\nWe have updated the draft with new experiments on transformer and Multi30k dataset. We have done experiments on the first and the second method and obtain similar results in the new experimental setup. Up to now, our experiments are conducted over several representative datasets, i.e., MNIST, ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 2, 5 ]
[ "8tKUqClzVUW", "L_lUAdJWhxx", "oK7PBJHHbtT", "sBqjQsAvliK", "8tKUqClzVUW", "L_lUAdJWhxx", "HVcZJ03Ylh", "sBqjQsAvliK", "oK7PBJHHbtT", "YO1NDwggiLC", "galbF5aLfw", "I18yUIRNiec4", "itxUS5RkeCu", "L_lUAdJWhxx", "iclr_2022_Ctjb37IOldV", "8tKUqClzVUW", "galbF5aLfw", "itxUS5RkeCu", "i...
iclr_2022_YTtMaJUN_uc
Learning Universal User Representations via Self-Supervised Lifelong Behaviors Modeling
Universal user representation is an important research topic in industry, and is widely used in diverse downstream user analysis tasks, such as user profiling and user preference prediction. With the rapid development of Internet service platforms, extremely long user behavior sequences have been accumulated. However, existing research has little ability to model universal user representations based on lifelong behavior sequences accumulated since user registration. In this study, we propose a novel framework called the Lifelong User Representation Model (LURM) to tackle this challenge. Specifically, LURM consists of two cascaded sub-models: (i) Bag of Interests (BoI) encodes user behaviors in any time period into a sparse vector with super-high dimension (e.g., 10^5); (ii) Self-supervised Multi-anchor Encoder Network (SMEN) maps sequences of BoI features to multiple low-dimensional user representations by contrastive learning. SMEN achieves almost lossless dimensionality reduction, mainly thanks to a novel multi-anchor module which can learn different aspects of user preferences. Experiments on several benchmark datasets show that our approach can outperform state-of-the-art unsupervised representation methods in downstream tasks.
Reject
Ultimately somewhat below the threshold based on the scores. The reviewers raise issues with the overall contribution, with the design/structure of the model/paper, and with the experiments. While there are some positive aspects, collectively the issues put the paper below the bar for acceptance.
test
[ "QPVrYRlBvW", "R6LyQrh3VIl", "BHUciVVsFe", "dGVGSU7s1GZ", "2_uRlZxXyyb", "l6IHw6Yobb8", "r2e2i7KUCRq", "eNrPmKm3x_", "32HdJr-T6YP", "27J_6UyoAW_", "5QlChtfl9EC" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This work proposes a universal Lifelong User Representation Model to encode user behaviors in any period with a sparse vector with super-high dimension and map these user features to user representations with almost lossless dimensionality reduction. In this way, the model could easily learn different aspects of u...
[ 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2022_YTtMaJUN_uc", "iclr_2022_YTtMaJUN_uc", "5QlChtfl9EC", "R6LyQrh3VIl", "iclr_2022_YTtMaJUN_uc", "5QlChtfl9EC", "QPVrYRlBvW", "QPVrYRlBvW", "R6LyQrh3VIl", "QPVrYRlBvW", "iclr_2022_YTtMaJUN_uc" ]
iclr_2022_F7_odJIeQ26
Pretrained Language Models are Symbolic Mathematics Solvers too!
Solving symbolic mathematics has always been in the arena of human ingenuity, requiring compositional reasoning and recurrence. However, recent studies have shown that large-scale language models such as transformers are universal and, surprisingly, can be trained as a sequence-to-sequence task to solve complex mathematical equations. These large transformer models need enormous amounts of training data to generalize to unseen symbolic mathematics problems. In this paper, we present a sample-efficient way of solving symbolic tasks by first pretraining the transformer model on language translation and then fine-tuning the pretrained transformer model to solve the downstream task of symbolic mathematics. We achieve comparable accuracy on the integration task with our pretrained model while using around $1.5$ orders of magnitude fewer training samples than the state-of-the-art deep learning approach for symbolic mathematics. The test accuracy on differential equation tasks is considerably lower compared with integration, as they need higher-order recursions that are not present in language translation. We pretrain our model with different pairs of language translations. Our results show language bias in solving symbolic mathematics tasks. Finally, we study the robustness of the fine-tuned model on symbolic math tasks against distribution shift, and our approach generalizes better in distribution shift scenarios for function integration.
Reject
This paper presents a method for solving symbolic mathematics tasks. It first pretrains a transformer model on language translation, and then fine-tunes the pretrained model on the downstream mathematics tasks. It contains interesting points, but our reviewers have serious concerns which are not fully resolved in the rebuttal. For the integration task, the proposed method achieves good results compared with Lample & Charton (2019) (LC) with much less training data. However, as the authors also noted (see the rebuttal), the higher accuracies in LC are achieved with more data. If the authors could also at least show how much data the proposed method needs to achieve the best result in LC, it would be very helpful for understanding the value of this work. In addition, the proposed method did not show similar improvements on the ODE task, so it is hard to see how this proposed method can be generally useful. Our reviewers also have big concerns about the writing. Many sentences are really confusing.
train
[ "12T3w_tfAv2", "g1Geg-zYggH", "RBhGMQjxirb", "ArK-mqFGtWc", "lHSHcxsd29S", "FQER8_zn_Ge", "QhY4SBq5c9U", "0Oy1gl0zyMG", "Kas3b8prODP", "S_MjRFZW-di", "AoD98eYmSNS", "YzTakbsvfNG", "wNR_MapawZd", "huEMwr30meP", "2quVtFDUFaR", "aaZCAQ6IwEE" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer 4ER6,\n\nThank you very much for all your efforts during the review. We still sincerely look forward to your reply, and we are wondering if our response has addressed your concerns. We appreciate your time and consideration. \n\nMany thanks,\n\n Paper3389 Authors", " Dear Reviewer 3m2T,\n\nThank y...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 1, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 5 ]
[ "aaZCAQ6IwEE", "2quVtFDUFaR", "huEMwr30meP", "wNR_MapawZd", "iclr_2022_F7_odJIeQ26", "QhY4SBq5c9U", "aaZCAQ6IwEE", "AoD98eYmSNS", "YzTakbsvfNG", "2quVtFDUFaR", "huEMwr30meP", "wNR_MapawZd", "iclr_2022_F7_odJIeQ26", "iclr_2022_F7_odJIeQ26", "iclr_2022_F7_odJIeQ26", "iclr_2022_F7_odJIeQ2...
iclr_2022_lEoFUoMH2Uu
Foreground-attention in neural decoding: Guiding Loop-Enc-Dec to reconstruct visual stimulus images from fMRI
The reconstruction of visual stimulus images from functional Magnetic Resonance Imaging (fMRI) has received extensive attention in recent years, as it offers a possible way to interpret the human brain. Due to the high-dimensional and high-noise characteristics of fMRI data, how to extract stable, reliable and useful information from fMRI data for image reconstruction has become a challenging problem. Inspired by the mechanism of human visual attention, in this paper we propose a novel method for reconstructing visual stimulus images, which first decodes the distribution of visual attention from fMRI, and then reconstructs the visual images guided by this visual attention. We define visual attention as foreground attention (F-attention). Because the human brain is strongly folded into sulci and gyri, some spatially adjacent voxels are not actually connected. It is therefore necessary to consider global information when decoding fMRI, so we introduce a self-attention module for capturing global information into the process of decoding F-attention. In addition, in order to obtain more loss constraints during the training of the encoder-decoder, we also propose a new training strategy called Loop-Enc-Dec. The experimental results show that the F-attention decoder successfully decodes visual attention from fMRI, and that the Loop-Enc-Dec guided by F-attention can also reconstruct the visual stimulus images well.
Reject
This paper presents a model for reconstructing images from fMRI recordings, based on an encoder and decoder used in a loop. The reviewers were unanimous in their opinion that this paper is not ready for publication at this stage. They raised concerns ranging from the quality of the results and how to compare them to previous methods, to the justification behind different modeling choices. The authors were gracious in their responses to the reviewers. I do not recommend acceptance at this stage.
train
[ "Hno_DK_s5z", "ClYE4YlfR3y", "dj2Ra0bnfDZ", "UcrhwVPEHDx", "Ss5B2BRAQZ0", "i4ZeURLM9No", "VFIR3YdfFU", "zt9YsUE9xPz" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for constructive suggestions and comments on our work, as well as affirmation of our innovations. Our replies are as follows:\n\nIn this paper, we propose an algorithm for reconstructing visual stimulation image based on foreground attention. The foreground attention of the image is successfully decoded...
[ -1, -1, -1, -1, -1, 3, 3, 5 ]
[ -1, -1, -1, -1, -1, 3, 5, 5 ]
[ "i4ZeURLM9No", "zt9YsUE9xPz", "zt9YsUE9xPz", "VFIR3YdfFU", "VFIR3YdfFU", "iclr_2022_lEoFUoMH2Uu", "iclr_2022_lEoFUoMH2Uu", "iclr_2022_lEoFUoMH2Uu" ]
iclr_2022_iGffRQ9jQpQ
Enhancing semi-supervised learning via self-interested coalitional learning
Semi-supervised learning holds great promise for many real-world applications, due to its ability to leverage both unlabeled and expensive labeled data. However, most semi-supervised learning algorithms still heavily rely on the limited labeled data to infer and utilize the hidden information from unlabeled data. We note that any semi-supervised learning task under the self-training paradigm also hides an auxiliary task of discriminating label observability. Jointly solving these two tasks allows full utilization of information from both labeled and unlabeled data, thus alleviating the problem of over-reliance on labeled data. This naturally leads to a new learning framework, which we call Self-interested Coalitional Learning (SCL). The key idea of SCL is to construct a semi-cooperative "game", which forges cooperation between a main self-interested semi-supervised learning task and a companion task that infers label observability to facilitate main task training. We show with theoretical deduction its connection to loss reweighting on noisy labels. Through comprehensive evaluation on both classification and regression tasks, we show that SCL can consistently enhance the performance of semi-supervised learning algorithms.
Reject
This paper proposes a new method for the important problem of semi-supervised learning. This method relies on an auxiliary task, label observability prediction, to weight the examples according to the confidence in their pseudo-labels, so as to avoid the propagation of errors encountered in self-training. Limited experiments show that the proposed method can compete with other methods in terms of performance or training time. On the positive side, all evaluators agree on the potential value of the proposed approach, which is generic in nature. On the negative side, the experimental evaluation, although strengthened during the discussion, is not yet strong enough to have really convinced of the real merits of the method. In particular, comparisons with the state of the art still need to be improved. In addition, the paper would benefit from some rewriting, in particular of the mathematics (e.g. the d notation for task B should be avoided as suggested by one reviewer, there is a misplaced partial derivative in equation 6). The authors could also simplify their derivation by using the envelope theorem. I therefore recommend rejection, with an encouragement to strengthen the experimental part, and to improve the derivation of the proposed method.
val
[ "KCCRqM1Qe6m", "GiGu7IJ53f0", "gIkNoKjApzl", "fJZQpBf9MJ5", "A_GzS7wkNRs", "JeT4nIollW", "ogQvAaJOO7D", "eTyxuAwWzd1", "vDOChNGG7pM", "1hITkMtf2U_", "cnjeF2BHI8", "JK4cqFG8BWi", "AEO-hf3fpdq", "-1vV2ZffKuG", "AFsZsNnex98", "ZyAxzCIhYGN", "7JcX58NEYVU" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a novel framework for semi-supervised learning, that solves two issues of previous methods: 1) over-reliance on labeled data and 2) error accumulation. It shows that jointly solving the main task together with another task (that discriminates whether the data label is real or not) leads to bette...
[ 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6 ]
[ 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2022_iGffRQ9jQpQ", "iclr_2022_iGffRQ9jQpQ", "fJZQpBf9MJ5", "A_GzS7wkNRs", "eTyxuAwWzd1", "ogQvAaJOO7D", "ZyAxzCIhYGN", "ZyAxzCIhYGN", "7JcX58NEYVU", "GiGu7IJ53f0", "GiGu7IJ53f0", "KCCRqM1Qe6m", "KCCRqM1Qe6m", "iclr_2022_iGffRQ9jQpQ", "iclr_2022_iGffRQ9jQpQ", "iclr_2022_iGffRQ9jQp...
iclr_2022_U9zTUXVdoIr
GSmooth: Certified Robustness against Semantic Transformations via Generalized Randomized Smoothing
The vulnerability of deep learning models to adversarial examples and semantic transformations has limited their application in risk-sensitive areas. The recent development of certified defense approaches like randomized smoothing provides a promising direction towards building reliable machine learning systems. However, current certified defenses cannot handle complex semantic transformations like rotational blur and defocus blur, which are common in practical applications. In this paper, we propose a generalized randomized smoothing framework (GSmooth) for certified robustness against semantic transformations. We provide both a unified and rigorous theoretical framework and scalable algorithms for certified robustness against complex semantic transformations. Specifically, our key idea is to use a surrogate image-to-image neural network to approximate a transformation, which provides a powerful tool for studying the properties of semantic transformations, and to certify the transformation based on this neural network. Experiments on multiple types of semantic perturbations and corruptions using multiple datasets demonstrate the effectiveness of our approach.
Reject
This paper proposes a more generalized form of certified robustness and attempts to provide new results on applying randomized smoothing to semantic transformations such as different types of blurs or distortions. The main idea is to use an image-to-image neural network to approximate semantic transformations, and then certify robustness based on bounds on that neural network. The authors provide empirical results on standard benchmark datasets like MNIST and CIFAR showing that their method can achieve improved results on some transformations compared to prior work. The review committee appreciates the authors taking the time to attempt to respond to the concerns of all reviewers, and for updating and improving their work during the rebuttal process. The committee is glad to see that they do provide empirical evidence of improvement to common-corruption robustness, compared to AugMix (one of the state-of-the-art approaches for standard common-corruption robustness) and TSS. However, the reviewers still have concerns about the novelty of the paper. The main novelty is not improvement for resolvable transformations (prior works that the authors cite perform about the same or better), but rather, is the ability to handle non-resolvable transformations. The reviewers agree that robustness to non-resolvable transformations is important; however, the reviewers think certified robustness to non-resolvable transformations is not meaningful, because they are only being certified with respect to a neural network that is trained to approximate those non-resolvable transformations. Without MTurk studies to confirm how good the neural network's non-resolvable transforms are, the reviewers do not find certified robustness here meaningful.
val
[ "ER_nh7pfXtz", "WF8oold-pDJ", "tO7p6oAbGJ", "aLSFERo7fay", "AyrBmg1znzV", "4N0lu2yE9Ed", "IlUCSb5f2N", "Eqjqq9eCWKj", "HJRX4qm5asO", "_ggUcpH-36", "Kr_9touJt1f", "2XcR_TzyezX", "0eJ7AxyVKlZ", "Leiy3-8mEo5", "hhMEJHmXp1i", "fiBHnjfUGIy", "DMRrN_INj3d", "CLYy96k3U9R", "hng4TU_XCk4"...
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", ...
[ "This paper proposes a more generalized form of certified robustness and attempts to provide new results on applying randomized smoothing to semantic transformations such as different types of blurs or distortions. The main idea is to use an image-to-image neural network to approximate semantic transformations, and...
[ 5, -1, -1, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 8 ]
[ 3, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5 ]
[ "iclr_2022_U9zTUXVdoIr", "iclr_2022_U9zTUXVdoIr", "ceJwAFC7x7", "ER_nh7pfXtz", "IlUCSb5f2N", "lwfU9HQsETj", "hng4TU_XCk4", "iclr_2022_U9zTUXVdoIr", "iclr_2022_U9zTUXVdoIr", "Leiy3-8mEo5", "a5XBgMJEany", "0eJ7AxyVKlZ", "hng4TU_XCk4", "hhMEJHmXp1i", "ceJwAFC7x7", "ER_nh7pfXtz", "iclr_2...
iclr_2022_rq1-7_lwisw
Beyond Object Recognition: A New Benchmark towards Object Concept Learning
Understanding objects is a central building block of artificial intelligence, especially for embodied AI. Even though object recognition excels with deep learning, current machines still struggle to learn higher-level knowledge, e.g., what attributes an object has and what we can do with it. In this work, we propose a challenging Object Concept Learning (OCL) task to push the envelope of object understanding. It requires machines to reason out object affordances and simultaneously give the reason: what attributes make an object possess these affordances. To support OCL, we build a densely annotated knowledge base including extensive labels for three levels of object concept: categories, attributes, and affordances, together with their causal relations. By analyzing the causal structure of OCL, we present a strong baseline, the Object Concept Reasoning Network (OCRN). It leverages causal intervention and concept instantiation to infer the three levels following their causal relations. In extensive experiments, OCRN effectively infers object knowledge while following the causalities well. Our data and code will be publicly available.
Reject
This paper presents work on a dataset for object concept learning. The main contributions include causal relations in the dataset and a method (OCRN). The initial reviews pointed to concerns over the nature of the causal relations, the presentation of the paper, and the OCRN method and its motivations / use of the do-operator. The reviewers engaged in significant discussions after considering the authors' responses and the others' reviews. After this deliberation, the concerns over the dataset, its annotations, and the presentation of the methods were deemed to be better served by a full revision and reconsideration of the paper. As such, the paper is not recommended for acceptance at this time.
train
[ "8zvSgALbjG", "zTAIqmXURgi", "_QlDAvBtl5", "X-pllBlYHjW", "7IeWzoO5zsP", "Ua3xPK9ukY", "xpCofymLklH", "Fmtrpqor53", "CAG57klo1V0", "I_5SGV0ZATn", "pUENnRCGVV2", "PDKY3EhvrJK", "BtZidM-7n-h", "Se5Q8ObVrgC", "NVEaOsRMWzr", "Ut5tgR1sgLk" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a new large-scale benchmark dataset for object concept learning, which consists of recognizing attributes, affordances, and their causal raltions about objects in input images. Detailed annotations of object categories, attributes and affordances on both category and instance levels, and their ...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 5 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "iclr_2022_rq1-7_lwisw", "_QlDAvBtl5", "Ut5tgR1sgLk", "7IeWzoO5zsP", "Ua3xPK9ukY", "Ut5tgR1sgLk", "Fmtrpqor53", "8zvSgALbjG", "I_5SGV0ZATn", "NVEaOsRMWzr", "BtZidM-7n-h", "iclr_2022_rq1-7_lwisw", "Se5Q8ObVrgC", "iclr_2022_rq1-7_lwisw", "iclr_2022_rq1-7_lwisw", "iclr_2022_rq1-7_lwisw" ]
iclr_2022_IhkSFe9YqMy
Experience Replay More When It's a Key Transition in Deep Reinforcement Learning
We propose an experience replay mechanism in Deep Reinforcement Learning based on Add Noise to Noise (AN2N), which requires the agent to replay more experience containing key states, abbreviated as Experience Replay More (ERM). In the AN2N algorithm, we refer to the states that warrant more exploration as the key states. We found that how the transitions containing key states participate in updating the policy and Q networks has a significant impact on the performance improvement of the deep reinforcement learning agent, and that the problem of catastrophic forgetting in neural networks is further magnified in the AN2N algorithm. Therefore, we change the previous strategy of uniform sampling of experience transitions. We sample the transitions used for experience replay according to whether a transition contains key states and whether it is among the most recently generated, which is the core idea of the ERM algorithm. The experimental results show that this algorithm can significantly improve the performance of the agent. We combine the ERM algorithm with Deep Deterministic Policy Gradient (DDPG), Twin Delayed Deep Deterministic policy gradient (TD3), and Soft Actor-Critic (SAC), and evaluate the algorithms on the suite of OpenAI Gym tasks. SAC with ERM achieves a new state of the art, and DDPG with ERM can even exceed the average performance of SAC under certain random seeds, which is remarkable.
Reject
The paper proposes an interesting way of prioritizing samples in replay that is compatible with many RL methods. It is evaluated experimentally on different tasks and with different RL algorithms. The reviewers highly appreciated the revised paper and the detailed replies and discussions. While this iteration improved the paper substantially, it is still not ready for publication in its current form. In particular: - The paper is still not self-contained enough - The reviewers are still not convinced about the statistical significance - More tasks should be added - PER needs to be added as a baseline The authors promised those changes for the final version, but those are so substantive that the paper will need to go through another complete review cycle. Hence, we'd like to encourage the authors to re-submit at a different venue. P.S.: Careful with double-blind submissions; acknowledgements should not be included.
train
[ "8dM678ZJoCD", "k7oIU45rn7L", "syrvnAizrmB", "KK1RRUblI4u", "Z6l471kADSP", "TV7WbhlLbXx", "HqTyG8YgkWy", "Hd9HLndBUSw", "wfPIVTCiQuU", "WRB_SHlJXxo", "TSTXhLUgtfQ", "F__FC8ke631", "qEtqYeSjmCj", "HmCZlc12UR", "3ejhJDQFAIL", "8W092kZvNHp", "JwdedW05xmH" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I've read the two papers you recommended and realized the problem of insufficient runs. In the follow-up work, I will increase the runs in each task to make the experimental results more statistical significance. Thank you again for your patient and detailed reply.", " I appreciate the aspects in which you plan...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 3, 1, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 4 ]
[ "k7oIU45rn7L", "Z6l471kADSP", "TV7WbhlLbXx", "wfPIVTCiQuU", "HqTyG8YgkWy", "F__FC8ke631", "qEtqYeSjmCj", "wfPIVTCiQuU", "TSTXhLUgtfQ", "JwdedW05xmH", "3ejhJDQFAIL", "8W092kZvNHp", "HmCZlc12UR", "iclr_2022_IhkSFe9YqMy", "iclr_2022_IhkSFe9YqMy", "iclr_2022_IhkSFe9YqMy", "iclr_2022_IhkS...
iclr_2022_4j4qVy8OQA1
A Koopman Approach to Understanding Sequence Neural Models
Deep learning models are often treated as "black boxes". Existing approaches for understanding the decision mechanisms of neural networks provide limited explanations or depend on local theories. Recently, a data-driven framework based on Koopman theory was developed for the analysis of nonlinear dynamical systems. In this paper, we introduce a new approach to understanding trained sequence neural models: the Koopman Analysis of Neural Networks (KANN) method. At the core of our method lies the Koopman operator, which is linear, yet it encodes the dominant features of the network latent dynamics. Moreover, its eigenvectors and eigenvalues facilitate understanding: in the sentiment analysis problem, the eigenvectors highlight positive and negative n-grams; and, in the ECG classification challenge, the eigenvectors capture the dominant features of the normal beat signal.
Reject
This paper proposes to apply the Koopman operator theory framework to analyzing sequence neural models. The authors considered two particular applications, namely sentiment analysis and ECG (electrocardiogram) classification. Reviewers generally agree that the results obtained on the two tasks are interesting. However, there are concerns that the paper lacks methodological novelty (concerning the Koopman operator framework, a point the authors acknowledged) and that the paper would be more suited to an applied conference and/or journal.
train
[ "BS3wScEXDet", "96d3i0Hru4s", "gEP2W9pgQ3H", "29S_Uzp3vVz", "Wv70l_rHmSg", "JN-i7L9YalG", "1I7Y_PTb_h", "5UsatXhzUWL", "TTJxYDTvqUz5", "FWuHJ9p8tv", "0xIbnF_47lB", "6R2gMZVGJjz", "yCD6E-20ky", "z9ezjXoPaB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper applies the developed theory of the Koopman operator to analyzing neural sequential models empirically, on two tasks: ECG and sentiment analysis, and derives some insights on what the models are doing using the spectrum of the Koopman operator.\n Strengths: \n1. Applies the Koopman operator framework to ...
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 3 ]
[ "iclr_2022_4j4qVy8OQA1", "1I7Y_PTb_h", "29S_Uzp3vVz", "JN-i7L9YalG", "z9ezjXoPaB", "0xIbnF_47lB", "yCD6E-20ky", "yCD6E-20ky", "6R2gMZVGJjz", "6R2gMZVGJjz", "BS3wScEXDet", "iclr_2022_4j4qVy8OQA1", "iclr_2022_4j4qVy8OQA1", "iclr_2022_4j4qVy8OQA1" ]
iclr_2022_6NT1a56mNim
Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents
Can world knowledge learned by large language models (LLMs) be used to act in interactive environments? In this paper, we investigate the possibility of grounding high-level tasks, expressed in natural language (e.g., "make breakfast"), to a fixed set of actionable steps (e.g., "open fridge"). While prior work focused on learning from explicit step-by-step examples of how to act, we surprisingly find that if pre-trained LMs are large enough and prompted appropriately, they can effectively decompose high-level tasks into low-level plans without any further training. However, the plans produced naively by LLMs often cannot map precisely to admissible actions. We propose a procedure that conditions on existing demonstrations and semantically translates the plans to admissible actions. Our evaluation in the recent VirtualHome environment shows that the resulting method substantially improves executability over the LLM baseline. The human evaluation we conducted reveals a trade-off between executability and correctness, but shows a promising sign towards extracting actionable knowledge from language models. Videos at https://sites.google.com/view/language-model-as-planner
Reject
This manuscript presents a method to refine high-level task descriptions into mid-level executable steps. The idea of using language models to generate steps for a robot to follow is very interesting. Reviewer concerns focused on the general applicability of the approach and the evaluation. Reviewers pointed out that the method is tied to VirtualHome, which has various properties that do not hold in general: the action space is small, the action space is very sparse, and objects tend to be unique. First, the method enumerates a sentence for every possible action and object combination in the environment. The fact that VirtualHome has few verbs and few objects, and that neither of these has complex additional structure (adjectives, adverbs, etc.), means that this is practical. But in any other practical setting this will be impossible. The manuscript mentions this limitation and hints at possible ways to resolve it. Second, the method requires that the action space be incredibly sparse. Moreover, a set of common-sense rules is needed, which is environment-specific and must be hand-curated. VirtualHome disallows microwaving a cup, for example. It also disallows opening the TV. Both of these are valid actions that happen all the time. Third, the method requires that objects be unique. If multiple plates, vacuum cleaners, lotions, etc. existed and had to be manipulated, for example, there would be no mechanism to refer to any one plate consistently. The model could generate something like "the first plate" but how to actually execute such an action is far from clear. This third issue is related to the problem of grounding. Normally, grounding means connecting an abstract concept to something concrete in the environment. All of the grounding that is performed here is by virtue of VirtualHome having unique objects in its environments and the actions not requiring multiple instances of the same object. This is not addressing the problem of grounding. Reviewers requested that grounding be removed from the manuscript. This would significantly enhance it, as the model is inherently incapable of grounding, as the authors say: "Indeed, one limitation of our approach is that we do not condition on environment state". Reviewers took issue with details of the evaluation, which are largely a consequence of the choice of VirtualHome. Sometimes this manifested as strange results like models outperforming humans in terms of correctness. As reviewers pointed out, this is worrisome. Reviewers were also concerned about the title. It implies that language models are zero-shot planners, but this is not the case. They are instead able to decompose actions into mid-level steps. Reviewers suggested that it would be better to focus the title and tone of the manuscript on extracting task/subtask structures from language models. The idea presented here, that language models can break tasks into subtasks, is interesting. But the manuscript goes a step further and discusses embodied agents, which to reviewers appeared to be a reach: there is no grounding and in no sense is the output of the language model any different if the agent is embodied. Even the most positive reviewers felt that discussing embodied agents is unhelpful: it would be better to focus on task/subtask structures. And indeed, this would be more general. All of the concerns that reviewers had around the evaluation would be alleviated by focusing on a language task instead.
And the effect of a narrow space of actions, constraints on those actions, and multiple objects of the same class, could be evaluated and reported. Even if the authors had to collect such a corpus, given the difficulties they describe in evaluating on VirtualHome, this would be less of a burden. This could be a strong submission in the future.
test
[ "axQZ2qqPqjo", "BBSB4sm78o_", "4VW_MCtJD7y", "9ppcziatwNR", "7h542iTSMl", "Y5u6s2LIpvj", "1WcQeJLzTYP", "zKHB8ZZA5Kj", "-fXBfp79LWs", "5e6o998c0_", "N3yxAVW7p6_", "_-w-tMWNo__", "l7krZQmUaNY", "jUPnLYsCLb8", "FIE44bBPz8", "1BmSEsKxpXd", "09FPiIlw9tM", "SMoh1WY9l0V", "eqkJDNnwV3c"...
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", ...
[ " That makes without providing context (current observation + history) that would be hard to find particular objects. I look forward to this future direction.", " Thanks for the reply and we appreciate the further clarification of your concern!\n\n- Regarding your suggestion about human evaluations, we will take ...
[ -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "Y5u6s2LIpvj", "9ppcziatwNR", "-fXBfp79LWs", "7h542iTSMl", "_-w-tMWNo__", "l7krZQmUaNY", "5e6o998c0_", "iclr_2022_6NT1a56mNim", "1BmSEsKxpXd", "eqkJDNnwV3c", "iclr_2022_6NT1a56mNim", "jUPnLYsCLb8", "09FPiIlw9tM", "N3yxAVW7p6_", "zKHB8ZZA5Kj", "zKHB8ZZA5Kj", "XTeq2BxDIU", "IXBMuHgTw...
iclr_2022_jNB6vfl_680
Global Magnitude Pruning With Minimum Threshold Is All We Need
Neural network pruning remains a very important yet challenging problem to solve. Many pruning solutions have been proposed over the years with high degrees of algorithmic complexity. In this work, we shed light on a very simple pruning technique that achieves state-of-the-art (SOTA) performance. We showcase that magnitude-based pruning, specifically global magnitude pruning (GP), is sufficient to achieve SOTA performance on a range of neural network architectures. In certain architectures, the last few layers of a network may get over-pruned. For these cases, we introduce a straightforward method to mitigate this: we preserve a certain fixed number of weights in each layer of the network to ensure that no layer is over-pruned. We call this the Minimum Threshold (MT). We find that GP, combined with MT when needed, achieves SOTA performance on all datasets and architectures tested, including ResNet-50 and MobileNet-V1 on ImageNet. Code is available on GitHub.
Reject
The paper claims that one of the most common (and obvious) pruning methods in the literature today (global magnitude pruning) is "overlooked" and "seen as a mediocre baseline by the community." As an active member of the pruning research community myself, I can attest that this is simply not true. I am in strong agreement with reviewer MHY2 and - after reading the discussion around that review and the paper itself in detail - I confidently recommend rejection. Magnitude pruning itself dates back decades, at least to the work of Janowski (Pruning vs. Clipping in Neural Networks, 1988). The paper is correct that *global* magnitude pruning (in which all weights are compared in a layer-agnostic manner) was largely ignored in favor of layer-wise magnitude pruning (i.e., pruning all layers by the same amount) in much of the work that popularized magnitude pruning (e.g., Han et al., 2015). However, global magnitude pruning has become much more popular since that time. In work establishing the lottery ticket hypothesis, Frankle and Carbin (The Lottery Ticket Hypothesis) use it in certain cases and - later - in all cases (Frankle et al., Linear Mode Connectivity and the Lottery Ticket Hypothesis). In the past several years, global pruning in general has become the de facto way to use all new pruning heuristics (e.g., SNIP: Single-Shot Network Pruning based on Connection Sensitivity; Picking Winning Tickets Before Training by Preserving Gradient Flow; Pruning Neural Networks without Any Data by Iteratively Conserving Synaptic Flow). Moreover, other papers have specifically advocated that global magnitude pruning is state-of-the-art within recent years at this very conference: Comparing Rewinding and Fine-Tuning in Neural Network Pruning (Renda et al., ICLR 2020 oral): "We propose a pruning algorithm...that matches state-of-the-art tradeoffs between Accuracy and Parameter-Efficiency across networks and datasets:...globally prune the 20% of weights with the lowest magnitudes." (This paper does not cite Renda et al. despite the fact that it is a prominent paper that directly contradicts the purported problem that the paper relies on to support the significance of the findings.) In short, in the pruning literature, the idea that global pruning, magnitude pruning, or global magnitude pruning is overlooked or is not recognized as a strong baseline is simply preposterous. The reason that global magnitude pruning has "largely been ignored in recent years, generally being relegated to the position of a baseline for comparison" is because it is a simple technique whose efficacy has long been known and established - exactly what a good "baseline for comparison" should be. The paper has narrowed its claims somewhat during the discussion and revision period, advocating for a one-shot global magnitude pruning strategy that "does not require any complex pruning frameworks like RL or sparsification schedules [or]...iterative procedure." To do so, however, the proposed method replaces each of these "complex" hyperparameters with another set: whether or not to use a minimum threshold (MT) and where to set it. Even if the approach isn't iterative, the hyperparameter search necessary to set it almost certainly is, and it is unclear whether searching for the MT value is any more efficient than the other approaches. The costs of this hyperparameter search need to be measured. And iterative pruning's costs can often be mitigated by making pruning gradual, something the paper considers superficially in the revisions. 
Finally, as reviewer MHY2 observes, one of the primary reasons papers *don't* use global magnitude pruning is that, although it leads to higher sparsities than layerwise magnitude pruning, it also often leads to higher FLOP counts. Although FLOP counts are a terrible indicator of real-world speedup, they are a much higher-fidelity indicator than parameter count, which neglects the fact that - in convolutional networks - a small number of parameters can lead to vastly more FLOPs if they operate on larger activation maps (i.e., before the activation maps have been downsampled). In the revisions, the paper gives a token nod (and a superficial dismissal) to this fact in Sections 4.3 and 6, but the paper needs to fully acknowledge this point by measuring and discussing its consequences. "Look[ing] at this in future work" is not enough. Due to these many concerns, I strongly recommend rejection.
train
[ "79G10pZCmYK", "GQ7wdtM1_9J", "5eFJtOqkxrk", "TPVJA_NWF5F", "5spwnKC501H", "7syqErFPl3r", "DO2jGBHftEi", "UDmTWZf_7Jf", "xJa7bDnhZf5", "_R5fhF0aBPD", "ffRjvlMD8L", "ZO-hWUgyRrez", "YQ7_HGVWFZ", "78UvGEriRLD", "a_iZTqGp7bK", "YxRi-jZDGteM", "yk3qGpNokH2", "-VmicmQ5F25" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for their comments and insights. While we have differing opinions with the reviewer on some aspects of the work (as discussed in the rebuttal), we nevertheless take the feedback seriously and appreciate the suggestions offered by the reviewer. Thank you.", "This paper revisits Global Pruni...
[ -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6 ]
[ -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 2 ]
[ "GQ7wdtM1_9J", "iclr_2022_jNB6vfl_680", "7syqErFPl3r", "DO2jGBHftEi", "UDmTWZf_7Jf", "xJa7bDnhZf5", "_R5fhF0aBPD", "ffRjvlMD8L", "ZO-hWUgyRrez", "78UvGEriRLD", "YQ7_HGVWFZ", "GQ7wdtM1_9J", "GQ7wdtM1_9J", "GQ7wdtM1_9J", "-VmicmQ5F25", "yk3qGpNokH2", "iclr_2022_jNB6vfl_680", "iclr_20...
iclr_2022_fGEoHDk0C
A framework of deep neural networks via the solution operator of partial differential equations
There is a close connection between deep neural networks (DNNs) and partial differential equations (PDEs). Many DNN architectures that can be modeled by PDEs have been proposed in the literature. However, their neural network design space is restricted due to the specific form of the PDEs, which prevents the design of more effective neural network structures. In this paper, we attempt to derive a general form of PDEs for the design of ResNet-like DNNs. To achieve this goal, we first formulate a DNN as an adjustment operator applied to a base classifier. Then, based on several reasonable assumptions, we show that the adjustment operator for ResNet-like DNNs is the solution operator of PDEs. To demonstrate the effectiveness of this general form of PDEs, we show that several effective networks can be interpreted by it, and we design a training method motivated by PDE theory to train DNN models for better robustness and less chance of overfitting. Theoretically, we prove that the robustness of a DNN trained with our method is certifiable and that our training method reduces the generalization gap. Furthermore, we demonstrate that DNNs trained with our method can achieve better generalization and are more resistant to adversarial perturbations than the baseline model.
Reject
The paper formulates ResNet-like classifiers as the evolution of a base classifier through an operator corresponding to a PDE, up to a given final time. Using a set of assumptions on the desired properties of the flow operator, the authors show that it can be obtained as the solution of a convection-diffusion equation. This generalizes ideas developed e.g. for Neural ODEs. The authors provide several examples showing that their formulation encompasses regularization methods proposed for deep NNs. They further provide robustness guarantees for a classifier defined according to their framework. They introduce an algorithm based on a restricted version of their framework and propose different experiments showing the increased robustness of their model to a family of adversarial attacks compared to baseline ResNets. The paper introduces an original idea, providing a very interesting connection between ResNets and PDEs. This allows the authors to exploit known properties of PDEs and opens the way to new theoretical insights on DNNs, while allowing the development of DNN models with proven properties. As mentioned, this generalizes the view of ResNets introduced in Neural ODEs. Besides this, the paper presents weaknesses. First, the form will make it accessible only to a very small audience in the ML community. No effort is made in the writing to introduce the required PDE concepts, which would help a lot in understanding and appreciating the contribution. This is a pity since, given the current trend on this topic, it could be of interest to a large community. Then, the use cases in the experiments focus solely on robustness properties and one type of attack. This illustrates only one aspect of the potential of the framework, and does not provide a strong case in support of the ideas introduced before. The global message carried by the paper then becomes unclear. Overall, the current version could be largely improved and this will certainly lead to a strong contribution.
val
[ "9zP3QW3fyv0", "8xASj2So4Zq", "6JS61l8PT_3", "P3VsL9pt-UV", "5yppHl1e1V0", "vxAO8sCVh3a", "-MXejCNkxA", "BrRMyeORxH3" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thanks for handling our submission. We have some concerns about Reviewer mSDj’s comments. We feel that Reviewer mSDj did not read our paper carefully at all. We feel Reviewer mSDj is quite irresponsible, and the comments reflect a serious misunderstanding of our submission. Below we elaborate on this in detail.\n...
[ -1, 6, 5, -1, -1, -1, 3, 6 ]
[ -1, 3, 4, -1, -1, -1, 4, 4 ]
[ "-MXejCNkxA", "iclr_2022_fGEoHDk0C", "iclr_2022_fGEoHDk0C", "8xASj2So4Zq", "BrRMyeORxH3", "6JS61l8PT_3", "iclr_2022_fGEoHDk0C", "iclr_2022_fGEoHDk0C" ]
iclr_2022_YmONQIWli--
Gotta Go Fast When Generating Data with Score-Based Models
Score-based (denoising diffusion) generative models have recently gained a lot of success in generating realistic and diverse data. These approaches define a forward diffusion process for transforming data to noise and generate data by reversing it (thereby going from noise to data). Unfortunately, current score-based models generate data very slowly due to the sheer number of score network evaluations required by numerical SDE solvers. In this work, we aim to accelerate this process by devising a more efficient SDE solver. Existing approaches rely on the Euler-Maruyama (EM) solver, which uses a fixed step size. We found that naively replacing it with other SDE solvers fares poorly - they either result in low-quality samples or become slower than EM. To get around this issue, we carefully devise an SDE solver with adaptive step sizes tailored to score-based generative models piece by piece. Our solver requires only two score function evaluations, rarely rejects samples, and leads to high-quality samples. Our approach generates data 2 to 10 times faster than EM while achieving better or equal sample quality. For high-resolution images, our method leads to significantly higher quality samples than all other methods tested. Our SDE solver has the benefit of requiring no step size tuning.
Reject
The paper proposes numerical method for solving SDEs that empirically are faster than previous approaches. Two reviewers felt the paper was above threshold, while two felt it was below threshold for acceptance. While the paper is borderline in this sense, all four reviewers noted that the paper lacked a theoretical justification and rested on empirical evidence for the usefulness of the approach. Several reviewers also pointed out that these empirical results are on the weak side. While the paper may add a potentially useful learning trick to the optimization literature, these two significant concerns put it on the side of a borderline reject.
train
[ "D1jBKzg3Zhr", "MswWB7t__VX", "RDPLpBjUe73", "BJuozbDdJk", "QK3Mg5GBD0P", "vblVVWhrsRp", "6rW-iWWf2vC", "EqEwIlkry9k", "JdElWxd7Oga", "RD-snkZRn-Q" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for their response and I am happy to keep my original rating.", "The paper presents a new SDE solver for the reverse process in score-based models. The algorithm is fast and offers high quality, and avoids some hyperparameter tuning. There is theoretical analysis on the stabili...
[ -1, 5, -1, 6, -1, -1, -1, -1, 8, 5 ]
[ -1, 3, -1, 3, -1, -1, -1, -1, 4, 3 ]
[ "6rW-iWWf2vC", "iclr_2022_YmONQIWli--", "EqEwIlkry9k", "iclr_2022_YmONQIWli--", "MswWB7t__VX", "RD-snkZRn-Q", "JdElWxd7Oga", "BJuozbDdJk", "iclr_2022_YmONQIWli--", "iclr_2022_YmONQIWli--" ]
iclr_2022_NgmcJ66xQz_
Divide and Explore: Multi-Agent Separate Exploration with Shared Intrinsic Motivations
One of the greatest challenges of reinforcement learning is efficient exploration, especially when training signals are sparse or deceptive. The main difficulty of exploration lies in the size and complexity of the state space, which makes simple approaches such as exhaustive search infeasible. Our work is based on two important observations. On the one hand, modern computing platforms are extremely scalable in terms of the number of computing nodes and cores, and can complete asynchronous and well load-balanced computational tasks very fast. On the other hand, divide-and-conquer is a commonly used technique in computer science for solving similar problems (such as SAT) that require efficient search in an extremely large state space. In this paper, we apply the idea of divide-and-conquer in the context of intelligent exploration. The resulting exploration scheme can be combined with various specific intrinsic rewards designed for the given task. In our exploration scheme, the learning algorithm can automatically divide the state space into regions, and each agent is assigned to explore one of these regions. All the agents run asynchronously and can be deployed onto modern distributed computing platforms. Our experiments show that the proposed method is highly efficient and is able to achieve state-of-the-art results in many RL tasks such as MiniGrid and Vizdoom.
Reject
The paper proposes a strategy for multiple learning agents to explore a large RL problem's state space via the divide-and-conquer principle. It prescribes a design for each agent's reward function which, when optimized, enables the agents to 'carve out' and cover different parts of the state space, yielding efficient exploratory behavior. The argument for the efficacy of the proposed method is purely experimental, with numerical benchmarking on complex simulated environments. The reviewers have raised several concerns that persist even after receiving detailed responses from the author(s). These include the lack of discussion about comparisons with seemingly closely related and applicable work, the perception that the comparisons of this method with others are not fair ("not apples to apples"), and the assessment that the ablation studies and investigation of the sensitivity to hyperparameters may not be comprehensive enough to make a compelling argument. Thus, keeping in mind the unanimous impression of the reviewers, I am of the view that while the paper contributes an interesting principle, more work is needed to argue for its acceptance in a clear way.
train
[ "GCix8UO0eB-", "x1NxKdj0pMw", "7pijVeTyGqX", "B7_UjXNo-Bv", "3qBdjjOb887", "9_4s8hQzaf", "E6WaHXCAkEm", "v7zC7VgBUuT", "jP8LFiMKME", "-bNwqYcmpXx", "xGt5hY-4nAG", "_5CjDXTvDnt" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " In reading the other reviews I also find a few points convincing which stops me from upgrading my score more:\nFrom Reviewer BfC2:\n>* The results are not apples-to-apples comparisons. Figures 2 and 5 and Tables 1 and 2 should show the total number of steps across all agents for each method. Since D&E uses 3 agen...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 3, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 5 ]
[ "v7zC7VgBUuT", "v7zC7VgBUuT", "9_4s8hQzaf", "E6WaHXCAkEm", "_5CjDXTvDnt", "-bNwqYcmpXx", "xGt5hY-4nAG", "jP8LFiMKME", "iclr_2022_NgmcJ66xQz_", "iclr_2022_NgmcJ66xQz_", "iclr_2022_NgmcJ66xQz_", "iclr_2022_NgmcJ66xQz_" ]
iclr_2022_Bd8JSwLVWQ5
Equivalence of State Equations from Different Methods in High-dimensional Regression
State equations were first introduced in approximate message passing (AMP) to describe the mean square error (MSE) in compressed sensing. Since then, a set of state equations has appeared in studies of logistic regression, robust estimators and other high-dimensional statistics problems. Recently, a convex Gaussian min-max theorem (CGMT) approach was proposed to study high-dimensional statistics problems, accompanied by another, different set of state equations. This paper provides a unified viewpoint on these methods and shows the equivalence of their reduction forms, which implies that the resulting state equations are essentially equivalent and can be converted into the same expression through parameter transformations. Combining these results, we show that these different state equations are derived from several equivalent reduction forms. We believe this equivalence sheds light on discovering a deeper structure in high-dimensional statistics.
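As a concrete reference point for the abstract's claim, one standard form of the AMP state equation (state evolution) for the LASSO is the scalar recursion

$$\tau_{t+1}^{2} = \sigma^{2} + \frac{1}{\delta}\,\mathbb{E}\Big[\big(\eta(X_{0} + \tau_{t} Z;\, \theta\,\tau_{t}) - X_{0}\big)^{2}\Big], \qquad \eta(x;\lambda) = \operatorname{sign}(x)\,(|x|-\lambda)_{+},$$

where $\delta = n/p$ is the sampling ratio, $\sigma^{2}$ is the noise variance, $X_{0}$ is a draw from the signal prior, and $Z \sim \mathcal{N}(0,1)$ is independent of $X_{0}$. The CGMT analysis of the same problem produces a different-looking fixed-point system in two scalar variables; the equivalence asserted above amounts to exhibiting a parameter transformation under which the two fixed points coincide. The notation here follows the usual AMP literature and is included only for orientation; the paper's own parametrization may differ.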
Reject
Three out of the four reviews rated this paper well below the acceptance threshold. Although the review scores show a relatively large spread, I think that the review contents are more or less coherent across the four reviewers. The equivalence of the state equations (SEs; a set of equations that macroscopically characterizes optimal solutions of certain high-dimensional regression problems) derived from three different approaches (AMP, CGMT, and LOO) is well expected to hold, as the optimal solutions should be independent of how their macroscopic characterization in the form of an SE is derived, and this paper concretely showed such equivalence to hold for three problems. More concretely, Theorem 1 states the equivalence of the SEs for the M-estimator derived from the three approaches, Theorem 2 states the equivalence of the SEs for LASSO derived from AMP and CGMT, and Theorem 4 states the equivalence of the SEs for logistic regression derived from LOO and CGMT. The main concern raised by all the reviewers is that this paper does not provide novel and significant insights as to why and how the equivalence arises. Some reviewers also pointed out that this paper lacks citations to the relevant statistical-mechanics literature, as well as that this paper contains many typos, grammatical errors, and inappropriate typesetting styles. The authors' responses were not instrumental in persuading the reviewers with negative evaluations. On the basis of these, I would not be able to recommend acceptance of this paper for presentation at ICLR 2022.
train
[ "QI3gdCWMfT", "vyS7qBVdN3J", "XM2Jpdjv3Qi", "TStTIpFHY_", "nPg7bAYh4Zb", "TjGLowDuX_", "2BPieWLg_vY", "9KOl19JEwIm", "IVH60NHiUEC", "9ynlKThZ6tK", "XzBgVNnCm9B" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for their thoughtful reply. However, while the aim of the paper is certainly of interest, regretfully the novelty and significance of the results is clearly below bar, in my opinion. Some of these results were known in the literature and the mere proof of the equivalence of the S...
[ -1, -1, -1, -1, -1, -1, -1, 3, 1, 3, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 4 ]
[ "nPg7bAYh4Zb", "IVH60NHiUEC", "IVH60NHiUEC", "XzBgVNnCm9B", "9ynlKThZ6tK", "iclr_2022_Bd8JSwLVWQ5", "9KOl19JEwIm", "iclr_2022_Bd8JSwLVWQ5", "iclr_2022_Bd8JSwLVWQ5", "iclr_2022_Bd8JSwLVWQ5", "iclr_2022_Bd8JSwLVWQ5" ]
iclr_2022_jNsynsmDkl
Contrastively Enforcing Distinctiveness for Multi-Label Classification
Recently, as an effective way of learning latent representations, contrastive learning has become increasingly popular and successful in various domains. The success of contrastive learning in single-label classification motivates us to leverage this learning framework to enhance distinctiveness for better performance in multi-label image classification. In this paper, we show that a direct application of contrastive learning can hardly improve performance in multi-label cases. Accordingly, we propose a novel framework for multi-label classification with contrastive learning in a fully supervised setting, which learns multiple representations of an image under the context of different labels. This facilitates a simple yet intuitive adaptation of contrastive learning into our model to boost its performance in multi-label image classification. Extensive experiments on two benchmark datasets show that the proposed framework achieves state-of-the-art performance in comparison with advanced methods in multi-label classification.
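To make the "multiple representations under the context of different labels" idea concrete, here is a minimal sketch of a label-conditioned supervised contrastive loss: each embedding corresponds to one (image, label) pair, and embeddings conditioned on the same label act as positives. The conditioning scheme, temperature, and masking details are assumptions of this sketch, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def label_conditioned_supcon(z, labels, tau=0.1):
    """z: (N, d) label-conditioned embeddings, one per (image, label)
    pair; labels: (N,) the label each embedding is conditioned on.
    Embeddings sharing a label are treated as positives."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / tau                                 # (N, N) similarities
    self_mask = torch.eye(len(z), dtype=torch.bool)
    pos_mask = labels[:, None].eq(labels[None, :]) & ~self_mask
    logits = sim.masked_fill(self_mask, float('-inf'))    # drop self-similarity
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    per_anchor = -log_prob.masked_fill(~pos_mask, 0.0).sum(1) / pos_mask.sum(1).clamp(min=1)
    return per_anchor[pos_mask.any(1)].mean()             # skip anchors without positives

# Toy usage
z = torch.randn(8, 32, requires_grad=True)
labels = torch.tensor([0, 0, 1, 1, 2, 2, 0, 1])
label_conditioned_supcon(z, labels).backward()
```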
Reject
This is an interesting paper trying to answer how contrastive learning can be used in multi-label classification. The reviewers, however, raised several doubts about the motivation, the novelty, and the impact of the contrastive module on the final results. For many of them the authors delivered satisfying responses, but after a long discussion, we decided that the paper needs revision to improve in these aspects. For example, the authors should make it clear whether the image retrieval application from Section 4.4 is the main motivation of the method. If so, what are the competitive approaches to solving such a problem? How should the performance of such methods be measured? Answers to the above questions are crucial to finding the right motivation for the contrastive module used by the authors. We hope that the authors will follow the recommendations and resubmit the paper to another top conference.
train
[ "bNAPi4kT5r4", "NPnHeuRrA6G", "uOISrMIGyVQ", "jjN7tXuL7K7", "SsBKxnJm3yo", "ipuVY31fVtr", "71ZC1PmX88Y", "hB-MTom2KM_", "vfhgdeosel_", "LTIMhMIhQNf", "PBmrPommGnF", "ln6bOD_uPM", "ekf7IopMtqK" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the valuable and insightful comments, and we understand that additional reviews are usually required to be submitted in a short period of time. Therefore, the reviewer's effort is more appreciated. Our response is as follows:\n\n**1. Novelty**\n\nContrastive learning is a general idea, c...
[ -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 5 ]
[ -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "NPnHeuRrA6G", "iclr_2022_jNsynsmDkl", "hB-MTom2KM_", "vfhgdeosel_", "71ZC1PmX88Y", "iclr_2022_jNsynsmDkl", "ekf7IopMtqK", "ln6bOD_uPM", "PBmrPommGnF", "iclr_2022_jNsynsmDkl", "iclr_2022_jNsynsmDkl", "iclr_2022_jNsynsmDkl", "iclr_2022_jNsynsmDkl" ]
iclr_2022_8IXBbFjkMat
Bag-of-Vectors Autoencoders for Unsupervised Conditional Text Generation
Text autoencoders are often used for unsupervised conditional text generation by applying mappings in the latent space to change attributes to the desired values. Recently, Mai et al. (2020) proposed $\operatorname{Emb2Emb}$, a method to $\textit{learn}$ these mappings in the embedding space of an autoencoder. However, their method is restricted to autoencoders with a single-vector embedding, which limits how much information can be retained. We address this issue by extending their method to $\textit{Bag-of-Vectors Autoencoders}$ (BoV-AEs), which encode the text into a variable-size bag of vectors that grows with the size of the text, as in attention-based models. This allows us to encode and reconstruct much longer texts than standard autoencoders. Analogous to conventional autoencoders, we propose regularization techniques that facilitate learning meaningful operations in the latent space. Finally, we adapt $\operatorname{Emb2Emb}$ for a training scheme that learns to map an input bag to an output bag, including a novel loss function and neural architecture. Our experimental evaluations on unsupervised sentiment transfer and sentence summarization show that our method performs substantially better than a standard autoencoder.
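The abstract mentions a novel loss for mapping an input bag to an output bag but does not spell it out here; as a reference point, any order-invariant set distance can play this role. Below is a minimal Chamfer-distance stand-in between a predicted and a target bag of vectors - purely an illustrative assumption, not the paper's actual loss.

```python
import torch

def chamfer_bag_loss(pred, target):
    """Order-invariant distance between two bags of vectors.
    pred: (M, d) predicted bag; target: (N, d) target bag."""
    d = torch.cdist(pred, target)                 # (M, N) pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

loss = chamfer_bag_loss(torch.randn(5, 64, requires_grad=True), torch.randn(7, 64))
loss.backward()
```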
Reject
The paper addresses unsupervised conditional text generation by extending emb2emb (Mai et al., 2020) with bag-of-vectors autoencoders. Reviewers shared several concerns about the clarity of this paper and its empirical results.
test
[ "plD7OByeLE2", "iIm2YyzxYhI", "AmNKx5jJQS", "cn0OLbnlm_1", "xzKwV-7TMOs", "nbPhohIpyM9", "kwCkdKygF8O", "6sEKd95R4Xp", "ujj64idIDPI", "D3TffAnlW-w", "bSMSJVXOK9d", "3YoFKPkGs4h", "kEeu0YOZ9bA", "Y449MSM1dw" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your addition response. My questions are resolved.", " Thanks for the responses; the writing is now much clearer. Given that the outputs in Table 5 aren't of particularly high quality, I still think that a weak accept is the appropriate score for this paper. ", " Our mapping $\\Phi$ generates the o...
[ -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "xzKwV-7TMOs", "bSMSJVXOK9d", "iclr_2022_8IXBbFjkMat", "iclr_2022_8IXBbFjkMat", "kwCkdKygF8O", "iclr_2022_8IXBbFjkMat", "D3TffAnlW-w", "kEeu0YOZ9bA", "3YoFKPkGs4h", "nbPhohIpyM9", "Y449MSM1dw", "iclr_2022_8IXBbFjkMat", "iclr_2022_8IXBbFjkMat", "iclr_2022_8IXBbFjkMat" ]
iclr_2022_aKZeBGUJXlH
Gradient Broadcast Adaptation: Defending against the backdoor attack in pre-trained models
Pre-trained language models (e.g., BERT, GPT-3) have revolutionized NLP research, and fine-tuning has become the indispensable step of downstream adaptation. However, covert attacks are an emerging threat to the pre-train-then-fine-tune learning paradigm. The backdoor attack is a typical challenge, in which the victim model fails on trigger-activated samples while behaving normally on others. These backdoors can survive the cascading fine-tuning stage, continually threatening the application of pre-trained models. In this paper, we propose a Gradient Broadcast Adaptation (GBA) method that prevents the model from producing attacker-controlled outputs in a trigger-anchor-free manner. We design prompt-based tuning that flexibly accesses rare tokens while providing a fair measure of distance in the word embedding space. The gradient broadcast alleviates lazy updating of potential triggers and purges the underlying abnormal weights. The GBA defense method is evaluated over five text-classification tasks against three state-of-the-art backdoor attacks. We find our method can neutralize nearly 100% of embedded backdoors with negligible performance loss on clean data.
Reject
This paper introduces a defense method (gradient broadcast adaptation) against backdoor attacks on pretrained language models. It proposes to utilize prompt tuning to guide the perturbed weights back to a normal state, which helps avoid the degradation of the model's generalization ability. Strengths: - Experiments are conducted across multiple datasets with different types of backdoor attacks, demonstrating the effectiveness of the proposed approach - The proposed idea is well motivated and intuitive Weaknesses: - Improvement on experiment results seems marginal - Some technical details of the attack setup are unclear - Writing of the paper needs improvement
train
[ "pqNHkii84y3", "Jq7weXsSZIc", "RSmjAdNUX0P", "wgoWffC-eSD", "088ANqVY9Ka", "1grZVzxVBMD", "ue-VzpP-HmM", "3KgBygU9fk", "6zHpX3Tk_lY", "Iaku8O8MGI4" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thanks very much for your feedback! Please feel free to contact us if you have any other questions.", " Dear reviewer,\n\nDo you still have any concerns about our manuscript? We are sincerely looking forward to your further feedback!", " Dear reviewer,\n\nDo you still have any concerns about our manuscript? W...
[ -1, -1, -1, -1, 5, -1, -1, -1, 3, 8 ]
[ -1, -1, -1, -1, 3, -1, -1, -1, 4, 4 ]
[ "wgoWffC-eSD", "ue-VzpP-HmM", "3KgBygU9fk", "1grZVzxVBMD", "iclr_2022_aKZeBGUJXlH", "Iaku8O8MGI4", "088ANqVY9Ka", "6zHpX3Tk_lY", "iclr_2022_aKZeBGUJXlH", "iclr_2022_aKZeBGUJXlH" ]
iclr_2022_hW2kwAcXq5w
Discriminator-Weighted Offline Imitation Learning from Suboptimal Demonstrations
We study the problem of offline Imitation Learning (IL), where an agent aims to learn an optimal expert behavior policy without additional online environment interactions. Instead, the agent is provided with a static offline dataset of state-action-next state transition triples from both optimal and non-optimal expert behaviors. This strictly offline imitation learning problem arises in many real-world settings, where environment interactions and expert annotations are costly. Prior works that address the problem either require that expert data occupy the majority of the offline dataset, or need to learn a reward function and perform offline reinforcement learning (RL) based on the learned reward function. In this paper, we propose an imitation learning algorithm that addresses the problem without additional steps of reward learning and offline RL training, for the case where demonstrations contain a large proportion of suboptimal data. Built upon behavioral cloning (BC), we introduce an additional discriminator to distinguish expert from non-expert data and propose a cooperation strategy to boost the performance of both tasks. This results in a new policy learning objective and, surprisingly, we find it is equivalent to a generalized BC objective, where the outputs of the discriminator serve as the weights of the BC loss function. Experimental results show that the proposed algorithm can learn behavior policies that are much closer to the optimal policies than policies learned by baseline algorithms.
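The headline equivalence - discriminator outputs serving as weights of the BC loss - is easy to state in code. The sketch below uses a discrete-action policy and a joint (state, action) discriminator; the architectures, the detach on the weights, and the absence of the paper's full cooperation terms are all simplifying assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Policy(nn.Module):
    def __init__(self, s_dim=4, n_act=3):
        super().__init__()
        self.net = nn.Linear(s_dim, n_act)
    def forward(self, s):                        # (B, n_act) action log-probs
        return F.log_softmax(self.net(s), dim=1)

class Discriminator(nn.Module):
    def __init__(self, s_dim=4, n_act=3):
        super().__init__()
        self.net = nn.Linear(s_dim + n_act, 1)
    def forward(self, s, a_onehot):              # P((s, a) comes from the expert)
        return torch.sigmoid(self.net(torch.cat([s, a_onehot], dim=1)))

def dwbc_step(policy, disc, s, a, is_expert, n_act=3):
    a_onehot = F.one_hot(a, n_act).float()
    log_pi = policy(s).gather(1, a[:, None]).squeeze(1)
    d = disc(s, a_onehot).squeeze(1)
    # Discriminator: binary classification of expert vs. union data
    disc_loss = F.binary_cross_entropy(d, is_expert.float())
    # Policy: behavioral cloning weighted by the (detached) expert probability
    bc_loss = -(d.detach() * log_pi).mean()
    return disc_loss, bc_loss

s = torch.randn(16, 4); a = torch.randint(0, 3, (16,))
is_expert = torch.randint(0, 2, (16,))
disc_loss, bc_loss = dwbc_step(Policy(), Discriminator(), s, a, is_expert)
```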
Reject
The authors introduce a method for offline imitation learning in the presence of both optimal and non-optimal data. In particular, they propose to learn a discriminator that is then further used to modify the behavior cloning loss, which leads to performance improvements over baselines. The reviews mention that the idea is novel and most sections of the paper are well written and self-explanatory. They do point out, however, several flaws, such as issues with the clarity of the derivation and the thoroughness of the experimental evaluation. While the paper has significantly improved during the rebuttal, its significant changes warrant another round of reviews. I encourage the authors to continue improving the paper, addressing the reviewers' feedback and resubmitting it, as it has the potential to be a strong submission.
train
[ "U7ANH-Fs5R", "Jz5LcFyKBEa", "GAbv0LKyeA", "fwM6JIXcLxt", "EtWi-C51lO", "UGVx2IWntsm", "feSMg1SiSTh", "k5hreyT36w", "CPNQSdL7cbY", "_ptygU9Ui1F", "uYSlaqwyZ8Z", "H4EI4gsO9cm", "UuvOrQ_PdBK", "tnVDn0--L26", "ayEPfgB7ZF2", "u-AdnzduGz", "IN6R1lFy3zz", "gLEWPLo5qkg", "GGpgiVMOOtK", ...
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", ...
[ " Thank you for your comments and feedback. \n\nWe appologize for the lack of clarity in this part. As discussed previously, we need to ensure $\\partial F/\\partial \\theta_\\pi = 0$, by the chain rule, we have\n> $\\frac{\\partial F}{\\partial \\theta_\\pi}=\\frac{\\partial F}{\\partial d}\\cdot \\frac{\\partial ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 3 ]
[ "Jz5LcFyKBEa", "UGVx2IWntsm", "iclr_2022_hW2kwAcXq5w", "EtWi-C51lO", "_ptygU9Ui1F", "CPNQSdL7cbY", "k5hreyT36w", "UuvOrQ_PdBK", "IN6R1lFy3zz", "uYSlaqwyZ8Z", "u-AdnzduGz", "iclr_2022_hW2kwAcXq5w", "GGpgiVMOOtK", "gLEWPLo5qkg", "iclr_2022_hW2kwAcXq5w", "Xpqe7Z1T2Z", "dT2CndhvoPa", "...
iclr_2022__qc3iqcq-ps
On the Evolution of Neuron Communities in a Deep Learning Architecture
Deep learning techniques have been increasingly adopted for classification tasks over the past decade, yet explaining how deep learning architectures can achieve state-of-the-art performance is still an elusive goal. While all the training information is embedded deeply in a trained model, we still do not understand much about its performance by only analyzing the model. This paper examines the neuron activation patterns of deep learning-based classification models and explores whether the models' performance can be explained through the neurons' activation behavior. We propose two approaches: one that models neurons' activation behavior as a graph and examines whether the neurons form meaningful communities, and another that examines the predictability of neurons' behavior using entropy. Our comprehensive experimental study reveals that both the community quality and the entropy can provide new insights into deep learning models' performance, thus paving a novel way of explaining deep learning models directly from the neurons' activation patterns.
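A minimal sketch of the two proposed measurements - community quality of a neuron co-activation graph and entropy of neuron on/off behavior. Thresholded correlations as graph edges, greedy modularity maximization, and binarization at zero are assumptions of this sketch; the paper's exact graph construction may differ.

```python
import numpy as np
import networkx as nx

def community_and_entropy_scores(acts, corr_thresh=0.5):
    """acts: (n_samples, n_neurons) activations recorded on a probe set.
    Returns (modularity of detected communities in the co-activation
    graph, mean per-neuron entropy of binarized on/off behavior)."""
    corr = np.corrcoef(acts.T)                            # neuron-neuron correlations
    adj = (np.abs(corr) > corr_thresh).astype(float)
    G = nx.from_numpy_array(adj)
    G.remove_edges_from(nx.selfloop_edges(G))
    comms = nx.algorithms.community.greedy_modularity_communities(G)
    q = nx.algorithms.community.modularity(G, comms)
    p = np.clip((acts > 0).mean(axis=0), 1e-12, 1 - 1e-12)   # firing probabilities
    ent = float((-(p * np.log2(p) + (1 - p) * np.log2(1 - p))).mean())
    return q, ent

# Toy data with 5 built-in neuron "communities" of 10 neurons each
rng = np.random.default_rng(0)
base = rng.normal(size=(200, 5))
acts = np.repeat(base, 10, axis=1) + 0.1 * rng.normal(size=(200, 50))
print(community_and_entropy_scores(acts))
```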
Reject
Understanding neural networks once they have been trained is a big open problem for machine learning. This manuscript designs graph-theoretic and information-theoretic measures aimed at helping us understand community structure and function in trained networks. In particular, the authors measure community structure (modularity) and entropy for trained networks and relate these to the performance of the networks. The manuscript runs experiments with fully connected networks on problems such as MNIST and CIFAR. Both community structure and entropy measures are shown to correlate (Spearman and Pearson correlation coefficients) with performance metrics in the networks studied. Reviewers tended to agree that the paper was well written and motivated by an interesting and timely question (understanding trained networks). However, on the whole, most of the reviewers believe that the manuscript is too preliminary for publication at ICLR and I agree. A central issue cited by most of the reviewers is that the experiments are performed on small/toy models for small tasks and under particular hyperparameter regimes. It is therefore unclear to what extent the results would generalize to other situations. E.g., would the results hold for larger datasets or for convolutional neural networks? Connected to this complaint, reviewers worry that there is not enough connection to the literature and baseline methods that could be used to predict performance given measures of trained network activity. Even allowing that the observed correlations are true and generalizable, are these measures better than those covered elsewhere in the literature? Additionally problematic, the measures are not theoretically justified either. Thus, we are missing both reasoned arguments for the metrics and robust quantification beyond a limited experimental setting. One reviewer, Xmnm, is compelled by the work and recommends acceptance. However, they do not present a compelling case for acceptance, and even repeat several of the concerns raised by other reviewers. In sum, the work is on an interesting subject and timely, but needs further work to be ready for publication.
test
[ "B5INbHOHODb", "KW6jHFFWq2W", "Rthf0TpeO0x", "1oQMZ9hIGfR", "6fXz6l0XLsh", "lLEbjMP80yX", "TsFgNx0xVn8", "pDR6-1DbZcS", "V0zAsgYksTD", "YIteKrC-X0m" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors very much for their detailed and elaborated response to my comments, questions and concerns. However, I must admit that my feeling that this paper, in its current state, is below the acceptance threshold persists. I encourage researchers to continue researching along these lines, and to try to...
[ -1, -1, -1, -1, -1, -1, 3, 8, 5, 3 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 3, 3 ]
[ "6fXz6l0XLsh", "iclr_2022__qc3iqcq-ps", "TsFgNx0xVn8", "YIteKrC-X0m", "V0zAsgYksTD", "pDR6-1DbZcS", "iclr_2022__qc3iqcq-ps", "iclr_2022__qc3iqcq-ps", "iclr_2022__qc3iqcq-ps", "iclr_2022__qc3iqcq-ps" ]
iclr_2022_RuC5ilX2m6O
Local Patch AutoAugment with Multi-Agent Collaboration
Data augmentation (DA) plays a critical role in improving the generalization of deep learning models. Recent works on automatically searching for DA policies from data have achieved great success. However, existing automated DA methods generally perform the search at the image level, which limits the exploration of diversity in local regions. In this paper, we propose a more fine-grained automated DA approach, dubbed Patch AutoAugment, to divide an image into a grid of patches and search for the joint optimal augmentation policies for the patches. We formulate it as a multi-agent reinforcement learning (MARL) problem, where each agent learns an augmentation policy for each patch based on its content together with the semantics of the whole image. The agents cooperate with each other to achieve the optimal augmentation effect of the entire image by sharing a team reward. We show the effectiveness of our method on multiple benchmark datasets of image classification and fine-grained image recognition (e.g., CIFAR-10, CIFAR-100, ImageNet, CUB-200-2011, Stanford Cars and FGVC-Aircraft). Extensive experiments demonstrate that our method outperforms the state-of-the-art DA methods while requiring fewer computational resources.
Reject
The paper proposes an approach to search for image augmentation policies. The paper formulates this problem as a cooperative multi-agent decision-making problem, which is interesting. The paper received 3 borderline accept and 1 borderline reject ratings. The reviewers originally had multiple concerns regarding the necessity of the RL-based approach, missing references, and additional experiments, and the authors responded to some of the concerns of the reviewers reasonably. However, none of the reviewers ended up strongly supporting the paper, staying with their ratings. The RL formulation of the problem is interesting, but it requires multiple rounds of target network training due to its nature (i.e., it is not an end-to-end approach). The paper misses some details on how exactly the patch-wise RL-based augmentation works, and it requires additional hyperparameters for the selection of patch size and shape. It is also unclear how this RL-based method is conceptually superior to previous augmentation approaches, and the empirical results are not strong enough, as some of the reviewers also pointed out. Although the paper has interesting ideas and the AC also thinks the paper has some merit, the senior AC finds the technical contribution of the paper weaker than the others. We unfortunately need to recommend the rejection of the paper.
test
[ "tNnWjo1xoZq", "hVuxmq_u3ec", "1BqFKavKCjj", "3pVfGXoQDeW", "bPT6v8vux0_", "zC2XvKI5oqM", "Rce_BD-V4so", "zMc15ykKZxA", "_7vX2Z-3OAl", "gXPN-VWub2F", "CHsj1HZGKB", "uGVoeWfa5jR", "7uxrtwXSB2", "CcVFFkAKXVy", "nlQhurPOy_8", "1TQ6xUTdMSn", "3AIGA5R3Ezx" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer CpsG,\n\nWe have not received any reply from you. Considering that Reviewer ci48 just commented *\"I wrongly thought I already replied to your response\"*, we would like to friendly remind you of the discussion in the review period. If you have any concerns or suggestions, please feel free to leave ...
[ -1, 6, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 5, 6 ]
[ -1, 3, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "1TQ6xUTdMSn", "iclr_2022_RuC5ilX2m6O", "1TQ6xUTdMSn", "3AIGA5R3Ezx", "hVuxmq_u3ec", "zMc15ykKZxA", "iclr_2022_RuC5ilX2m6O", "CHsj1HZGKB", "iclr_2022_RuC5ilX2m6O", "iclr_2022_RuC5ilX2m6O", "_7vX2Z-3OAl", "3AIGA5R3Ezx", "1TQ6xUTdMSn", "nlQhurPOy_8", "hVuxmq_u3ec", "iclr_2022_RuC5ilX2m6O...
iclr_2022_q23I9kJE3gA
Conditional set generation using Seq2seq models
Conditional set generation learns a mapping from an input sequence of tokens to a set. Several popular natural language processing (NLP) tasks, such as entity typing and dialogue emotion tagging, are instances of set generation. Sequence-to-sequence models are a popular choice to model set generation but this typical approach of treating a set as a sequence does not fully leverage its key properties, namely order-invariance and cardinality. We propose a novel data augmentation approach that recovers informative orders for labels using their dependence information. Further, we jointly model the set cardinality and output by listing the set size as the first element and taking advantage of the autoregressive factorization used by seq2seq models. Our experiments in simulated settings and on three diverse NLP datasets show that our method improves over strong seq2seq baselines by about 9% on absolute F1 score. We will release all code and data upon acceptance.
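A minimal sketch of the set-to-sequence conversion described above. It orders each target set by descending corpus frequency - since the joint count of a pair is symmetric, p(y_j | y_i) >= p(y_i | y_j) reduces to comparing marginal counts, so this total order is consistent with the conditional-probability precedence rule - and prepends the set size as the first token, matching the cardinality trick in the abstract. The frequency-based total order is a simplified reading of the paper's PMI-based partial order, not its exact recipe.

```python
from collections import Counter

def sets_to_sequences(label_sets):
    """Convert target label sets into token sequences for seq2seq
    training: cardinality first, then labels ordered by descending
    corpus frequency (ties broken lexicographically)."""
    freq = Counter()
    for s in label_sets:
        freq.update(s)
    return [[str(len(s))] + sorted(s, key=lambda y: (-freq[y], y))
            for s in label_sets]

print(sets_to_sequences([{"person", "athlete"}, {"person", "politician", "athlete"}]))
```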
Reject
In this paper, the authors propose a method to generate sets, which are order-invariant, with a sequence-to-sequence model. The main idea is to order the elements of the sets and then treat them as regular sequences. The authors propose to use PMI and conditional probability to obtain a partial order on the elements of sets. Overall, while the reviewers note that the proposed method is simple and intuitive, they also raised concerns about the paper: one of the main concerns is about missing baselines, such as non-seq2seq models for set generation, e.g., binary classification (to predict whether an element should be included or not). For this reason, I recommend rejecting the paper.
train
[ "at67xDIvZk2", "-v7FyDDXxgb", "kN4eroPHsRW", "zVd_n-zfp-", "OlDNlqFCj9B", "XEjNQd-CRWV", "E9MCU3twgzK", "j2XRWMWyG3n", "Fo6KR4cyrSu" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes an approach to set-generation within a seq2seq framework. The authors propose to train a standard seq2seq model by ordering the discrete elements in the target sets (as a sequence) under a partial order defined by taking $y_i < y_j$ if both $y_i$ and $y_j$ have sufficiently high PMI and $p(y_i |...
[ 6, -1, -1, -1, -1, -1, 5, 5, 5 ]
[ 4, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "iclr_2022_q23I9kJE3gA", "iclr_2022_q23I9kJE3gA", "E9MCU3twgzK", "at67xDIvZk2", "j2XRWMWyG3n", "Fo6KR4cyrSu", "iclr_2022_q23I9kJE3gA", "iclr_2022_q23I9kJE3gA", "iclr_2022_q23I9kJE3gA" ]
iclr_2022_AT0K-SZ3QGq
On Heterogeneously Distributed Data, Sparsity Matters
Federated learning (FL) is particularly vulnerable to heterogeneously distributed data, since a common global model in FL may not adapt to the heterogeneous data distribution of each user. To counter this issue, personalized FL (PFL) was proposed to produce dedicated local models for each individual user. However, PFL is far from mature, because existing PFL solutions either demonstrate unsatisfactory generalization towards different model architectures or incur enormous extra computation and memory costs. In this work, we propose federated learning with personalized sparse masks (FedSpa), a novel personalized federated learning scheme that employs personalized sparse masks to customize sparse local models on the edge. Instead of training fully dense PFL models, FedSpa only maintains a fixed number of active parameters throughout training (aka sparse-to-sparse training), which enables users' models to achieve personalization with consistently cheap communication, computation, and memory costs. We theoretically show that with the rise of data heterogeneity, setting a higher sparsity in FedSpa may potentially result in a smaller error bound on its personalized models, which also coincides with our empirical observations. Comprehensive experiments demonstrate that FedSpa significantly saves communication and computation costs, while simultaneously achieving higher model accuracy and faster convergence speed than several state-of-the-art PFL methods.
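A schematic of one communication round with personalized sparse masks, to make the "fixed number of active parameters" idea concrete: each client trains only the coordinates its mask selects, and the server averages each coordinate over the clients that cover it. Mask evolution, local optimizers, and FedSpa's actual aggregation rule are omitted; treat everything here, including the coverage-weighted average, as an assumption of the sketch.

```python
import torch

def sparse_personalized_round(global_w, masks, local_steps):
    """global_w: dict name -> tensor; masks[k]: dict name -> 0/1 tensor
    of the same shapes; local_steps(w_k, k) runs client k's local
    training on its masked weights and returns the updated weights."""
    client_ws = []
    for k, m in enumerate(masks):
        w_k = {n: global_w[n] * m[n] for n in global_w}   # personalized submodel
        client_ws.append(local_steps(w_k, k))
    new_w = {}
    for n in global_w:
        cover = torch.stack([m[n] for m in masks]).sum(0).clamp(min=1)
        new_w[n] = torch.stack([w[n] * m[n]
                                for w, m in zip(client_ws, masks)]).sum(0) / cover
    return new_w

# Toy usage: 3 clients with random 50%-sparse masks and a dummy local update
w = {"fc": torch.randn(4, 4)}
masks = [{"fc": (torch.rand(4, 4) < 0.5).float()} for _ in range(3)]
new_w = sparse_personalized_round(w, masks,
                                  lambda w_k, k: {n: v + 0.1 for n, v in w_k.items()})
```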
Reject
Dear authors, I have carefully read the reviews, rebuttals and the subsequent discussion. The review scores are mixed (5, 5, 6, 6). Let me comment on some of the key issues raised by the reviewers. I will elaborate on some of them with my own insights. 1) You insist that (P4) cannot be regarded as a particular case of (P3). But this is trivially incorrect. The hard constraint in the reformulation (P6) you mention in the discussion can be written as a regularizer: *indicator function* of the constraint set. Indeed, let $\cal C$ be the set of points $W=(w_1,\dots,w_K)$ for which there exists $w$ such that $w_k = m_k^* \circ w$ for all $k$. Then the regularizer defined by ${\cal R}(W) = 0$ if $W\in {\cal C}$ and ${\cal R}(W) = +\infty$ otherwise does the job. This is a well-defined regularizer. Such regularizers are routinely used in optimization to model hard constraints. So, the formulation you consider is a special case of (P3). Moreover, as pointed out by Reviewer Zg2F, and acknowledged by the authors, "The idea of using sparse masks to model personalization for federated learning is not novel in this work. Prior works utilize this idea with other techniques (Li et al., 2020) (Vahidian et al., 2021). Moreover, several side-benefits such as low communication cost, cheaper computation, and fewer memory requirements should also be attributed to those original works where sparse masks are used, and the same side-benefits of sparsity were mentioned." The claim that one of the novel contributions of FedSpa is "we formulate a clear optimization problem for FedSpa" is weak, especially in the light of the above comment, and the "moral" existence of the formulation in prior work, albeit not expressed in mathematical notation. The fact that previous works did not formulate this properly is a major issue with those works, and not a major contribution of this work. A clear mathematical formulation of what one wants to achieve should be a standard requirement. In any case, I appreciate the clarity nevertheless. 2) The same reviewer states that the key idea of the paper that differs from the above two mentioned papers is how the sparse masks are handled. One of the two ideas proposed is trivial and is equivalent to standard non-personalized FL (if all masks are the same, the submodels they define can be considered a global model). The second idea does not seem to have any interesting/distinctive theoretical support. 3) Sparse-to-sparse training in FedSpa may be novel, but the claim that "the masks continue to evolve (towards the optimal masks) in the training process" is not supported by theory or experiments. If indeed you can show that the local masks evolve to some meaningful notion of an optimal mask, this would be interesting. 4) I also agree with the other points raised by this reviewer. I have read the author response to these comments. (BTW: Language such as "you bet" is inappropriate). While some of them make sense, they do not reduce the severity of the concerns by a large enough margin. 5) The comment about the weakness of the main theorem is particularly concerning. Indeed, the main theorem may be vacuous, and the authors need to do a thorough explanation of the result and its importance (on its own and in comparison with existing literature and rates). I do not believe such a comparison could be advantageous to the proposed method, though. The expressions are complicated. It seems that for any meaningful mask size, the non-vanishing term will be too large.
The theorem is not a valid convergence result as the authors do not show that the right hand side can indeed be provably made arbitrarily small by some choice of the parameters of the method. For instance, it is not guaranteed that $dist(m_{k,t}, m_k^*)$ will converge to zero. In this sense, calling this theorem "Convergence of personalized models" is incorrect and misleading. This is a fatal issue, unfortunately. The authors should make it absolutely clear that the result does not prove convergence. 6) Assumptions 1, 2, and 4 are very strong. For example, Assumption 2 is not provably satisfied for lower bounded nonconvex smooth functions when subsampling (=minibatching) is used to produce the stochastic gradient. Assumption 3 is also quite strong: it is not satisfied by convex quadratics. Assumption 1 is also strong - most recent works on FL do not require any similarity assumptions. In summary, while this direction of research is interesting, the level of contributions in this work is marginal at best. The key theoretical result is misleading in that it does not imply convergence while it is marketed as such. Moreover, strong assumptions (relative to what is achieved in the latest papers) are used to obtain it. Because of these concerns, and other concerns raised by the reviewers, I do not have any other choice but to reject the paper. Area Chair
val
[ "u5LJltoWpg", "eVoUZ29F23d", "0xs54ncXBtT", "MC_IdE0RS3N", "Alp5lsDuJfC", "pH8_FRvukU", "8GqgMElxxCA", "MK7MxtaULM_", "g0ZYRUgf-ka", "18KscIs07V_", "0Gdbi5-h8XB", "8kv_uKyAQqu", "FkOe-gzSg-b", "LiyvbAhlvSI", "WSQL-CLyn3q", "OwbFTfOm2mJ", "5bwYf9wvvzR", "dRdEVgA9Qop", "7u5KwBJJEKe...
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official...
[ " Thanks for the critical comments from this reviewer. Still, the authors would like to make the following clarification on the mentioned issues:\n\n**Insufficient novelty on the SPFL problem and no consensus on the ultimate PFL problem:** Firstly, we admit that our SPFL problem is somehow similar to the idea descr...
[ -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, 5, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "eVoUZ29F23d", "LiyvbAhlvSI", "iclr_2022_AT0K-SZ3QGq", "0Gdbi5-h8XB", "iclr_2022_AT0K-SZ3QGq", "FkOe-gzSg-b", "0Gdbi5-h8XB", "Rx1aMF36e8A", "FkOe-gzSg-b", "FkOe-gzSg-b", "iclr_2022_AT0K-SZ3QGq", "iclr_2022_AT0K-SZ3QGq", "uhRhcYetD2s", "Rx1aMF36e8A", "0Gdbi5-h8XB", "Rx1aMF36e8A", "0Gd...
iclr_2022_I_RLPhVUfw8
Dense Gaussian Processes for Few-Shot Segmentation
Few-shot segmentation is a challenging dense prediction task, which entails segmenting a novel query image given only a small annotated support set. The key problem is thus to design a method that aggregates detailed information from the support set, while being robust to large variations in appearance and context. To this end, we propose a few-shot segmentation method based on dense Gaussian process (GP) regression. Given the support set, our dense GP learns the mapping from local deep image features to mask values, capable of capturing complex appearance distributions. Furthermore, it provides a principled means of capturing uncertainty, which serves as another powerful cue for the final segmentation, obtained by a CNN decoder. Instead of a one-dimensional mask output, we further exploit the end-to-end learning capabilities of our approach to learn a high-dimensional output space for the GP. Our approach sets a new state-of-the-art for both 1-shot and 5-shot FSS on the PASCAL-5$^i$ and COCO-20$^i$ benchmarks, achieving an absolute gain of $+14.9$ mIoU in the COCO-20$^i$ 5-shot setting. Furthermore, the segmentation quality of our approach scales gracefully when increasing the support set size, while achieving robust cross-dataset transfer.
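The core computational step - GP regression from support-pixel features to mask encodings, producing a posterior mean and variance per query pixel for the decoder - is standard and can be sketched with numpy. The RBF kernel, unit prior variance, and Gaussian likelihood are assumptions of this sketch; the paper learns the feature space end-to-end.

```python
import numpy as np

def gp_mask_posterior(f_s, y_s, f_q, sigma2=1e-2, ell=1.0):
    """f_s: (n, d) support pixel features; y_s: (n, c) mask encodings at
    those pixels; f_q: (m, d) query pixel features. Returns the GP
    posterior mean (m, c) and variance (m,) at the query pixels."""
    def rbf(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / ell ** 2)
    K = rbf(f_s, f_s) + sigma2 * np.eye(len(f_s))     # noisy support kernel
    K_qs = rbf(f_q, f_s)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_s))
    mean = K_qs @ alpha                               # (m, c) posterior mean
    v = np.linalg.solve(L, K_qs.T)                    # (n, m)
    var = 1.0 - (v ** 2).sum(0)                       # prior variance is 1
    return mean, var

mean, var = gp_mask_posterior(np.random.randn(50, 16),
                              np.random.randn(50, 8),
                              np.random.randn(30, 16))
```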
Reject
This paper proposes the use of Gaussian process regression embedded into a neural network architecture for few-shot segmentation. In more detail, support and query images and support masks are fed through their encoders, and the corresponding features are then used for Gaussian process regression to infer the distribution of the query mask encoding given the support set and the query images. The mean and the variance characterizing the GP predictive distribution are then fed into a CNN-based decoder to make the final prediction (segmentation). The method is evaluated on the PASCAL-5^i and COCO-20^i datasets, showing the superiority of the proposed approach over several competitive baselines. Overall, the reviewers found the approach of using GPs within the proposed architecture interesting and somewhat significant and novel to the few-shot segmentation community. Technically, the proposed method does not develop a new algorithm and simply uses standard Gaussian process regression. The authors seemed to have addressed several concerns raised by the reviewers, including the ablation study evaluating the influence of the GP module. However, the reviewers felt that there were quite a few changes/clarifications to the paper and new results that were not highlighted in the revised version, which made it difficult to provide a new assessment of the paper. Furthermore, the reviewers also thought that the authors did not provide convincing explanations of the improvements from 1-shot to 5-shot, the not-so-good results when the model was trained with standard SGD without loss weighting, and the rationale behind the success of the 5-shot setting.
train
[ "22tSL8DwQSa", "0OtZO9ECT4G", "4SE9E0kLRaB", "cjHTiuJWNuo", "i5--bu61MRV", "2aWF4tbpPuD", "g99LuFQVLxW", "XZxw_pSqbA9", "yKTfsJy6S-", "geuCEvwcjCK", "SzXbOOjQULT", "AjGYEAOMc48", "8LFfyQ_GD36", "YHfvl9qbhBI" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We have addressed the reviewers’ concerns and posted a summary under each reviewer. If there are any further questions, we are happy to answer them.", " Dear Reviewer Kjwm,\n \nThanks for your valuable feedback! We address all of your concerns and a summary of our detailed response is provided below. We humbly ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4, 5 ]
[ "iclr_2022_I_RLPhVUfw8", "YHfvl9qbhBI", "8LFfyQ_GD36", "AjGYEAOMc48", "SzXbOOjQULT", "YHfvl9qbhBI", "iclr_2022_I_RLPhVUfw8", "8LFfyQ_GD36", "AjGYEAOMc48", "SzXbOOjQULT", "iclr_2022_I_RLPhVUfw8", "iclr_2022_I_RLPhVUfw8", "iclr_2022_I_RLPhVUfw8", "iclr_2022_I_RLPhVUfw8" ]
iclr_2022_NqDLrS73nG
Transliteration: A Simple Technique For Improving Multilingual Language Modeling
While impressive performance on natural language processing tasks has been achieved for many languages by transfer learning from large pretrained multilingual language models, it is limited by the unavailability of large corpora for most languages and the barrier of different scripts. Script differences force the tokens of two languages to be separated at the input. We thus hypothesize that transliterating all the languages to the same script can improve the performance of language models. The languages of South Asia and Southeast Asia present a unique opportunity for testing this hypothesis, as almost all of the major languages in this region have their own script. Nevertheless, it is possible to transliterate them to a single representation easily. We validate our hypothesis empirically by pretraining ALBERT models on the Indo-Aryan languages available in the OSCAR corpus and measuring the models' performance on the Indo-Aryan subset of the IndicGLUE benchmark. Compared to the non-transliteration-based model, the transliteration-based model (termed XLM-Indic) shows significant improvement on almost all tasks of IndicGLUE. For example, XLM-Indic performed better on News Classification (0.41%), Multiple Choice QA (4.62%), NER (6.66%), and Cloze-Style QA (3.32%). In addition, XLM-Indic establishes new SOTA results for most tasks on the IndicGLUE benchmark while being competitive on the rest. Across the tasks of IndicGLUE, the most underrepresented languages seem to gain the most improvement. For instance, for NER, XLM-Indic achieves 10%, 35%, and 58.5% better F1-scores on the Gujarati, Panjabi, and Oriya languages compared to the current SOTA.
Reject
The authors show that it is possible to overcome the script barrier in MLLMs by using transliteration. In effect, they show that transliterating all text to a single script improves performance for low-resource languages. They also provide additional analysis in the form of statistical tests and cross-lingual representation analysis to substantiate their claims. The main concerns raised by the reviewers are: (i) lack of novelty: the idea of using transliteration has been extensively studied in the context of NMT and speech. It has also been studied in the context of MLLMs by some recent work (which can be considered contemporary). IMO, this is a concern. (ii) focus on Indic languages: some concerns were raised about the broader applicability of the techniques presented in the paper (personally, I disagree with this concern as Indic languages are important - for example, there are numerous papers which only report results on En-De, En-Ru translation) (iii) limited evaluation: the technique is evaluated only using the ALBERT model and other configurations (such as ROBERTA, XLM, etc.) are not considered. IMO, it would have helped if the authors presented results on these models also (at least we would know if transliteration only helps in the case of small/compact models or even in the case of large models) (iv) missing references: there is a large body of related work on NMT, speech, etc., which the authors had missed in their initial draft. This has been rectified in the updated version. The reviewers did participate in the discussion with the meta-reviewer (not with the authors though) and even after looking at the revised draft mentioned that the novelty is limited. To summarise my views, I think the initial draft of the paper did need improvements and the final draft is a significantly improved version of the initial draft. However, I still feel the novelty is missing. Even the empirical novelty claimed by the authors is lacking due to the use of a single model (ALBERT).
train
[ "dKg6UBstBMR", "uUhsBQxGbeM", "dUGobICxXdp", "GPIIEgj-FkF", "f54X-3G6up1", "gjVgbp_W7rO", "c3REnf35t58", "XoXcNBG-JI0", "VPqW_h6J1yI", "WmWuFkjAXK4", "qaJv1J2QaGy", "3haCillax6", "-PnCMlbu2zr", "RovGAh29KoL", "X1nKUtwj5y2", "dz4gGY3QBBe", "9nFvo6ibn_5", "Q55SdWw0f9v", "Y2GOVlBagn...
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ...
[ " We thank the reviewer for their suggestion. Nevertheless, it is discouraging that the score remained the same due to novelty only, even after significant improvement to the paper as recognized by the reviewer. ", "This paper uses transliteration to build better multilingual language models. This is particularly...
[ -1, 5, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 3 ]
[ -1, 4, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 5 ]
[ "uUhsBQxGbeM", "iclr_2022_NqDLrS73nG", "f54X-3G6up1", "WmWuFkjAXK4", "iclr_2022_NqDLrS73nG", "RovGAh29KoL", "XoXcNBG-JI0", "VPqW_h6J1yI", "iclr_2022_NqDLrS73nG", "f54X-3G6up1", "3haCillax6", "WQX3RjHabcD", "gjVgbp_W7rO", "uUhsBQxGbeM", "dz4gGY3QBBe", "gDxkuElvrLj", "HdOudkm054d", "...
iclr_2022_8QE3pwEVc8P
Zero-Cost Operation Scoring in Differentiable Architecture Search
Differentiable neural architecture search (NAS) has attracted significant attention in recent years due to its ability to quickly discover promising architectures of deep neural networks even in very large search spaces. Despite its success, many differentiable NAS methods lack robustness and may degenerate to trivial architectures with excessive parameter-free operations such as skip connections, thus leading to inferior performance. In fact, selecting operations based on the magnitude of architectural parameters was recently proven to be fundamentally wrong, showcasing the need to rethink how operation scoring and selection occur in differentiable NAS. To this end, we formalize and analyze a fundamental component of differentiable NAS: local "operation scoring" that occurs at each choice of operation. When comparing existing operation scoring functions, we find that existing methods can be viewed as inexact proxies for accuracy. We also find that existing methods perform poorly when analyzed empirically on NAS benchmarks. From this perspective, we introduce new training-free proxies to the context of differentiable NAS, and show that we can significantly speed up the search process while improving accuracy on multiple search spaces. We take inspiration from zero-cost proxies that were recently studied in the context of sample-based NAS but shown to degrade significantly for larger search spaces like DARTS. Our novel "perturbation-based zero-cost operation scoring" (Zero-Cost-PT) improves search time and accuracy compared to the best available differentiable architecture search for many search space sizes, including very large ones. Specifically, we are able to improve accuracy compared to the best current method (DARTS-PT) on the DARTS CNN search space while being over 40x faster (total search time 25 minutes on a single GPU). Our code is available at: https://github.com/avail-upon-acceptance.
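A sketch of the two ingredients named in the abstract: a synflow-style zero-cost proxy and its use to score candidate operations on an edge. `build_subnet` is a hypothetical helper (not from the paper's code) that returns the network obtained by keeping a given op on the edge under consideration; batch-norm handling and the iterative edge ordering of the full Zero-Cost-PT procedure are omitted.

```python
import torch

def synflow_score(model, input_shape):
    """Synflow-style zero-cost proxy: push an all-ones input through the
    model with absolute weights and sum |weight * gradient|."""
    signs = {n: torch.sign(p.data) for n, p in model.named_parameters()}
    for p in model.parameters():
        p.data = p.data.abs()
    model.eval()
    model.zero_grad()
    model(torch.ones(1, *input_shape)).sum().backward()
    score = sum((p * p.grad).abs().sum().item()
                for p in model.parameters() if p.grad is not None)
    for n, p in model.named_parameters():        # restore original weights
        p.data = p.data * signs[n]
    return score

def score_edge_ops(build_subnet, candidate_ops, input_shape):
    """Keep the op on one edge whose discretized subnetwork gets the
    highest zero-cost score."""
    return max(candidate_ops,
               key=lambda op: synflow_score(build_subnet(op), input_shape))

# Toy usage: candidate "ops" are small MLPs of different widths
ops = {w: torch.nn.Sequential(torch.nn.Linear(8, w), torch.nn.ReLU(),
                              torch.nn.Linear(w, 2)) for w in (4, 16, 64)}
best = score_edge_ops(lambda op: ops[op], list(ops), input_shape=(8,))
```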
Reject
This paper received scores of 5, 5, 6, 8. The reviewer giving a score of 8 stated that they would have given a 7, but that is not an option in the system. The other reviewer giving an acceptance score mentioned that they would also be OK with a rejection. The details of the assessment are thus less enthusiastic than could be assumed from an overall average score of 6. I am therefore weakly recommending rejection. The main criticisms of the reviewers are lack of novelty, lack of deeper analyses that really provide insights into why zero-cost operation scoring works, and the small number of NAS benchmarks tested. Out of these, personally, I would not criticize a lack of novelty, since it is not trivial to put together zero-cost and one-shot methods and the results appear promising. However, even the most positive reviewer criticized that the work focuses on NAS-Bench-201 heavily (which is particularly problematic given that NAS-Bench-201 uses a fixed wiring and only allows the choice of operations; this may make the proposed method particularly applicable). During the rebuttal, the authors added NAS-Bench-1shot1, which is a very good step, but the proposed technique does not actually work as well there. While this may be due to the special nature of operations in the nodes rather than on the edges for NAS-Bench-1shot1, for a revision, it would be good to add additional experiments on further NAS benchmarks in order to allow for a better understanding of the circumstances under which the proposed method works well. In particular, it would be interesting to see how well the method works on a quite different search space, such as that of MobileNet.
test
[ "0BT1sk9XllV", "DLlMpC_zR3l", "L9Z4PhK0yJC", "jjF8FKmXT_H", "5WdzX4vasYg", "PecIgRMOyEZ", "ahf0brbPsC", "gN1Q1-WQweJ", "pjBFWiLvqCW", "MN3W3j-2pne", "DHw2o4PVGJX", "n7qwocRE0e", "e4dyZM_WBd1", "HAI0vJnx_hM", "L-z75e1BmW", "68Wq3y_vxPe" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for this comment. We agree that, in general, it is unlikely to have a universal \"golden score\" that performs well across all datasets for all tasks. However, in the context of our work, we disagree with the statement that \"there is no clear sign of how to select a zero-order proxy\".\nBef...
[ -1, -1, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, 6, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "DLlMpC_zR3l", "DHw2o4PVGJX", "pjBFWiLvqCW", "gN1Q1-WQweJ", "PecIgRMOyEZ", "n7qwocRE0e", "iclr_2022_8QE3pwEVc8P", "e4dyZM_WBd1", "HAI0vJnx_hM", "ahf0brbPsC", "L-z75e1BmW", "68Wq3y_vxPe", "ahf0brbPsC", "iclr_2022_8QE3pwEVc8P", "iclr_2022_8QE3pwEVc8P", "iclr_2022_8QE3pwEVc8P" ]
iclr_2022_vxlAHR9AyZ6
$\alpha$-Weighted Federated Adversarial Training
Federated Adversarial Training (FAT) helps us address data privacy and governance issues while maintaining model robustness to adversarial attacks. However, the inner-maximization optimization of Adversarial Training can exacerbate the data heterogeneity among local clients, which aggravates the pain points of Federated Learning. As a result, the straightforward combination of the two paradigms shows performance deterioration, as observed in previous works. In this paper, we introduce an $\alpha$-Weighted Federated Adversarial Training ($\alpha$-WFAT) method to overcome this problem, which relaxes the inner maximization of Adversarial Training into a lower bound friendly to Federated Learning. We present a theoretical analysis of this $\alpha$-weighted mechanism and its effect on the convergence of FAT. Empirically, extensive experiments are conducted to comprehensively understand the characteristics of $\alpha$-WFAT, and the results on three benchmark datasets demonstrate that $\alpha$-WFAT significantly outperforms FAT under different adversarial learning methods and federated optimization methods.
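The relaxation described in the abstract - replacing the pure inner maximization with a lower bound - can be read, in its simplest form, as an $\alpha$-weighted mixture of adversarial and clean losses. The sketch below uses PGD with common defaults; the paper's exact weighting scheme, schedule, and bound may differ, so treat this as an illustrative stand-in.

```python
import torch
import torch.nn.functional as F

def alpha_weighted_at_loss(model, x, y, alpha=0.7, eps=8/255, steps=10):
    """alpha-weighted adversarial-training objective on one local batch;
    assumes inputs x lie in [0, 1]."""
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):                        # PGD inner maximization
        loss = F.cross_entropy(model(x + delta), y)
        g, = torch.autograd.grad(loss, delta)
        delta = (delta + 2.5 * eps / steps * g.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    adv = F.cross_entropy(model((x + delta).detach().clamp(0, 1)), y)
    clean = F.cross_entropy(model(x), y)
    return alpha * adv + (1 - alpha) * clean      # relaxed lower-bound objective
```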
Reject
This manuscript proposes and analyzes a weighting approach to improve the compatibility of adversarial training with federated learning. The authors observe that adversarial training seems to degrade during the late stages of training, and suggest that this degradation is a consequence of exacerbated cross-device bias in federated averaging. They suggest and analyze a weighting scheme to fix this issue. During the review, the main concerns were related to the novelty of the work compared to existing work, the clarity of the technical contributions, and unclear technical statements. The authors responded to these concerns and partially satisfied the reviewers. After discussion, reviewers remain mixed, with multiple weak rejects and one strong accept. No fatal flaws are noted. The opinion of the area chair is that while there are no fatal flaws, there is very limited enthusiasm for this paper. This limited enthusiasm seems to result from explanations of the observed phenomena that reviewers found incorrect or insufficient. Overall, I think this paper outlines and addresses an interesting issue of real concern. Flaws in the intuition building/explanation and issues with the clarity of presentation need to be improved for this work to have some impact.
test
[ "v1jC_2gmHcj", "GIaYiackKDF", "gG-fyTP8Wz", "60hyPuIqK5o", "oDzfTcwyLL", "RuUFL5sDh3-", "1ujkU5XaXUH", "jAXI5vlfx_B", "2xq_PDXVlW", "uJ603tlbrL", "Qiv6mYemxLm", "NSgHMHlSuCe", "wRrQYHx_jWK", "KaCjLziJ3Q", "f1Jjyo76wrC", "lJ-ln0oy2WK", "Nqjb0TyvusW", "oHbH5q4oE4", "h8E9kBjtKr6", ...
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " > Q1: The bias-variance explanation of the proposed low-bound relaxation is not clear. How are the bias and variance defined in this case? Why would introducing such a bias improve the model performance from the perspective of bias-variance trade-off? \n\n**A1:** In terms of the combination of adversarial trainin...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 5, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 3, 4 ]
[ "GIaYiackKDF", "oDzfTcwyLL", "52ghZVMX56D", "h8E9kBjtKr6", "oHbH5q4oE4", "52ghZVMX56D", "h8E9kBjtKr6", "oHbH5q4oE4", "Nqjb0TyvusW", "52ghZVMX56D", "52ghZVMX56D", "h8E9kBjtKr6", "oHbH5q4oE4", "h8E9kBjtKr6", "oHbH5q4oE4", "oHbH5q4oE4", "iclr_2022_vxlAHR9AyZ6", "iclr_2022_vxlAHR9AyZ6"...
iclr_2022_2PSrjVtj6gU
Graph Attention Multi-layer Perceptron
Recently, graph neural networks (GNNs) have achieved great success in many graph-based applications. However, most GNNs suffer from a critical issue: the learned representation is constructed from a fixed k-hop neighborhood and is insensitive to the individual needs of each node, which greatly hampers the performance of GNNs. To satisfy the unique needs of each node, we propose a new architecture -- Graph Attention Multi-Layer Perceptron (GAMLP). This architecture combines multi-scale knowledge and learns to capture the underlying correlations between different scales of knowledge with two novel attention mechanisms: recursive attention and Jumping Knowledge (JK) attention. Instead of using node features only, the knowledge within node labels is also exploited to reinforce the performance of GAMLP. Extensive experiments on 12 real-world datasets demonstrate that GAMLP achieves state-of-the-art performance while enjoying high scalability and efficiency.
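A schematic of the recursive attention idea over precomputed multi-hop features: the gate for hop k is computed from the running combination so far, so each node adaptively decides how much of each propagation depth to absorb. The gating network, the sigmoid combination, and the toy dense "adjacency" are assumptions of this sketch; GAMLP's full model adds MLP backbones, the JK-attention variant, and label utilization.

```python
import torch
import torch.nn as nn

class RecursiveHopAttention(nn.Module):
    """Per-node gated combination of [X, A_hat X, ..., A_hat^K X]."""
    def __init__(self, d):
        super().__init__()
        self.gate = nn.Linear(2 * d, 1)
    def forward(self, hop_feats):                 # list of (N, d) tensors
        combined = hop_feats[0]
        for x_k in hop_feats[1:]:
            w = torch.sigmoid(self.gate(torch.cat([combined, x_k], dim=1)))
            combined = (1 - w) * combined + w * x_k   # per-node recursive gate
        return combined

# Toy usage with precomputed propagated features
N, d, K = 100, 32, 3
adj = torch.softmax(torch.randn(N, N), dim=1)     # stand-in for normalized A_hat
X = torch.randn(N, d)
hops = [X]
for _ in range(K):
    hops.append(adj @ hops[-1])                   # feature propagation, no training
out = RecursiveHopAttention(d)(hops)
```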
Reject
The paper proposes an improvement to graph-based neural networks by improving their attention mechanism (introducing recursive attention and jumping knowledge attention) to flexibly attend to a node's neighborhood. The paper shows solid experimental results over competitive baselines, as acknowledged by the reviewers. The reviewers agree that the paper is clearly written, but overall have issues with the novelty of the approach. The paper combines multiple components (last residual connection module, improved attention mechanism) to show gains, but none of the pieces is very new.
train
[ "L0Sjj8yAXpF", "bnldNyYMk8G", "dafgN2l8MPj", "ffj2E3RIJL1", "VBt8JF1SGhC", "ksPY_h_M7bY", "vuOdHSoafjr", "TabGf3sa13y", "_2uaD81goQJ", "CDo58zUix6h", "Myjpzuub9r", "PNclOlDpUee", "WtFS1R20n96", "mIeM2aWXISv", "WDrOa_gIMjF" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposes an efficient method for inductive/transductive node labeling based on fast feature propagation, label propagation, and attention and yields good experimental results. The paper introduces an efficient way to perform node labeling tasks on inductive and transductive setting. It achieves good resu...
[ 6, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6 ]
[ 3, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2022_2PSrjVtj6gU", "L0Sjj8yAXpF", "mIeM2aWXISv", "iclr_2022_2PSrjVtj6gU", "iclr_2022_2PSrjVtj6gU", "L0Sjj8yAXpF", "ksPY_h_M7bY", "_2uaD81goQJ", "WDrOa_gIMjF", "mIeM2aWXISv", "PNclOlDpUee", "WtFS1R20n96", "ffj2E3RIJL1", "iclr_2022_2PSrjVtj6gU", "iclr_2022_2PSrjVtj6gU" ]
iclr_2022_jKzjSZYsrGP
SCformer: Segment Correlation Transformer for Long Sequence Time Series Forecasting
Long-term time series forecasting is widely used in real-world applications such as financial investment, electricity management and production planning. Recently, transformer-based models with strong sequence modeling ability have shown potential in this task. However, most of these methods adopt point-wise dependency discovery, whose complexity increases quadratically with the length of the time series and easily becomes intractable for long-term prediction. This paper proposes a new Transformer-based model called SCformer, which replaces the canonical self-attention with an efficient segment correlation attention (SCAttention) mechanism. SCAttention divides a time series into segments according to the implicit series periodicity and utilizes correlations between segments to capture long- and short-term dependencies. In addition, we design a dual task that restores the past series from the predicted future series to make SCformer more stable. Extensive experiments on several datasets in various fields demonstrate that our SCformer outperforms other Transformer-based methods and that training with the additional dual task enhances the generalization ability of the prediction model.
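A minimal sketch of the segment-level attention idea: split the series into length-p segments and attend between segment embeddings, so cost scales with (L/p)^2 rather than L^2. The paper derives the segment length from an estimated periodicity; here it is a plain hyperparameter, and all layer names are illustrative.

```python
import torch
import torch.nn as nn

class SegmentCorrelationAttention(nn.Module):
    """Attention over series segments instead of individual time points."""
    def __init__(self, d_model, seg_len):
        super().__init__()
        self.p = seg_len
        self.qkv = nn.Linear(d_model * seg_len, 3 * d_model)
        self.out = nn.Linear(d_model, d_model * seg_len)

    def forward(self, x):                                 # x: [B, L, d], L divisible by p
        B, L, d = x.shape
        segs = x.reshape(B, L // self.p, self.p * d)      # [B, S, p*d] segment embeddings
        q, k, v = self.qkv(segs).chunk(3, dim=-1)         # each [B, S, d]
        attn = torch.softmax(q @ k.transpose(1, 2) / d ** 0.5, dim=-1)  # [B, S, S]
        y = self.out(attn @ v)                            # [B, S, p*d]
        return y.reshape(B, L, d)
```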
Reject
The paper proposes a Transformer-based model called SCformer to perform long sequence time series forecasting by computing efficient segment correlation attention. The reviewers think the method lacks novelty and the experiments need a detailed ablation study.
train
[ "ol0C9VFVRaI", "uJ9bkLsFuFh", "shtJ6PA6OVS", "CH4s1MJ-Gyt", "o2Ej6wsB4bu", "K_yLjLzhERW", "lzxHTMphqL", "dE40u8ZFeRm", "sHEdzpAhp-t", "HHXBpLj_wYG", "d6bTUsRMGOw", "RZ-3dkodfXd", "kqGhJnNGrXH", "cNZomURTbia", "igqesXD_KC4", "GNq12NC8ilU", "6LaA-TB07ck", "UyYYnvGvdWk", "Kx936DwL_-...
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I just found the Informer paper's authors have updated the experiment results of all methods due to the change in data scaling. But we are referring to the second version which is the version received by AAAI (i.e. https://arxiv.org/pdf/2012.07436v2.pdf), not the latest version (i.e. https://arxiv.org/pdf/2012.07...
[ -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 5 ]
[ -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "uJ9bkLsFuFh", "HHXBpLj_wYG", "K_yLjLzhERW", "K_yLjLzhERW", "K_yLjLzhERW", "iclr_2022_jKzjSZYsrGP", "lGzo4KC2mh1", "lGzo4KC2mh1", "lGzo4KC2mh1", "lGzo4KC2mh1", "UyYYnvGvdWk", "Kx936DwL_-T", "lGzo4KC2mh1", "Kx936DwL_-T", "Kx936DwL_-T", "UyYYnvGvdWk", "UyYYnvGvdWk", "iclr_2022_jKzjSZ...
iclr_2022_b8mo34uDObn
Ensembles and Cocktails: Robust Finetuning for Natural Language Generation
When finetuning a pretrained language model for natural language generation tasks, one is currently faced with a tradeoff. Lightweight finetuning (e.g., prefix-tuning, adapters), which freezes all or most of the parameters of the pretrained model, has been shown to achieve stronger out-of-distribution (OOD) performance than full finetuning, which tunes all of the parameters. However, lightweight finetuning can underperform full finetuning in-distribution (ID). In this work, we present methods to combine the benefits of full and lightweight finetuning, achieving strong performance both ID and OOD. First, we show that an ensemble of the lightweight and full finetuning models achieves the best of both worlds: performance matching the better of full and lightweight finetuning, both ID and OOD. Second, we show that we can achieve similar improvements using a single model instead of two with our proposed cocktail finetuning, which augments full finetuning via distillation from a lightweight model. Finally, we provide some explanatory theory in a multiclass logistic regression setting with a large number of classes, describing how distillation on ID data can transfer the OOD behavior of one model to another.
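The cocktail-finetuning objective described in the abstract, full finetuning augmented with distillation from a lightweight-finetuned model, can be sketched as a simple combined loss. The mixing weight alpha and temperature T are illustrative hyperparameters, not values from the paper.

```python
import torch
import torch.nn.functional as F

def cocktail_loss(student_logits, teacher_logits, labels, alpha=0.5, T=1.0):
    """Full-finetuning (student) loss plus distillation from a frozen,
    lightweight-finetuned teacher, as a hedged sketch of the idea."""
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                  F.log_softmax(teacher_logits / T, dim=-1),
                  log_target=True, reduction="batchmean") * T * T
    return (1 - alpha) * ce + alpha * kd
```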
Reject
This paper presents a method for ensembling light fine-tuning methods and full fine-tuning methods to achieve better performance both in-domain and out-of-domain distributions. As authors agree, similar idea has been explored in the computer vision literature. The reviewers like the overall idea of the paper, but they all had some concerns regarding the experiments. The reviewers provide valuable feedback on how to improve the experiments, potentially running the same idea on more datasets and tasks, provide more analyses and discussions on how to understand the results.
val
[ "bCHMl2Q9OWZ", "C5R_JqgIJd", "VwDT2CBuhlw", "7pSk5QDtkY", "AD_QlTZAWx", "SolQYDhvc7V", "Ym-GUDplZ6b", "KjGZ_JwkKrJ", "hNBLjh88RO5", "pTiyXW61d_6", "zxtNxFWqkgA", "03xXSi1oFRm", "27tPsVQ6La_", "wMzCrD-Obgk" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " > self-training is definitely easy to apply to sequence generation tasks. \n\nIf we had access to unlabeled data, we agree that self-training could be applied. However, in our paper we do not have access to any extra unlabeled data. Self-training on the original training data would be very similar to our distilla...
[ -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, 5 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "AD_QlTZAWx", "iclr_2022_b8mo34uDObn", "KjGZ_JwkKrJ", "Ym-GUDplZ6b", "pTiyXW61d_6", "iclr_2022_b8mo34uDObn", "C5R_JqgIJd", "wMzCrD-Obgk", "27tPsVQ6La_", "zxtNxFWqkgA", "03xXSi1oFRm", "iclr_2022_b8mo34uDObn", "iclr_2022_b8mo34uDObn", "iclr_2022_b8mo34uDObn" ]
iclr_2022_UTdxT0g6ZuC
Automatic Forecasting via Meta-Learning
In this work, we develop techniques for fast automatic selection of the best forecasting model for a new unseen time-series dataset, without having to first train (or evaluate) all the models on the new time-series data to select the best one. In particular, we develop a forecasting meta-learning approach called AutoForecast that allows for quick inference of the best time-series forecasting model for an unseen dataset. Our approach learns both the performance of forecasting models over the time horizon of a given dataset and task similarity across different datasets. The experiments demonstrate the effectiveness of the approach over state-of-the-art (SOTA) single and ensemble methods and several SOTA meta-learners (adapted to our problem) in terms of selecting better forecasting models (i.e., a 2X gain) for unseen tasks on univariate and multivariate testbeds.
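The feature-based meta-learning backbone behind this kind of model selection can be sketched very compactly: embed each training dataset with meta-features, find the datasets most similar to the unseen one, and recommend the model that worked best on those neighbors. AutoForecast additionally learns performance over the forecast horizon; this sketch shows only the nearest-neighbor core, and all names are illustrative.

```python
import numpy as np

def select_model(meta_feats_new, meta_feats_train, perf_matrix, k=5):
    """meta_feats_train: [n_datasets, n_feats]; perf_matrix: [n_datasets,
    n_models] historical errors (lower is better). Returns the index of
    the model predicted to be best for the unseen dataset."""
    d = np.linalg.norm(meta_feats_train - meta_feats_new, axis=1)
    nn = np.argsort(d)[:k]                     # most similar seen datasets
    avg_err = perf_matrix[nn].mean(axis=0)     # mean historical error per model
    return int(np.argmin(avg_err))
```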
Reject
In this work the authors consider the automatic selection of a time-series forecasting model (and hyperparameters) based on historical data. It adopts a conventional feature-based meta-learning approach. Experimental results show improved performance over the considered baselines. The reviewers appreciated the clarifications provided by the authors, but a number of concerns remained unresolved. For instance, questions remained regarding the dataset collection, the baselines against which the proposed method was compared (which were considered too weak), and the large number of missing details in the presentation of the method. Based on this the reviewers concluded that the paper could not be accepted in its current form and would require a major revision.
train
[ "VZHM0Omvs_w", "GeKvNb3t6Q", "tj_hXDfovOc", "cSL-CD-KFyi", "_rEyzremBim", "58UqKSm7a56" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a meta learning approach for time series. Specifically, the approach learns to select the best model and hyper-parameters for unseen test datasets, when trained on performance data of various models applied to training datasets. The approach utilizes various dataset specific features in order t...
[ 5, -1, -1, 6, 3, 3 ]
[ 3, -1, -1, 3, 4, 3 ]
[ "iclr_2022_UTdxT0g6ZuC", "cSL-CD-KFyi", "VZHM0Omvs_w", "iclr_2022_UTdxT0g6ZuC", "iclr_2022_UTdxT0g6ZuC", "iclr_2022_UTdxT0g6ZuC" ]
iclr_2022_dIVrWHP9_1i
G-Mixup: Graph Augmentation for Graph Classification
This work develops \emph{mixup for graph data}. Mixup has shown superiority in improving the generalization and robustness of neural networks by interpolating the features and labels of two random samples. Traditionally, Mixup operates on regular, grid-like, Euclidean data such as images or tabular data. However, it is challenging to directly adopt Mixup to augment graph data because two graphs typically: 1) have different numbers of nodes; 2) are not readily aligned; and 3) have unique topologies in non-Euclidean space. To this end, we propose $\mathcal{G}$-Mixup to augment graphs for graph classification by interpolating the generators (i.e., graphons) of different classes of graphs. Specifically, we first use graphs within the same class to estimate a graphon. Then, instead of directly manipulating graphs, we interpolate graphons of different classes in Euclidean space to get mixed graphons, from which synthetic graphs are generated through sampling.
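The pipeline in the abstract (estimate a graphon per class, interpolate graphons, sample synthetic graphs) can be sketched as follows. The graphon estimator here is a deliberately crude degree-sorted step function; the paper uses dedicated estimators, and all names and the grid size K are illustrative.

```python
import numpy as np

def estimate_graphon(adjs, K=50):
    """Rough step-function graphon estimate: sort nodes by degree, subsample
    each adjacency onto a fixed K x K grid, and average over the class."""
    grid = np.zeros((K, K))
    for A in adjs:
        order = np.argsort(-A.sum(1))              # degree-based node alignment
        A = A[np.ix_(order, order)]
        idx = np.arange(K) * A.shape[0] // K
        grid += A[np.ix_(idx, idx)]
    return grid / len(adjs)

def g_mixup(W1, W2, y1, y2, lam, n_nodes, rng):
    W = lam * W1 + (1 - lam) * W2                  # interpolate graphons in Euclidean space
    y = lam * y1 + (1 - lam) * y2                  # interpolate (one-hot) labels
    u = rng.integers(0, W.shape[0], n_nodes)       # latent position per sampled node
    A = (rng.random((n_nodes, n_nodes)) < W[np.ix_(u, u)]).astype(float)
    A = np.triu(A, 1); A = A + A.T                 # symmetric, no self-loops
    return A, y

# rng = np.random.default_rng(0); A, y = g_mixup(W1, W2, y1, y2, 0.5, 30, rng)
```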
Reject
This work tries to extend mixup to graph-structured data, where graphs can differ in the number of nodes and the space is not Euclidean. This is achieved by G-Mixup, which interpolates the generators (graphons) of different classes of graphs through a latent Euclidean space. Experimental results show some promise. Several concerns were raised by the reviewers, and although the rebuttal helped, some concerns remain, for example, how to confirm that the graphon can be accurately estimated. Several weaknesses in the experiments were also raised, and a revision is needed before the paper can be published.
train
[ "KVvp0TQ2rp1", "WlWtU38Gt87", "kso7Aw4hBDn", "ROt-M5lToiZ", "pPxR_L8KNPX", "Tyy8dgXgfLn", "UnYxyobBwE", "gnheBunCTT1", "n58dVyu2Ndm", "GD7zElAVQT6", "jAgXtKRXgc5", "8KhRt2gnbOu", "qnC-kSz_O8z", "sZjjRbYJzvJ", "PUKpQU2kUYO", "Ta4raZcxeZX", "uhS_GqmNsB0", "I1NGDz6NUwz", "E_8d7BZH3P...
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", ...
[ "This work studied the research problem of graph data augmentation for supervised graph classification. The authors proposed G-mixup that performs mixup on graph data via graphon generators. Due to the irregular characteristics of graphs, instead of directly mixing up the data objects, G-mixup mix up the graph gene...
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2022_dIVrWHP9_1i", "ROt-M5lToiZ", "pPxR_L8KNPX", "E_8d7BZH3P", "PUKpQU2kUYO", "E_8d7BZH3P", "E_8d7BZH3P", "PUKpQU2kUYO", "jAgXtKRXgc5", "iclr_2022_dIVrWHP9_1i", "dEqxajW1bll", "PUKpQU2kUYO", "GD7zElAVQT6", "iclr_2022_dIVrWHP9_1i", "I1NGDz6NUwz", "KVvp0TQ2rp1", "GD7zElAVQT6", ...
iclr_2022_vBn2OXZuQCF
How does Contrastive Pre-training Connect Disparate Domains?
Pre-training on massive unlabeled datasets greatly improves accuracy under distribution shifts. As a first step toward understanding this, we study a popular pre-training method, contrastive learning, in the unsupervised domain adaptation (UDA) setting where we only have labeled data from a source domain and unlabeled data from a target domain. We begin by showing on 4 benchmark datasets that out-of-the-box contrastive pre-training (even without large-scale unlabeled data) is competitive with other UDA methods. Intuitions from classical UDA methods such as domain adversarial training focus on bringing the domains together in feature space to improve generalization from source to target. Surprisingly, we find that contrastive pre-training learns features that are very far apart between the source and target domains. How then does contrastive learning improve robustness to distribution shift? We develop a conceptual model for contrastive learning under domain shifts, where data augmentations form connections between classes and domains that can be far apart. We propose a new measure of connectivity ---the relative connection strengths between same and different classes across domains---that governs the success of contrastive pre-training for domain adaptation in a simple example and strongly correlates with our results on benchmark datasets.
Reject
This is a borderline paper. While reviewers believe the findings from this paper may be of potential interest, they are not fully convinced. For instance, if the authors want to claim the proposed mechanism is general for UDA, then they should demonstrate its effectiveness in other application domains, such as NLP, where the pretrain-finetune strategy is widely adopted for transfer learning. However, the authors did not provide the corresponding additional experiments as requested by a reviewer but claimed they only focused on the CV domain. If the focus is on the CV domain, then the authors need to explain in detail why the proposed mechanism works well in the CV domain (while in other domains it may not). There are many other concerns about the assumptions, experimental settings, etc. In summary, this is a borderline paper below the acceptance bar of ICLR.
train
[ "z-nCeNUR0z1", "EMs7YOtLghk", "_rcFgF9LvUm", "_kA8Vd9RJGN", "kZ1tS51duD", "JCcaaQkbDsb", "95ZfEJF29eD", "NisVHvX4KT", "zxApUb2vPIg", "k8_g3vaATER", "ic4ZkJ7MQfy", "0ornXvEk7F0", "5khknPQU4Dr", "E4wjGxrRq_m", "6M-NrbOpO3U" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " > Further, from the experiments and descriptions in current paper, the actual support is the same across all domains\n\nBy \"support\", we mean the \"set of points that have probability > 0\". For sketch to real images, for example, even though they have the same input shape, they have disjoint support since sket...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 3, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 4 ]
[ "EMs7YOtLghk", "_rcFgF9LvUm", "kZ1tS51duD", "iclr_2022_vBn2OXZuQCF", "NisVHvX4KT", "95ZfEJF29eD", "iclr_2022_vBn2OXZuQCF", "6M-NrbOpO3U", "5khknPQU4Dr", "0ornXvEk7F0", "E4wjGxrRq_m", "iclr_2022_vBn2OXZuQCF", "iclr_2022_vBn2OXZuQCF", "iclr_2022_vBn2OXZuQCF", "iclr_2022_vBn2OXZuQCF" ]
iclr_2022_adjl32ogfqD
Learning Stochastic Shortest Path with Linear Function Approximation
We study the stochastic shortest path (SSP) problem in reinforcement learning with linear function approximation, where the transition kernel is represented as a linear mixture of unknown models. We call this class of SSP problems linear mixture SSPs. We propose a novel algorithm for learning the linear mixture SSP, which can attain a $\tilde O(dB_{\star}^{1.5}\sqrt{K/c_{\min}})$ regret. Here $K$ is the number of episodes, $d$ is the dimension of the feature mapping in the mixture model, $B_{\star}$ bounds the expected cumulative cost of the optimal policy, and $c_{\min}>0$ is the lower bound of the cost function. Our algorithm also applies to the case when $c_{\min} = 0$, where a $\tilde O(K^{2/3})$ regret is guaranteed. To the best of our knowledge, this is the first algorithm with a sublinear regret guarantee for learning linear mixture SSP. To complement the regret upper bounds, we also prove a lower bound of $\Omega(dB_{\star} \sqrt{K})$, which nearly matches our upper bound.
Reject
This paper studies the stochastic shortest path (SSP) problem with a linear approximation to the transition model. The authors propose a doubling algorithm for regret minimization in this setting and bound its regret. This is a theory paper with no experiments. This paper received three borderline reviews. All reviewers agreed on its strengths and weaknesses during the discussion. The strengths are that the paper is well written and that the results are novel. The weaknesses are that the proposed solution is standard and analyzed using standard tools. The reviewers noted departures from the standard analyses but these seem to be minor technical issues. Therefore, although well executed, this paper lacks novelty. No reviewer argued for the acceptance of this paper and therefore it is rejected.
train
[ "_rz8Xhxhfvn", "qYmHqwaskNW", "KgXtQ3Ek8It", "NLC2thoY-uL", "Y_sm469rdxm", "Hzz4VfyyThL", "QwTA2TXa8J", "szblWllF6Ei" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " ***Q3***. The provided regret bounds appear to be loose than what one would expect. For example, (i) the regret bound has a dependency of order $\\log^2⁡(KB d^2/\\delta)$, while one would expect a dependency of order $\\sqrt{\\log ⁡(KB d^2/\\delta)}$ (as in linear bandits for instance)\n\n***A3***. \nFirst of a...
[ -1, -1, -1, -1, -1, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "Hzz4VfyyThL", "QwTA2TXa8J", "QwTA2TXa8J", "szblWllF6Ei", "Hzz4VfyyThL", "iclr_2022_adjl32ogfqD", "iclr_2022_adjl32ogfqD", "iclr_2022_adjl32ogfqD" ]
iclr_2022_iaxWbVx-CG_
Hierarchical Cross Contrastive Learning of Visual Representations
The rapid progress of self-supervised learning (SSL) has greatly reduced the labeling cost in computer vision. The key idea of SSL is to learn invariant visual representations by maximizing the similarity between different views of the same input image. In most SSL methods, representation invariance is measured by a contrastive loss which compares one of the network outputs after the projection head to its augmented version. Albeit effective, this approach overlooks the information contained in the hidden layers of the projection head and could therefore be sub-optimal. In this work, we propose a novel approach termed Hierarchical Cross Contrastive Learning (HCCL) to further distill the information missed by the conventional contrastive loss. HCCL uses a hierarchical projection head to project the raw representations of the backbone into multiple latent spaces and then compares latent features across different levels and different views. Through cross-level contrastive learning, HCCL not only enforces invariance at multiple hidden levels but also across levels, improving the generalization ability of the learned visual representations. As a simple and generic method, HCCL can be applied to different SSL frameworks. We validate the efficacy of HCCL on classification, detection, segmentation, and few-shot learning tasks. Extensive experimental results show that HCCL outperforms most previous methods on various benchmark datasets.
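The hierarchical head and cross-level comparison can be sketched with a few lines of PyTorch. This is a hedged illustration of the general idea only; the number of levels, layer sizes, and the exact (contrastive) similarity used in the paper may differ, and all names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalHead(nn.Module):
    """Each level projects the previous one; one embedding is kept per level."""
    def __init__(self, dim, levels=3):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.BatchNorm1d(dim), nn.ReLU(),
                          nn.Linear(dim, dim)) for _ in range(levels))

    def forward(self, h):
        feats = []
        for blk in self.blocks:
            h = blk(h)
            feats.append(F.normalize(h, dim=-1))
        return feats

def cross_level_loss(feats_a, feats_b):
    # compare every pair of levels across the two views (negative cosine sim)
    loss = 0.0
    for za in feats_a:
        for zb in feats_b:
            loss = loss - (za * zb).sum(-1).mean()
    return loss / (len(feats_a) * len(feats_b))
```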
Reject
In general, the reviewers were lukewarm about the paper. They all acknowledged the strengths of the paper: it is well written, HCCL showed some improvement over previous methods, and it is easy to implement. However, it still feels incremental, and the improvement in the full training setting is small due to the natural limitation of the consistency assumption. The AC feels that while there is merit to the proposed method, the impact seems to be limited to specific scenarios such as limited epochs.
train
[ "qYc0-fF03GD", "vaeJozTL60G", "03YQW0V2cR0", "uFglKG45QEL", "FNqb0uX-uG3", "gb0KFMbj7a4", "w5IODx_9yle", "u7nRi0eMNN", "7dXivqnucIM", "bwRKSyk3afW", "Z009Gtcc32O", "JITKcCheBPZ", "jkT4R54PjCw", "oQ9993zTm9S" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper presents a hierarchical cross contrastive self-supervised learning framework for learning visual representation. This paper proposes to project the representations of an image and its augmented version to multiple latent spaces and also make predictions on each of the latent spaces. A contrastive loss ...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 5 ]
[ "iclr_2022_iaxWbVx-CG_", "Z009Gtcc32O", "uFglKG45QEL", "FNqb0uX-uG3", "gb0KFMbj7a4", "w5IODx_9yle", "u7nRi0eMNN", "oQ9993zTm9S", "jkT4R54PjCw", "JITKcCheBPZ", "qYc0-fF03GD", "iclr_2022_iaxWbVx-CG_", "iclr_2022_iaxWbVx-CG_", "iclr_2022_iaxWbVx-CG_" ]
iclr_2022_NrkAAcMpRoT
C-MinHash: Improving Minwise Hashing with Circulant Permutation
Minwise hashing (MinHash) is an important and practical algorithm for generating random hashes to approximate the Jaccard (resemblance) similarity in massive binary (0/1) data. The basic theory of MinHash requires applying hundreds or even thousands of independent random permutations to each data vector in the dataset, in order to obtain reliable results for, e.g., building large-scale learning models or approximate near-neighbor search in massive data. In this paper, we propose {\bf Circulant MinHash (C-MinHash)} and provide the surprising theoretical result that using only \textbf{two} independent random permutations in a circulant manner leads to uniformly smaller Jaccard estimation variance than that of the classical MinHash with $K$ independent permutations. Experiments are conducted to show the effectiveness of the proposed method. We also analyze a more convenient C-MinHash variant which reduces the two permutations to just one, with extensive numerical results validating that it achieves essentially the same estimation accuracy as the rigorously analyzed two-permutation scheme.
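The circulant reuse of a single permutation can be sketched directly from the abstract's description: one initial scrambling permutation, then one permutation applied K times with circulant shifts instead of K independent permutations. This is a hedged sketch of the mechanism, not a faithful reproduction of the paper's exact construction.

```python
import numpy as np

def c_minhash(binary_vec, K, seed=0):
    """K hashes from two permutations: sigma (initial scrambling) and pi
    (reused circulantly), instead of K independent permutations."""
    rng = np.random.default_rng(seed)
    D = binary_vec.shape[0]
    sigma = rng.permutation(D)                 # initial scrambling permutation
    pi = rng.permutation(D)                    # the single reused permutation
    nz = sigma[np.flatnonzero(binary_vec)]     # scrambled positions of the 1s
    hashes = np.empty(K, dtype=np.int64)
    for k in range(K):
        # circulant shift of pi by k, evaluated only on the nonzero coordinates
        hashes[k] = np.min(pi[(nz + k) % D])
    return hashes

# Jaccard estimate: np.mean(c_minhash(x, K) == c_minhash(y, K))
```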
Reject
This was a somewhat unusual submission in that the authors tried to motivate their paper by pointing to a separate anonymous manuscript. However, the authors didn't seem to want to confirm they would merge the manuscripts when asked about this. It was thought that in fairness the submitted manuscript should be judged on its own. After discussion, it was agreed that the submitted paper on its own, did not generate enough enthusiasm to merit acceptance.
val
[ "WKdz_-k3-Nq", "llYfD4MWdsk", "o_Fb-u6dVD6", "i6aGvccUAQk", "Wq9nq-w_XK4", "xi3zMiP7Cw7", "DRfRxKHF-a", "xFhY-OhbokI", "B_q8rm-Ei6q", "mXafO24e8Ki", "V5Q6EMUM_BS", "6kziMKA0dpY", "pgGDP_0dTxA", "ZaRxTwziZ8" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer,\n\nWe highly appreciate your support and acknowledging that the anonymous report has addressed your concerns. It is also nice of you to agree that \"the circulant permutation idea is the building brick that can improve the accuracy of MinHash and many of its variants\". \n\nIndeed, these are two m...
[ -1, 6, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, 5, 5 ]
[ -1, 4, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "o_Fb-u6dVD6", "iclr_2022_NrkAAcMpRoT", "6kziMKA0dpY", "mXafO24e8Ki", "DRfRxKHF-a", "iclr_2022_NrkAAcMpRoT", "xFhY-OhbokI", "xi3zMiP7Cw7", "ZaRxTwziZ8", "iclr_2022_NrkAAcMpRoT", "pgGDP_0dTxA", "llYfD4MWdsk", "iclr_2022_NrkAAcMpRoT", "iclr_2022_NrkAAcMpRoT" ]
iclr_2022_PGGjnBiQ84G
Learning Surface Parameterization for Document Image Unwarping
In this paper, we present a novel approach to learn texture mapping for a 3D surface and apply it to document image unwarping. We propose an efficient method to learn surface parameterization by learning a continuous bijective mapping between 3D surface positions and 2D texture-space coordinates. Our surface parameterization network can be conveniently plugged into a differentiable rendering pipeline and trained using multi-view images and rendering loss. Recent work on differentiable rendering techniques for implicit surfaces has shown high-quality 3D scene reconstruction and view synthesis results. However, these methods typically learn the appearance color as a function of the surface points and lack explicit surface parameterization. Thus they do not allow texture map extraction or texture editing. By introducing explicit surface parameterization and learning with a recent differentiable renderer for implicit surfaces, we demonstrate state-of-the-art document-unwarping via texture extraction. We show that our approach can reconstruct high-frequency textures for arbitrary document shapes in both synthetic and real scenarios. We also demonstrate the usefulness of our system by applying it to document texture editing.
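The bijective surface parameterization at the heart of the method can be sketched as a pair of coordinate MLPs trained with cycle consistency. This omits the implicit surface, the differentiable renderer, and the rendering loss that the paper actually trains with; the architecture and loss below are illustrative assumptions only.

```python
import torch
import torch.nn as nn

def mlp(i, o, h=256):
    return nn.Sequential(nn.Linear(i, h), nn.ReLU(),
                         nn.Linear(h, h), nn.ReLU(), nn.Linear(h, o))

class SurfaceParam(nn.Module):
    """3D surface point -> 2D texture coordinate and back, encouraged to be
    bijective via cycle consistency."""
    def __init__(self):
        super().__init__()
        self.to_uv = mlp(3, 2)
        self.to_xyz = mlp(2, 3)

    def cycle_loss(self, pts):                 # pts: [N, 3] surface samples
        uv = self.to_uv(pts)
        rec = self.to_xyz(uv)                  # 3D -> 2D -> 3D round trip
        uv2 = self.to_uv(self.to_xyz(uv.detach()))   # 2D -> 3D -> 2D round trip
        return ((rec - pts) ** 2).mean() + ((uv2 - uv.detach()) ** 2).mean()
```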
Reject
This paper proposes an architecture for learned surface parameterization, with application to image unwarping, which can be coupled with differentiable rendering, multi-view data, and other modern objective terms. The shape of the document is parameterized using an SDF technique, coupled with neural rendering and objective terms inspired by classical geometry processing. This machinery is quite "heavy," leading to slow training times. As pointed out by reviewer QH85, there were some experimental discrepancies---rightfully acknowledged by the authors---which make comparisons to DewarpNet less favorable for the new method, at least from a quantitative perspective. Visual inspection makes the comparison more favorable, although it would be preferable for the quantitative quality metrics and qualitative examples to align. Runtime measurements here are also not favorable and severely limit applicability of this technique in real-world scenarios, as pointed out by reviewers hfPz and QH85. While the mistaken quantitative results are forgivable, the AC agrees that the scope of this work is quite narrow; it is not clear where this architecture would be applied relative to the motivating application.
train
[ "HcSdCbw0jln", "JA3gmoFylVt", "49l7k1UZlH_", "_cmrkGboYSn", "95NAcAlv6xs", "W04YGBd-D1a", "qIpZ3RjEYZz", "giIfJbazMq", "OLz9YY1GiEj", "vNSCRpcG63", "XZIJqFJWbk1", "FaCl-TpmqwA", "RCoLQFtyseC", "uP4Ivd4aYfR" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the comments. Following are the clarifications regarding the remaining concerns:\n* We have discussed the contributions of NeuTex in the related work and would like to highlight that it doesn't allow unwarping documents due to the use of spherical UV domain without a shape specific prior...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "JA3gmoFylVt", "vNSCRpcG63", "W04YGBd-D1a", "95NAcAlv6xs", "OLz9YY1GiEj", "XZIJqFJWbk1", "RCoLQFtyseC", "FaCl-TpmqwA", "W04YGBd-D1a", "uP4Ivd4aYfR", "iclr_2022_PGGjnBiQ84G", "iclr_2022_PGGjnBiQ84G", "iclr_2022_PGGjnBiQ84G", "iclr_2022_PGGjnBiQ84G" ]
iclr_2022_J1uOGgf-bP
Test Time Robustification of Deep Models via Adaptation and Augmentation
While deep neural networks can attain good accuracy on in-distribution test points, many applications require robustness even in the face of unexpected perturbations in the input, changes in the domain, or other sources of distribution shift. We study the problem of test time robustification, i.e., using the test input to improve model robustness. Recent prior works have proposed methods for test time adaptation, however, they each introduce additional assumptions, such as access to multiple test points, that prevent widespread adoption. In this work, we aim to study and devise methods that make no assumptions about the model training process and are broadly applicable at test time. We propose a simple approach that can be used in any test setting where the model is probabilistic and adaptable: when presented with a test example, perform different data augmentations on the data point, and then adapt (all of) the model parameters by minimizing the entropy of the model's average, or marginal, output distribution across the augmentations. Intuitively, this objective encourages the model to make the same prediction across different augmentations, thus enforcing the invariances encoded in these augmentations, while also maintaining confidence in its predictions. In our experiments, we demonstrate that this approach consistently improves robust ResNet and vision transformer models, achieving accuracy gains of 1-8% over standard model evaluation and also generally outperforming prior augmentation and adaptation strategies. We achieve state-of-the-art results for test shifts caused by image corruptions (ImageNet-C), renditions of common objects (ImageNet-R), and, among ResNet-50 models, adversarially chosen natural examples (ImageNet-A).
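The adaptation procedure described in the abstract, augment a single test point, average the predicted distributions, and minimize the entropy of that marginal, translates into a short adaptation step. The sketch below assumes some stochastic `augment` function (e.g. AugMix-style) and a classifier over images; optimizer choice and the number of augmentations are illustrative.

```python
import torch
import torch.nn.functional as F

def marginal_entropy_step(model, optimizer, x, augment, n_aug=32):
    """Adapt all model parameters on one test input x: [C, H, W]."""
    model.train()
    views = torch.stack([augment(x) for _ in range(n_aug)])   # [n_aug, C, H, W]
    probs = F.softmax(model(views), dim=-1)
    marginal = probs.mean(dim=0)                  # marginal over augmentations
    entropy = -(marginal * marginal.clamp_min(1e-12).log()).sum()
    optimizer.zero_grad()
    entropy.backward()                            # confident + augmentation-invariant
    optimizer.step()
    model.eval()
    with torch.no_grad():
        return model(x.unsqueeze(0)).argmax(-1)   # prediction from the adapted model
```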
Reject
The reviewers agree that test-time model adaptation is an interesting problem, providing a new perspective to improve model robustness. The proposed method builds on intuitive assumptions that are easy to understand. However, there are mainly two concerns regarding novelty and effectiveness. The paper can improve in these two aspects to meet ICLR standard.
train
[ "iaWv4H5N7qu", "p56ukxusLx7", "ypDITFinrjH", "-aL3N0YZA-", "nSJ9gRnO1e", "lBtl92Cr-iZ", "mppR1KZhJT1", "TGA-sBu1gnS", "ds1lvbnawyv", "0WiwbpZlFzn", "kgHLNLSwkMB", "UA3dpB6PtZR", "fVVuKGkgVI0", "ME2Mbbsxqi", "TpJmFMXzKG", "GBHSTUUG6TI", "wMtRN3uQMwN", "lvemvr-8E3", "Lhrykv3qrJ6", ...
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "...
[ "The paper focus on single image test time adaptation of deep neural network for distribution shift in classification task. Prior works perform test time adaptation on a batch of images or entire test dataset to capture the distribution statistics. The authors propose to create augmented copies of a provided test i...
[ 6, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5 ]
[ 4, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ "iclr_2022_J1uOGgf-bP", "UA3dpB6PtZR", "ds1lvbnawyv", "mppR1KZhJT1", "iclr_2022_J1uOGgf-bP", "ME2Mbbsxqi", "TGA-sBu1gnS", "nSJ9gRnO1e", "knsiVlfckQ9", "lBtl92Cr-iZ", "fVVuKGkgVI0", "iaWv4H5N7qu", "knsiVlfckQ9", "nSJ9gRnO1e", "CLeCNhYMglT", "Lhrykv3qrJ6", "lvemvr-8E3", "nSJ9gRnO1e",...
iclr_2022_qrdbsZEZPZ
Certified Robustness for Free in Differentially Private Federated Learning
Federated learning (FL) provides an efficient training paradigm to jointly train a global model leveraging data from distributed users. As the local training data comes from different users who may not be trustworthy, several studies have shown that FL is vulnerable to poisoning attacks where adversaries add malicious data during training. On the other hand, to protect the privacy of users, FL is usually trained in a differentially private way (DPFL). Given these properties of FL, in this paper, we aim to ask: Can we leverage the innate privacy property of DPFL to provide robustness certification against poisoning attacks? Can we further improve the privacy of FL to improve such certification? To this end, we first investigate both the user-level and instance-level privacy of FL, and propose novel randomization mechanisms and analyses to achieve improved differential privacy. We then provide two robustness certification criteria: certified prediction and certified attack cost for DPFL on both levels. Theoretically, given different privacy properties of DPFL, we prove their certified robustness under a bounded number of adversarial users or instances. Empirically, we conduct extensive experiments to verify our theories under different attacks on a range of datasets. We show that the global model with a tighter privacy guarantee always provides stronger robustness certification in terms of the certified attack cost, while it may exhibit tradeoffs regarding the certified prediction. We believe our work will inspire future research on developing certifiably robust DPFL based on its inherent properties.
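For context, the standard user-level DP-FedAvg mechanism that work in this area builds on (clip each user's update, average, add Gaussian noise) looks roughly like the sketch below. The paper's improved randomization mechanisms and the robustness certificates themselves are not reproduced here; all names are illustrative.

```python
import numpy as np

def dp_fedavg_round(global_w, user_updates, clip_norm, noise_mult, rng):
    """One server round: user_updates is a list of flat update vectors."""
    clipped = []
    for u in user_updates:
        norm = np.linalg.norm(u)
        clipped.append(u * min(1.0, clip_norm / max(norm, 1e-12)))  # per-user clipping
    avg = np.mean(clipped, axis=0)
    sigma = noise_mult * clip_norm / len(user_updates)  # noise std on the average
    return global_w + avg + rng.normal(0.0, sigma, size=avg.shape)
```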
Reject
The premise is an exciting observation: differential privacy in federated learning might imply certified robustness against poisoning attacks. While some may consider this unsurprising, the connection between differential privacy and robustness is interesting to many. The relationship was characterized both theoretically and empirically. The reviewers discussed the paper extensively with the authors, and while many issues were clarified, correctness issues remained: it is unclear whether the proposed DP mechanism is actually DP, and the subsampling amplification argument also had issues. The writing needs additional clarity, and the extensive reviewer comments will hopefully help the authors with that.
train
[ "DxHC9AhV0z4", "tqxwwUYMANw", "xZNFzmlxN7o", "Sfmdetx-_cE", "G4OS5Pp4I5Z", "mLY-CJfFfX", "Ic8tLWrcH4U", "Dcf9hnf20mY", "S3rhQRZcVa", "lEF8YGGwp7w", "ZYXvdZJhZDL", "zHWypM2lIDl", "GO5E9lSBgF3", "ELCick7W_sl", "gn2bhGMs1hV", "T8yu-PtdeA1", "Ai-9cMhg9u1", "bkosg8DH9Ai", "HFmuxiVSuyn...
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "## Update after rebuttal and discussions\n\nI thank the authors for taking the time to discuss the issues pointed out in the reviews at length. Unfortunately, I am still not convinced that the paper is ready for publication. My main concerns:\n\n1) There are now experiments in the updated paper claimed to be DP wh...
[ 3, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5 ]
[ 4, -1, -1, -1, -1, -1, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2022_qrdbsZEZPZ", "Ai-9cMhg9u1", "Dcf9hnf20mY", "Dcf9hnf20mY", "ELCick7W_sl", "Sfmdetx-_cE", "iclr_2022_qrdbsZEZPZ", "bkosg8DH9Ai", "T8yu-PtdeA1", "GO5E9lSBgF3", "iclr_2022_qrdbsZEZPZ", "lEF8YGGwp7w", "gn2bhGMs1hV", "S3rhQRZcVa", "DxHC9AhV0z4", "7dh6dYlakpE", "Ic8tLWrcH4U", "...
iclr_2022_R6hvtDTQmb
Adapting Stepsizes by Momentumized Gradients Improves Optimization and Generalization
Adaptive gradient methods, such as Adam, have achieved tremendous success in machine learning. By scaling gradients with the square roots of running averages of past squared gradients, such methods attain rapid training of modern deep neural networks. Nevertheless, they are observed to generalize worse than stochastic gradient descent (SGD) and tend to be trapped in local minima at an early stage of training. Intriguingly, we discover that substituting the gradient in Adam's second moment estimation term with its momentumized version resolves these issues. The intuition is that the gradient with momentum contains more accurate directional information, so its second moment estimate is a better choice for scaling than that of the raw gradient. We therefore propose AdaMomentum, a new optimizer that aims to train fast while generalizing better. We further develop theory to back up the improvement in optimization and generalization and provide convergence guarantees in both convex and nonconvex settings. Extensive experiments on a wide range of tasks and models demonstrate that AdaMomentum consistently exhibits state-of-the-art performance. The source code is available at https://anonymous.4open.science/r/AdaMomentum_experiments-6D9B.
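The single change the abstract describes, tracking the second moment of the momentumized gradient m_t instead of the raw gradient g_t, is easy to show against a vanilla Adam step. This is a hedged sketch: bias correction and the exact placement of eps follow Adam-style conventions here and may differ from the paper.

```python
import torch

@torch.no_grad()
def adamomentum_step(params, grads, state, lr=1e-3, betas=(0.9, 0.999), eps=1e-8):
    b1, b2 = betas
    state["t"] = state.get("t", 0) + 1
    t = state["t"]
    for i, (p, g) in enumerate(zip(params, grads)):
        m = state.setdefault(("m", i), torch.zeros_like(p))
        v = state.setdefault(("v", i), torch.zeros_like(p))
        m.mul_(b1).add_(g, alpha=1 - b1)           # momentumized gradient
        v.mul_(b2).addcmul_(m, m, value=1 - b2)    # EMA of m*m (Adam uses g*g here)
        m_hat = m / (1 - b1 ** t)                  # bias-corrected first moment
        v_hat = v / (1 - b2 ** t)                  # bias-corrected second moment
        p.add_(-lr * m_hat / (v_hat.sqrt() + eps))
```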
Reject
The paper proposes to substitute the gradient in the second moment estimation term with the "momentumized" version, arguing that it improves both optimization and generalization. Some theoretical results are shown as well as empirical results. The paper has been widely discussed by the reviewers and several weak points have been raised. Let me list some of the most important ones. - The theory appears to be incremental and overall very weak. The authors themselves acknowledged that this "is not a pure optimization theory paper". In detail, the generalization analysis is a straightforward extension of Zhou et al. [NeurIPS 2020], while the optimization analysis inherits all the known weaknesses of previous similar analyses in deep learning optimization papers. In particular, *none* of the following is correct: the use of a regret analysis for a stochastic non-convex optimization algorithm, the assumption of bounded iterates, Assumption 5, the assumption in Theorem 2 on $\alpha_t/\sqrt{v_t}$. The fact that similar mistakes were made in previous papers does not make them correct: the community should aspire to do better, not to reiterate known mistakes. - On $\epsilon$: the reviewers correctly pointed out that moving $\epsilon$ under the square root and not changing its value is not fair. The answers of the authors on this point were unconvincing. - Doubts on empirical results: it seems that not all the possible hyperparameters of the baselines were properly tuned. For example, despite being common practice, epsilon should also be tuned, see for example the experiments in Agarwal et al. 2020. I didn't consider the discussion on AdaBelief because it is only marginally relevant to this paper. Overall, the paper does not seem interesting from a theoretical point of view and its empirical comparison cannot be fully trusted due to the weaknesses noted above.
train
[ "9WeMPjxMjqM", "4e65Vj_yd8I", "UMqqJVVnzfR", "ugf_OUN8E3_", "BS9hD2w4lFW", "559l_EacuU", "evlOyS2vije", "jqXncGUmQ_-", "NOtgFFmhwyn", "obcZSsuqszr", "9MvtOfDCRoO", "MVstFEp1kLf", "DvQK133yPUi", "mBc4Litfpl", "U0i16QLWQwl", "V5OrjAmXKLP", "Clg488WVwy6", "t8LMqN5f0Ic", "uc3oe7EzY1"...
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", ...
[ " Dear AC and Reviewers,\n\nWe thank the AC for your time and efforts in handling the review process. \nWe thank the reviewers for giving valuable comments to make our paper better.\n\nWe will not participate in any further discussion.\n\nBest, \nAuthors", " Dear Authors and Reviewers,\n\nPlease keep the conve...
[ -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 2 ]
[ "4e65Vj_yd8I", "iclr_2022_R6hvtDTQmb", "ugf_OUN8E3_", "jqXncGUmQ_-", "559l_EacuU", "evlOyS2vije", "iclr_2022_R6hvtDTQmb", "gB59fFh1InA", "obcZSsuqszr", "9MvtOfDCRoO", "mBc4Litfpl", "DvQK133yPUi", "Clg488WVwy6", "gB59fFh1InA", "hSe5EZT3Qtb", "gB59fFh1InA", "hSe5EZT3Qtb", "iclr_2022_...
iclr_2022_B2pZkS2urk_
Do What Nature Did To Us: Evolving Plastic Recurrent Neural Networks For Generalized Tasks
While artificial neural networks (ANNs) have been widely adopted in machine learning, researchers are increasingly concerned with the gaps between ANNs and natural neural networks (NNNs). In this paper, we propose a framework named Evolutionary Plastic Recurrent Neural Networks (EPRNN). Inspired by NNNs, EPRNN combines evolution strategies, plasticity rules, and recursion-based learning in one meta-learning framework for generalization to different tasks. More specifically, EPRNN incorporates nested loops for meta-learning --- an outer loop searches for optimal initial parameters of the neural network and learning rules; an inner loop adapts to specific tasks. In the inner loop of EPRNN, we effectively attain both long-term and short-term memory by forging plasticity with recursion-based learning mechanisms, both of which are believed to be responsible for the formation of memories in NNNs. The inner-loop setting closely simulates that of NNNs, which neither queries any gradient oracle for optimization nor requires the exact forms of learning objectives. To evaluate the performance of EPRNN, we carry out extensive experiments on two groups of tasks: sequence prediction and wheeled-robot navigation. The experimental results demonstrate the unique advantage of EPRNN over state-of-the-art plasticity- and recursion-based approaches, while yielding comparably good performance against deep learning based approaches on these tasks. The results suggest the potential of EPRNN to generalize to a variety of tasks and encourage more effort on plasticity- and recursion-based learning mechanisms.
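A gradient-free plastic inner loop of this kind is often written with a fixed weight matrix plus a Hebbian trace whose update-rule coefficients are what the outer evolutionary loop searches over. The sketch below uses a generic ABCD-style Hebbian rule as an assumed stand-in; the paper's exact rule and shapes may differ, and all names are illustrative.

```python
import numpy as np

def plastic_rnn_rollout(x_seq, W, P, A, B, C, D, eta, h=None):
    """Inner-loop adaptation: fixed weights W plus plastic trace P, updated
    by a Hebbian rule with evolved scalar coefficients (A, B, C, D, eta)."""
    if h is None:
        h = np.zeros(W.shape[0])
    for x in x_seq:                                 # x: input vector per step
        pre = h.copy()
        h = np.tanh((W + P) @ h + x)                # recurrence with plastic part
        # generalized Hebbian update from pre-/post-synaptic activity
        P += eta * (A * np.outer(h, pre) + B * h[:, None] + C * pre[None, :] + D)
    return h, P
```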
Reject
In this paper the authors demonstrate the use of meta-learning in plastic recurrent neural networks with an evolutionary approach, avoiding gradients. They show that this approach can be used to develop networks that can solve problems like sequence prediction and simple navigation. The reviews for this paper all had scores below the acceptance threshold (3,5,3,3). The principal concerns were: (1) The lack of novelty. Other papers have taken very similar approaches (e.g. Najarro & Risi, 2020 or Miconi et al., 2019), and fundamentally this paper simply ties together different elements in one package. (2) Lack of demonstration of the approach beyond some very simple tasks. (3) Lack of connection to the related literature on neuro-evolution and ML. (4) General clarity and style of writing issues. The authors responded to the reviewers, but the responses did not convince the reviewers enough to increase their scores past threshold. Given this, a reject decision was reached.
train
[ "MDwfMBnQIbH", "0fbxrrfdRqL", "xYKDwWLeUr9", "XZsF4C3xaOq", "4QCeUW8Zizj", "evK4tLGkt5H", "HyC4vjs2SQ", "wCwhz74fkYv", "JOtdUTcqM-q" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response. \n\n1. I appreciate your disagreement, but running an experiment with a single neuron and random weight guessing (no need for training even) is so easy to run that I cannot see a reason not to definitely prove me wrong. I have however worked with neuroevolution for so long that I am k...
[ -1, -1, -1, -1, -1, 3, 5, 3, 3 ]
[ -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "XZsF4C3xaOq", "4QCeUW8Zizj", "HyC4vjs2SQ", "JOtdUTcqM-q", "evK4tLGkt5H", "iclr_2022_B2pZkS2urk_", "iclr_2022_B2pZkS2urk_", "iclr_2022_B2pZkS2urk_", "iclr_2022_B2pZkS2urk_" ]
iclr_2022_ybsh6zEzIKA
Intrusion-Free Graph Mixup
We present a simple and yet effective interpolation-based regularization technique to improve the generalization of Graph Neural Networks (GNNs). We leverage recent advances in the Mixup regularizer for vision and text, where random sample pairs and their labels are interpolated to create synthetic samples for training. Unlike images or natural sentences, which have a grid or linear sequence format, graphs have arbitrary structure and topology, which play a vital role in the semantic information of a graph. Consequently, even simply deleting or adding one edge of a graph can dramatically change its semantic meaning. This makes interpolating graph inputs very challenging, because mixing random graph pairs may naturally create graphs with identical structure but different labels, causing the manifold intrusion issue. To cope with this obstacle, we propose the first input mixing schema for Mixup on graphs. We theoretically prove that our mixing strategy can recover the source graphs from the mixed graph and guarantees that the mixed graphs are free of manifold intrusion. We also empirically show that our method can effectively regularize graph classification learning, resulting in superior predictive accuracy over popular graph augmentation baselines.
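To make the interpolation concrete, here is a naive input-level graph mixup: zero-pad both graphs to a common size and interpolate adjacency, features, and labels. Note this naive version does not carry the paper's invertibility/intrusion-free guarantee, which relies on a specially designed mixing schema; it only illustrates what "mixing two graph inputs" means, and all names are illustrative.

```python
import numpy as np

def pad(A, X, n):
    m = A.shape[0]
    A2 = np.zeros((n, n)); A2[:m, :m] = A
    X2 = np.zeros((n, X.shape[1])); X2[:m] = X
    return A2, X2

def naive_graph_mixup(A1, X1, y1, A2, X2, y2, lam):
    """Interpolate zero-padded adjacency matrices, node features, labels."""
    n = max(A1.shape[0], A2.shape[0])
    A1, X1 = pad(A1, X1, n)
    A2, X2 = pad(A2, X2, n)
    return (lam * A1 + (1 - lam) * A2,
            lam * X1 + (1 - lam) * X2,
            lam * y1 + (1 - lam) * y2)
```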
Reject
This paper proposes a “Mixup” type of data augmentation for graphs that accounts for the difficulty of mixing graphs with different numbers of nodes. The authors show that the mixed graphs are invertible functions of the original graphs. Reviewer d3Ri liked the simplicity and effectiveness of the technique. They called it a “healthy and useful contribution for the field”. Reviewer n1Dk thought that the paper explored an important problem and thought the paper was clear, though some of the math could have been simplified. This reviewer was concerned that a central claim of the paper, that the method avoids “Manifold Intrusion”, was unsubstantiated. Specifically, it could not be deduced from the fact that edge connectivity could be recovered from the mixed graphs. The reviewer claimed that node features of the individual graphs were unrecoverable. The authors responded in detail to the reviewer’s criticism, adding two new lemmas which purportedly guaranteed node feature vectors could be uniquely recovered. The authors admitted to some conflation of “Manifold intrusion” with invertibility and added a theorem and its proof that invertibility guarantees no manifold intrusion. The authors also responded to reviewer n1Dk’s concern about the significance of the reported improvements. Reviewer n1Dk responded to the author rebuttal with concerns about the strong and unrealistic assumption of linear independence of the feature matrix. They had further concerns that for the case of weighted edges the “Intrusion-free” property could not be enforced. Discussion ensued, with the authors arguing that the independence assumption was not as strong as the reviewer claimed and that the “Intrusion-free” property was only ever claimed for graphs with binary edge weights. Reviewers 7hBS and q8bs were both on the fence. 7hBS also raised some concern with the case of non-binary weighted edges. They also raised the same issue with respect to the connection between invertibility and the “Intrusion-free” property, which the authors addressed. Reviewer q8bs also thought the problem was interesting and the paper was clear, yet like n1Dk thought the performance improvement was marginal and had concerns with the technical novelty of the work. This was a tough call, so I engaged the reviewers in further discussion. 7hBS agreed with n1Dk’s opinion that the central claim of the paper (the method being intrusion-free) was not presented with strong evidence. They also raised another concern, which was that the paper didn’t evaluate on node classification like most other graph mixup-style models. q8bs agreed with n1Dk’s concerns and felt that, post-discussion, the technical novelty of the work was limited. Without strong support from the reviewers, I think that this paper could use further development, either tempering the “intrusion-free” claim or presenting evidence for it in other settings.
test
[ "gXqIaQ7XIrj", "e44jpij94M2", "Y29s6rkdfz", "LfhTqgdWVB", "vsXbXV3Z_Gs", "b6FHAbhiZFO", "28YCcXtawO5", "2H0BP9iwsY3", "rQ7OTbgl1Xk", "bmzpY3sEVg6", "lyi4EdA_o_5", "LYFsXxVnu99", "82hLPpbpvyr", "sjDNdlUPozc", "ONCvpId1kC", "1fw0aA9QEQC" ]
[ "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to thank again all reviewers.\n\nPlease let us know if there are additional questions or concerns before the end of the discussion period. \nWe would be happy to discuss or address any additional comments.", " Dear Reviewer 7hBS,\n\nThank you again for your constructive feedback. \nWe have submitt...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 3, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 4 ]
[ "iclr_2022_ybsh6zEzIKA", "ONCvpId1kC", "sjDNdlUPozc", "vsXbXV3Z_Gs", "28YCcXtawO5", "sjDNdlUPozc", "iclr_2022_ybsh6zEzIKA", "rQ7OTbgl1Xk", "1fw0aA9QEQC", "lyi4EdA_o_5", "ONCvpId1kC", "82hLPpbpvyr", "iclr_2022_ybsh6zEzIKA", "iclr_2022_ybsh6zEzIKA", "iclr_2022_ybsh6zEzIKA", "iclr_2022_yb...
iclr_2022_nK7eZEURiJ4
Towards Understanding Distributional Reinforcement Learning: Regularization, Optimization, Acceleration and Sinkhorn Algorithm
Distributional reinforcement learning~(RL) is a class of state-of-the-art algorithms that estimate the whole distribution of the total return rather than only its expectation. Despite the remarkable performance of distributional RL, a theoretical understanding of its advantages over expectation-based RL remains elusive. In this paper, we interpret distributional RL as entropy-regularized maximum likelihood estimation in the \textit{neural Z-fitted iteration} framework and establish the connection of the resulting risk-aware regularization with maximum entropy RL. In addition, we shed light on the stability-promoting distributional loss with desirable smoothness properties in distributional RL, which can yield stable optimization and guaranteed generalization. We also analyze the acceleration behavior when optimizing distributional RL algorithms and show that an appropriate approximation of the true target distribution can speed up convergence. From the perspective of representation, we find that distributional RL encourages state representations within the same action class (as classified by the policy) to form tighter clusters. Finally, we propose a class of \textit{Sinkhorn distributional RL} algorithms that interpolate between the Wasserstein distance and maximum mean discrepancy~(MMD). Experiments on a suite of Atari games reveal the competitive performance of our algorithm relative to existing state-of-the-art distributional RL algorithms.
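The Sinkhorn-style loss between two empirical return distributions can be sketched as a standard entropic-regularized OT computation between samples: as the regularization eps goes to 0 it approaches the Wasserstein cost, and for large eps it behaves like an MMD-style loss. This is a hedged sketch with uniform weights; the debiased Sinkhorn divergence would subtract self-terms, omitted here for brevity.

```python
import torch

def sinkhorn_loss(x, y, eps=0.1, n_iter=50):
    """x, y: 1-D tensors of return samples (e.g. from online/target critics)."""
    C = (x[:, None] - y[None, :]) ** 2            # pairwise squared cost [N, M]
    K = torch.exp(-C / eps)
    u = torch.ones_like(x) / len(x)               # uniform source weights
    v = torch.ones_like(y) / len(y)               # uniform target weights
    a, b = u.clone(), v.clone()
    for _ in range(n_iter):                       # Sinkhorn fixed-point iterations
        a = u / (K @ b)
        b = v / (K.t() @ a)
    plan = a[:, None] * K * b[None, :]            # approximate transport plan
    return (plan * C).sum()                       # regularized OT cost
```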
Reject
It appears that the reviewers have reached a consensus that the paper is not ready for publication at ICLR.
val
[ "aYCRkVabz6B", "tuqYFNb7D00", "u7G6YR0Qwxy", "EQ8mA-yB9w", "dzgnLlSS7Er", "fGFufnL5pxG", "yKgkOtWusk", "kHGt4JTx5_", "uh7FnXHrBQd" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the author for the response. I'm happy with the explanation for concern 3. However, other concerns are not fully addressed in the current version. Thus, I will keep my score unchanged.", " Thanks for your valuable comments. Below we address all your concerns. Please let us know if there ar...
[ -1, -1, -1, -1, -1, 5, 3, 3, 1 ]
[ -1, -1, -1, -1, -1, 4, 3, 5, 4 ]
[ "EQ8mA-yB9w", "uh7FnXHrBQd", "kHGt4JTx5_", "yKgkOtWusk", "fGFufnL5pxG", "iclr_2022_nK7eZEURiJ4", "iclr_2022_nK7eZEURiJ4", "iclr_2022_nK7eZEURiJ4", "iclr_2022_nK7eZEURiJ4" ]
iclr_2022_LLHwQh9zEb
Permutation invariant graph-to-sequence model for template-free retrosynthesis and reaction prediction
Synthesis planning and reaction outcome prediction are two fundamental problems in computer-aided organic chemistry for which a variety of data-driven approaches have emerged. Natural language approaches that model each problem as a SMILES-to-SMILES translation lead to a simple end-to-end formulation, reduce the need for data preprocessing, and enable the use of well-optimized machine translation model architectures. However, SMILES strings are not an efficient representation for capturing information about molecular structure, as evidenced by the success of SMILES augmentation in boosting empirical performance. Here, we describe a novel Graph2SMILES model that combines the power of Transformer models for text generation with the permutation invariance of molecular graph encoders. As an end-to-end architecture, Graph2SMILES can be used as a drop-in replacement for the Transformer in any task involving molecule(s)-to-molecule(s) transformations. In our encoder, an attention-augmented directed message passing neural network (D-MPNN) captures local chemical environments, and the global attention encoder allows for long-range and intermolecular interactions, enhanced by graph-aware positional embedding. Graph2SMILES improves the top-1 accuracy of the Transformer baselines by $1.7\%$ and $1.9\%$ for reaction outcome prediction on the USPTO_480k and USPTO_STEREO datasets, respectively, and by $9.8\%$ for one-step retrosynthesis on the USPTO_50k dataset.
Reject
While the reviewers appreciated the method's ability to replace transformer models and SMILES data augmentation, their main concerns were with (a) the experimental section, and (b) the technical innovation over prior work, which updated drafts of the paper did not fully resolve. Specifically for (a) this work performs very similarly to prior work: for reaction outcome prediction the proposed method improves top-1/3/5 for USPTO_STEREO_mixed but is outperformed by prior work for top-1/5/10 for USPTO_460k_mixed; for retrosynthesis the model is outperformed for USPTO_full and only outperforms prior work that does not use templates/atom-mapping/augmentation for top-1 on USPTO_50k. The authors argue that their method should be preferred because their method does not require templates, atom-mapping, and data augmentation. The reviewers agree that template-free and atom-mapping-free methods are more widely applicable. However, the benefits of being augmentation-free are not convincingly stated by the authors, who only state that their approach is beneficial by "simplifying data preprocessing and potentially saving training time." The authors should have empirically verified this claim by reporting training time, because it is not obvious that their model, which requires pairwise shortest-path lengths, is actually faster to train. For (b) the reviewers believed that the paper lacked technical novelty given recent work (e.g., NERF). The authors should more clearly distinguish this work from past work (e.g., graphical depictions and finer past-work categorization may help with this). Given the similar performance to prior work, the lack of evidence to support training time claims, and the limited technical novelty, I believe this work should be rejected at this time. Once these things are clarified this paper will be improved.
train
[ "ErWOGnxjj5s", "NWgRYLmj6HE", "4D7jHNuFaP", "dcSxkmMCYsU", "v-1RCv04jeP", "sGJrK1MKh5v", "fcUardElt-F", "9DiqmJhn-Uq", "yD3y7HieAoL", "O-YG8VcnB4V", "a-24ymbKfAp", "4R5bZ-WDRc", "H4Tm2Vflp2g", "MCWJc5o5nXD", "ATLAM_T7hv", "NxPgyy-AxAt", "hNcuqExBE4k", "AXWkgle9wkC" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for the detailed reply. \n\n- **Top-1** \\\nThe aspect with the surrogate model makes sense. The fact that the ML venues focus on top-n is true but also questionable.\n\n- **No Discredits** \\\nI definitely agree that using template-free models would be more interesting; I know about rxn4chemistry and t...
[ -1, 5, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 6, 3 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "4D7jHNuFaP", "iclr_2022_LLHwQh9zEb", "v-1RCv04jeP", "fcUardElt-F", "a-24ymbKfAp", "NxPgyy-AxAt", "9DiqmJhn-Uq", "H4Tm2Vflp2g", "iclr_2022_LLHwQh9zEb", "MCWJc5o5nXD", "4R5bZ-WDRc", "NWgRYLmj6HE", "AXWkgle9wkC", "yD3y7HieAoL", "iclr_2022_LLHwQh9zEb", "hNcuqExBE4k", "iclr_2022_LLHwQh9z...
iclr_2022_-7UeX2KPqs
State-Action Joint Regularized Implicit Policy for Offline Reinforcement Learning
Offline reinforcement learning enables learning from a fixed dataset, without further interactions with the environment. The lack of environmental interactions makes policy training vulnerable to state-action pairs far from the training dataset and prone to missing rewarding actions. For training more effective agents, we propose a framework that supports learning a flexible and well-regularized policy, which consists of a fully implicit policy and a regularization through the state-action visitation frequency induced by the current policy and that induced by the data-collecting behavior policy. We theoretically show the equivalence between policy-matching and state-action-visitation matching, and thus the compatibility of much prior work with our framework. An effective instantiation of our framework through a GAN structure is provided, together with techniques to explicitly smooth the state-action mapping for robust generalization beyond the static dataset. Extensive experiments and an ablation study on the D4RL dataset validate our framework and the effectiveness of our algorithmic designs.
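The two ingredients named in the abstract, a fully implicit (noise-conditioned) policy and a GAN-style regularizer toward the behavior data's (s, a) distribution, can be sketched together. This omits the actor/critic RL losses of the full algorithm; all sizes and names are illustrative.

```python
import torch
import torch.nn as nn

class ImplicitPolicy(nn.Module):
    """Stochastic policy via an input noise variable (no explicit density)."""
    def __init__(self, s_dim, a_dim, z_dim=16, h=256):
        super().__init__()
        self.z_dim = z_dim
        self.net = nn.Sequential(nn.Linear(s_dim + z_dim, h), nn.ReLU(),
                                 nn.Linear(h, a_dim), nn.Tanh())

    def forward(self, s):                              # s: [B, s_dim]
        z = torch.randn(s.shape[0], self.z_dim, device=s.device)
        return self.net(torch.cat([s, z], dim=-1))

def gan_regularizer(disc, s, a_pi, a_data):
    """disc maps [B, s_dim + a_dim] -> [B, 1] logits distinguishing dataset
    (s, a) pairs from policy (s, a) pairs."""
    d_real = torch.sigmoid(disc(torch.cat([s, a_data], -1)))
    d_fake = torch.sigmoid(disc(torch.cat([s, a_pi], -1)))
    d_loss = -(torch.log(d_real + 1e-8) + torch.log(1 - d_fake + 1e-8)).mean()
    g_loss = -torch.log(d_fake + 1e-8).mean()      # non-saturating policy loss
    return d_loss, g_loss
```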
Reject
The authors propose to use implicit policies (similar to a conditional GAN) with a GAN-inspired regularizer. Theoretically, they show an equivalence between policy matching and state-action-visitation matching. Finally, they evaluate their approach on D4RL, showing improved performance as well as ablations. Reviewers did not find the theoretical contribution to be significant: while the exact form may be novel, the general result has been shown in previous work, and the authors only use it as a loose motivation for their approach. All reviewers acknowledge the empirical improvements as the primary strength of the paper. While a central component of the paper's story is joint state-action regularization, Reviewer Ht1b identified that the proposed approach does not appear to directly regularize the joint state-action distribution, but rather behaves more similarly to existing policy-constraint methods. I agree with Reviewer Ht1b, and after much back-and-forth discussion with the authors (by both Reviewer Ht1b and myself), I have not been persuaded otherwise. The paper has a lot of potential given its strong empirical results, but the justification and explanation of the method need to be rewritten in light of the policy-constraint interpretation, or a stronger argument needs to be put forth in support of joint state-action regularization. I don't think this diminishes the results, but without this substantial revision, I cannot accept the paper at this time.
train
[ "D5biUll0ec1", "JV8oWv3lGlx", "SXmxV0Zj4Wb", "3YKan_fFly3", "HtDr84hoR_9", "RgbIWWW_EQ8", "DEiyDaJEOZZ", "RR6DoiMw6Bz", "HE_F3mLlLuB", "ownHX36mGa", "Q_6ZiQ1EXXw", "KRPjd1DSww", "omWpwwyFSrK", "krCVj6pVKkj", "nvMQzaPpgvU", "AJyqKs6w1YW", "YnIlk1met2P", "3Ha3m9uS-uG", "YZAK0uWZdbM...
[ "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author",...
[ " Dear Reviewer Ht1b,\n\nWe appreciate your responding to our rebuttal and updating your review. It appears to us that you have been making the assumption that as approximating the state-action visitation of a target policy is still an ongoing topic in the OPE community, our proposed state-action joint regularizati...
[ -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "HtDr84hoR_9", "SXmxV0Zj4Wb", "DEiyDaJEOZZ", "RgbIWWW_EQ8", "iclr_2022_-7UeX2KPqs", "HE_F3mLlLuB", "iclr_2022_-7UeX2KPqs", "iclr_2022_-7UeX2KPqs", "ownHX36mGa", "Q_6ZiQ1EXXw", "KRPjd1DSww", "omWpwwyFSrK", "krCVj6pVKkj", "AJyqKs6w1YW", "Y3T4J-S0pIJ", "YnIlk1met2P", "3Ha3m9uS-uG", "Y...
iclr_2022_aMaQjwz5IXI
Style Equalization: Unsupervised Learning of Controllable Generative Sequence Models
Controllable generative sequence models with the capability to extract and replicate the style of specific examples enable many applications, including narrating audiobooks in different voices, auto-completing and auto-correcting written handwriting, and generating missing training samples for downstream recognition tasks. However, typical training algorithms for these controllable sequence generative models suffer from the training-inference mismatch, where the same sample is used as content and style input during training but different samples are given during inference. In this paper, we tackle the training-inference mismatch encountered during unsupervised learning of controllable generative sequence models. By introducing a style transformation module that we call style equalization, we enable training using different content and style samples and thereby mitigate the training-inference mismatch. To demonstrate its generality, we apply style equalization to text-to-speech and text-to-handwriting synthesis on three datasets. Our models achieve state-of-the-art style replication, with a mean style opinion score similar to that of real data. Moreover, the proposed method enables style interpolation between sequences and generates novel styles.
Reject
This work aims to improve style transfer in the unsupervised, non-parallel case. It does this by proposing a style equalization approach to prevent content leakage, assuming that content information is time-dependent whereas style information is time-independent. This is an important problem to solve, and lots of prior work in the area exists. The work is well organised with good experimental results. However, the paper makes strong claims, and there is insufficient experimental comparison to similar related work, such as Hsu et al. 2019 and Ma et al. 2018, to back them up. If there is no comparison with the current state of the art (e.g., due to a private implementation or dataset), then it is hard to justify calling a new work a new state of the art. Even though an implementation may be private, it can be worth spending time to reproduce a paper or asking the authors for an implementation. Finally, task and metric selection could be improved to better highlight the performance of the approach. The reviewers thank the authors for the rebuttal, but it was insufficient to change their decision.
test
[ "4dVPAGnzqOY", "6bhqfw7NDd0", "7jhBLKBhfoF", "o7oxOks5K7a", "upSOQBk6hrp", "ZJAY0MKF5Ex", "vYujcHS9Ra6", "mu0f1KrT9i1", "KD_6kHzmB_", "98bVyCbhga", "hJ4o8qGBngA", "cs3dZIIuAq8", "VJA6B4S1MJX" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the reply and evaluation. We have revised the paper according to your comments and added a new discussion in Appendix H. We answer your questions below. \n\n**[1] Strong claims not supported by the experiments... Importantly, the baseline models chosen in the experiments are known to be weak on th...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "7jhBLKBhfoF", "98bVyCbhga", "vYujcHS9Ra6", "mu0f1KrT9i1", "iclr_2022_aMaQjwz5IXI", "VJA6B4S1MJX", "VJA6B4S1MJX", "hJ4o8qGBngA", "iclr_2022_aMaQjwz5IXI", "cs3dZIIuAq8", "iclr_2022_aMaQjwz5IXI", "iclr_2022_aMaQjwz5IXI", "iclr_2022_aMaQjwz5IXI" ]
iclr_2022_POvMvLi91f
DR3: Value-Based Deep Reinforcement Learning Requires Explicit Regularization
Despite overparameterization, deep networks trained via supervised learning are surprisingly easy to optimize and exhibit excellent generalization. One hypothesis to explain this is that overparameterized deep networks enjoy the benefits of implicit regularization induced by stochastic gradient descent, which favors parsimonious solutions that generalize well on test inputs. It is reasonable to surmise that deep reinforcement learning (RL) methods could also benefit from this effect. In this paper, we discuss how the implicit regularization effect of SGD seen in supervised learning could in fact be harmful in the offline deep RL setting, leading to poor generalization and degenerate feature representations. Our theoretical analysis shows that when existing models of implicit regularization are applied to temporal difference learning, the resulting derived regularizer favors degenerate solutions with excessive aliasing, in stark contrast to the supervised learning case. We back up these findings empirically, showing that feature representations learned by a deep network value function trained via bootstrapping can indeed become degenerate, aliasing the representations for state-action pairs that appear on either side of the Bellman backup. To address this issue, we derive the form of this implicit regularizer and, inspired by this derivation, propose a simple and effective explicit regularizer, called DR3, that counteracts the undesirable effects of this implicit regularizer. When combined with existing offline RL methods, DR3 substantially improves performance and stability, alleviating unlearning in Atari 2600 games, D4RL domains and robotic manipulation from images.
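The reviews summarize the mechanism concretely: TD training with bootstrapping implicitly increases the dot product between features of consecutive state-action pairs, and the proposed fix penalizes that dot product. A minimal sketch of such a regularized TD loss follows, with `phi`, `w`, and the coefficient `c` as illustrative placeholders rather than the paper's exact formulation:

```python
import torch

def dr3_td_loss(phi, w, batch, gamma=0.99, c=0.1):
    # Assumed interface: phi(s, a) -> [B, d] penultimate-layer features and
    # w -> [d] last-layer weights, so Q(s, a) = phi(s, a) @ w.
    s, a, r, s2, a2 = batch
    f = phi(s, a)                                  # features of the current pair
    f_next = phi(s2, a2)                           # features of the bootstrapped pair
    q = f @ w
    target = (r + gamma * (f_next @ w)).detach()   # standard TD target
    td_loss = ((q - target) ** 2).mean()
    # Explicit regularizer: penalize feature dot products between consecutive
    # state-action pairs, counteracting the implicit co-adaptation (aliasing).
    reg = (f * f_next).sum(dim=-1).mean()
    return td_loss + c * reg
```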
Accept (Spotlight)
The paper proposes an interesting hypothesis about deep nets' generalization behavior inside RL methods: it suggests that the nets' implicit regularization favors a particular form of degeneracy, in which there is excessive aliasing of state-action pairs that tend to co-occur. It proposes a new regularizer to mitigate this problem. It evaluates the hypothesis and the regularizer empirically, and it provides suggestive derivations to motivate both. The reviewers praised the comprehensive empirical analysis, the insights into learning, and the combination of empirical and theoretical evidence. The authors participated responsively and helpfully in the discussion period, and addressed any concerns raised by the reviewers. This is a strong paper: it derives and motivates a novel hypothesis about an important problem, and analyzes this hypothesis both mathematically and experimentally.
train
[ "r6qTXvcKwp", "g6PzPQu2cVy", "1eMOjLgKyxa", "IBmPoUzOhbt", "yugwlyStxXT", "sGl17AcfiX", "PnNjdlSqc7L", "j471pVvkbdm", "DLmwJeNbx0-", "RbQiZzP1_yR", "2d6JA9zh-pQ", "SmdwVryn_Z", "NTT5pxFWpDh", "RDVbX1hDYcM", "Gi7N8UbOiJD", "bXNydo6a1hv", "lAM4Ih2rvr", "47SF0Acovi-", "jSFtiDyWsV" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_re...
[ "This paper provides empirical and theoretical evidence that value-based\nmethods that optimize TD errors with SGD have an implicit \"regularizer\" that\nincreases the dot-product of the representation at successive states.\nThe theoretical analysis shows that this can be offset by a term that\npenalizes large dot ...
[ 8, -1, 6, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ 4, -1, 3, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2022_POvMvLi91f", "IBmPoUzOhbt", "iclr_2022_POvMvLi91f", "yugwlyStxXT", "sGl17AcfiX", "PnNjdlSqc7L", "SmdwVryn_Z", "iclr_2022_POvMvLi91f", "lAM4Ih2rvr", "47SF0Acovi-", "iclr_2022_POvMvLi91f", "Gi7N8UbOiJD", "1eMOjLgKyxa", "jSFtiDyWsV", "NTT5pxFWpDh", "r6qTXvcKwp", "j471pVvkbdm"...
iclr_2022_pbduKpYzn9j
A Comprehensive Overhaul of Distilling Unconditional GANs
Generative adversarial networks (GANs) have achieved impressive results on various content generation tasks. Yet, their high demands on storage and computation impede their deployment on resource-constrained devices. Though several GAN compression methods have been proposed to address the problem, most of them focus on conditional GANs. In this paper, we provide a comprehensive overhaul of distilling unconditional GANs, especially the popular StyleGAN2 architecture. Our key insight is that the main challenge of unconditional GAN distillation lies in the output discrepancy issue, where the teacher and student models yield different outputs given the same input latent code. Standard knowledge distillation losses typically fail under this heterogeneous distillation scenario. We conduct a thorough analysis of the reasons for and effects of this discrepancy issue, and identify that the style module plays a vital role in determining the semantic information of generated images. Based on this finding, we propose a novel initialization strategy for the student model, which ensures output consistency to the maximum extent. To further enhance the semantic consistency between the teacher and student models, we present another latent-direction-based distillation loss that preserves semantic relations in the latent space. Extensive experiments demonstrate that our framework achieves state-of-the-art results in StyleGAN2 distillation, outperforming existing GAN distillation methods by a large margin.
Reject
The paper proposes a method for compressing unconditional generative models by leveraging a knowledge distillation framework. Two reviewers consider the paper slightly above the acceptance threshold for the interesting topic studied and the simplicity of the method. However, the other three reviewers consider the paper below the acceptance threshold, with two rating it slightly below and one rating it as not good enough. Several issues were raised, including that the paper only contains results for one unconditional model (StyleGAN2) and that the presented results are not convincing enough. Consolidating the reviews and the rebuttal, the meta-reviewer found the concerns raised by the reviewers justified. It would be more convincing if the paper presented results on different unconditional models and more datasets. The authors are encouraged to incorporate the reviewers' feedback to make the paper stronger for a future venue.
train
[ "sjBb2F7j_QO", "F6_Y-pxUiQE", "-r6xfDFKn6", "m9a9rys2O5", "si_V6U2UDw", "urlfYzHJjnt", "9GTOMuHKMJ9", "f1tld3TKLLF", "5N4EfxQ-kad", "rPt2eSUs6md" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "Typical knowledge distillation loss/pipeline failed for unconditional GAN distillation, due to the output discrepancy between teacher and student model even if same inputs are fed. This paper proposed a framework for distilling unconditional GANs. The framework mainly contains two parts: 1). Inherit the weights fr...
[ 6, 5, 6, 5, -1, -1, -1, -1, -1, 3 ]
[ 4, 4, 3, 5, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2022_pbduKpYzn9j", "iclr_2022_pbduKpYzn9j", "iclr_2022_pbduKpYzn9j", "iclr_2022_pbduKpYzn9j", "-r6xfDFKn6", "m9a9rys2O5", "F6_Y-pxUiQE", "rPt2eSUs6md", "sjBb2F7j_QO", "iclr_2022_pbduKpYzn9j" ]
iclr_2022_XuS18b_H0DW
Tactics on Refining Decision Boundary for Improving Certification-based Robust Training
In verification-based robust training, existing methods utilize relaxation-based techniques to bound the worst-case performance of neural networks under a given perturbation. However, these certification-based methods treat all examples equally, regardless of their vulnerability and the true adversarial distribution, limiting the model's potential to achieve optimal verifiable accuracy. In this paper, we propose two new techniques: a customized weight distribution over examples and automatic tuning of the perturbation schedule. These methods are generally applicable to all verification-based robust training with almost no additional computational cost. Our results show improvements on MNIST with $\epsilon = 0.3$ and on CIFAR with $\epsilon = 8/255$ for both IBP- and CROWN-IBP-based methods.
Reject
The authors develop an approach to improve methods for training certifiably robust models. They propose an input-dependent, margin-based weighting and an automatically generated curriculum schedule, and demonstrate improvements when training certifiably robust models on MNIST and CIFAR-10. Reviewers agree that the paper makes interesting contributions. However, the limited novelty of the approach combined with the limited empirical gains makes it difficult to justify acceptance. In particular, reviewers raise valid concerns about the quality of the experimental comparisons to prior work, in particular CROWN-IBP (Zhang et al., 2020) and COLT (Balunovic & Vechev, 2020): hyperparameter tuning, the inability to recreate baseline results, and unjustified claims that the prior art cannot run on GPU hardware. Further, even the gains demonstrated are marginal. Hence, I recommend rejection, but encourage the authors to revise the paper based on the feedback received.
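To make the weighting idea above concrete, here is a minimal sketch of an input-dependent, margin-based example weighting for certified training; the softmax form, the temperature `tau`, and all names are illustrative placeholders rather than the paper's exact scheme:

```python
import torch

def margin_weights(certified_margins, tau=1.0):
    """Weight examples by verified margin: smaller margin -> harder -> heavier.

    `certified_margins` holds a per-example lower bound on the logit margin
    (e.g., obtained from IBP bounds). Weights are normalized to mean ~1 and
    detached so they act as fixed per-example coefficients.
    """
    w = torch.softmax(-certified_margins / tau, dim=0)
    return (w * certified_margins.numel()).detach()

# Usage inside a certified-training loop (per_example_robust_loss: [B]):
# loss = (margin_weights(margins) * per_example_robust_loss).mean()
```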
train
[ "fOCCyW8nXtB", "chzxzkXC2DN", "ScLv8m_zphX", "2qJcrnXoRdy", "ieX0Qb72aM4", "hVtYA5lMKE7", "yQdZgPUPP--", "uo7WJTZIOOr", "lSL_5avdFfz", "jtY-3blUCFc", "_d88wlpRIPB", "_RkFUdPt4d", "quCgOHz3ZVO", "m6d6QNWmSGQ", "LPSYX94_Oe7", "Jd2pur1__VP", "q8Up9FavhUf", "wEw5tIIaWHc", "7wjzXqnepa...
[ "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_re...
[ " Dear reviewer:\n\nSorry for the late reply on updated results, we did our best to finish a set of results following your suggestions. \n\nWe tried elevating the batch size from 256 to 1024 for our CROWN-IBP experiments with $\\epsilon_{test}=8/255$, we achieve $66.08\\pm0.38\\\\%$ as robust accuracy. In our run, ...
[ -1, -1, -1, -1, -1, 5, -1, 8, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ -1, -1, -1, -1, -1, 5, -1, 5, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "ScLv8m_zphX", "ScLv8m_zphX", "jtY-3blUCFc", "yQdZgPUPP--", "lSL_5avdFfz", "iclr_2022_XuS18b_H0DW", "_RkFUdPt4d", "iclr_2022_XuS18b_H0DW", "RIDiQIK6ax", "_d88wlpRIPB", "m6d6QNWmSGQ", "KMpJWdhmMyl", "RIDiQIK6ax", "4bXzHBviT_d", "q8Up9FavhUf", "iclr_2022_XuS18b_H0DW", "wEw5tIIaWHc", ...
iclr_2022_HMR-7-4-Zr
Contractive error feedback for gradient compression
On-device memory concerns in distributed deep learning are becoming more severe due to i) the growth of model size in multi-GPU training, and ii) the adoption of neural networks for federated learning on IoT devices with limited storage. In such settings, this work addresses the memory issues that emerge with communication-efficient methods. To tackle the associated challenges, our key advances are that i) in contrast to EFSGD, which manages memory inefficiently, a sweet spot between convergence and memory usage can be attained via what we term contractive error feedback (ConEF); and ii) communication efficiency in ConEF is achieved through biased and allreduce-compatible gradient compression. ConEF is validated on various learning tasks, including image classification, language modeling, and machine translation. ConEF saves 80% – 90% of the extra memory in EFSGD with almost no loss in test performance, while also achieving a 1.3x – 5x speedup over SGD.
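The extra memory the abstract refers to is the full-precision residual buffer kept by error-feedback SGD. Below is a minimal sketch of error feedback in which the residual itself is also compressed, with top-k as a stand-in compressor; this is illustrative, not the exact ConEF update:

```python
import numpy as np

def topk(v, k):
    """Keep the k largest-magnitude entries: a biased, contractive compressor."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def ef_step(grad, error, k_grad, k_err):
    """One error-feedback step with a *compressed* residual buffer.

    Standard EFSGD stores `error` at full precision; compressing the residual
    (k_err << grad.size) trades a little accuracy for on-device memory.
    """
    corrected = grad + error                 # re-inject previously dropped signal
    sent = topk(corrected, k_grad)           # communicated (biased) gradient
    error = topk(corrected - sent, k_err)    # keep only a compressed residual
    return sent, error
```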
Reject
The reviewers initially struggled to position this contribution in terms of usefulness. During the discussion phase, it became (more) clear that the proposed method is best used to reduce the communication overhead of ZeRO3. While the integration of this work and ZeRO hasn't been attempted yet, the authors claim that this work "clears the theoretical barrier". From that point of view, the reviewers were not satisfied with the guarantees of the method, arguing that the resulting algorithm is slower than standard EF and could suffer in terms of runtime (when one factors in the cost of compression) even when compared to standard uncompressed SGD. Overall, the discussion greatly improved the paper, although directly integrating ConEF with ZeRO could be even more convincing.
train
[ "QMK29_dDzZJ", "43n2Alv3X9B", "NNwAJMtytVQ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper introduces a two-stage error correction procedure for approximated (doubly compressed) gradients. Multiple theorems are derived to prove convergence and scaling properties. The algorithm can be combined with any compression algorithm for the gradient. The algorithm is applied to CIFAR-10 with results sho...
[ 3, 3, 5 ]
[ 3, 4, 4 ]
[ "iclr_2022_HMR-7-4-Zr", "iclr_2022_HMR-7-4-Zr", "iclr_2022_HMR-7-4-Zr" ]
iclr_2022_NQrx8EYMboO
Task-Agnostic Graph Neural Explanations
Graph Neural Networks (GNNs) have emerged as powerful tools to encode graph-structured data. Due to their broad applications, there is an increasing need to develop tools to explain how GNNs make decisions given graph-structured data. Existing learning-based GNN explanation approaches are task-specific in training and hence suffer from crucial drawbacks. Specifically, they are incapable of producing explanations for a multitask prediction model with a single explainer. They are also unable to provide explanations in cases where the GNN is trained in a self-supervised manner and the resulting representations are used in future downstream tasks. To address these limitations, we propose a Task-Agnostic Graph Neural Explainer (TAGE) trained under self-supervision without knowledge about downstream tasks. TAGE enables the explanation of GNN embedding models without downstream tasks and allows efficient explanation of multitask models. Our extensive experiments show that TAGE can significantly speed up the explanation efficiency while achieving explanation quality as good as or even better than current state-of-the-art GNN explanation approaches.
Reject
Summary of the paper and the reviews: The authors propose a method to explain GNN predictions in a task-agnostic setting, meaning that the method can be applied to a new downstream task without fine-tuning. The task is formulated as predicting the important subgraph given the input graph and the ground-truth label. The learning algorithm optimizes the mutual information between the embedding of the subgraph and that of the original graph. The experiments show quantitative improvement measured by the fidelity score, qualitative visualization of highlighted subgraphs, and a comparison of cost against the baseline GNN explainer models. Strengths: 1) The task-agnostic setting is novel. 2) The proposed method improves the fidelity score at a reduced training cost relative to the baseline models. Weaknesses: 1) The proposed objective requires additional justification. To optimize the intractable mutual information objective, the authors adopt JSE and InfoNCE as tractable estimators of mutual information, but the negative sampling technique in the proposed method is not fully justified. 2) During training, the authors simulate the task-specific importance vector by sampling masks from a Laplace distribution; during testing, the importance vector is obtained by a gradient-based approach. Further analysis is needed to quantify the effect of this training/testing discrepancy. 3) In the empirical experiments, the proposed task-agnostic model outperforms the task-specific baselines. Why should such an outcome happen? The reason requires additional analysis that is not provided in the current paper. Moreover, the qualitative results on a few examples are not sufficiently convincing given the reported empirical success. Summary of the discussions and the decision by reviewers: One reviewer asked for a justification of the negative sampling approaches used to approximate the mutual information objective. While the authors described their implementation design in their rebuttal, the theoretical justification of the method remained insufficient. Two reviewers raised the question of how randomly sampling importance masks during training could affect downstream task performance, which was not fully addressed in the rebuttal. Other than that, the experimental concerns about new baselines and datasets were well addressed by the authors. Recommendation: The paper received borderline review scores (5, 5, 5, 6). Although the authors addressed some of the concerns about the experimental design in their rebuttal and added important baselines, more convincing justifications/analysis for the proposed method are still missing. Therefore, the reviewers did not raise their scores. Based on the above concerns, the recommendation is to reject.
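For reference, the InfoNCE estimator mentioned above is a standard contrastive objective; a generic sketch between subgraph and full-graph embeddings, with the other graphs in a batch serving as negatives. This is illustrative and not necessarily TAGE's exact loss:

```python
import torch
import torch.nn.functional as F

def infonce(z_sub, z_full, tau=0.2):
    # z_sub, z_full: [B, d] embeddings of subgraphs and their source graphs.
    # Row i of z_full is the positive for row i of z_sub; all other rows
    # in the batch act as negatives.
    z_sub = F.normalize(z_sub, dim=-1)
    z_full = F.normalize(z_full, dim=-1)
    logits = z_sub @ z_full.t() / tau                     # [B, B] similarities
    labels = torch.arange(z_sub.size(0), device=z_sub.device)
    return F.cross_entropy(logits, labels)
```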
train
[ "4dhaUA7VaYw", "xHmkizDGHsK", "8J4nCM7TjWI", "fRQCnPPWdpC", "R0uOOhTghn6", "WNp8XULZXa", "IbpxivvNwnI", "Ei_dszJP2r", "gUjdptwLpH", "NfEhDe2kgFs", "QdiinKjccr_", "M1QZovP6xlA", "38CePqS0N8q", "b-nIy5WPi8l", "h4z8ZLujRKL", "csX3yCizs5E", "9OGXJ8H6b4V" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewers,\n\nSince the discussion period will end soon, could you kindly check our response and revision? We believe we have addressed all of your concerns and are looking forward to hearing from you.\n\nThank you!", " Dear reviewer y6T6,\n\nThank you again for your valuable comments! We believe we have a...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 5, 3 ]
[ "iclr_2022_NQrx8EYMboO", "h4z8ZLujRKL", "csX3yCizs5E", "9OGXJ8H6b4V", "b-nIy5WPi8l", "iclr_2022_NQrx8EYMboO", "h4z8ZLujRKL", "9OGXJ8H6b4V", "csX3yCizs5E", "h4z8ZLujRKL", "h4z8ZLujRKL", "h4z8ZLujRKL", "b-nIy5WPi8l", "iclr_2022_NQrx8EYMboO", "iclr_2022_NQrx8EYMboO", "iclr_2022_NQrx8EYMbo...
iclr_2022_O0g6uPDLW7
On the Adversarial Robustness of Vision Transformers
Following the success in advancing natural language processing and understanding, transformers are expected to bring revolutionary changes to computer vision. This work provides the first comprehensive study on the robustness of vision transformers (ViTs) against adversarial perturbations. Tested in various white-box and transfer attack settings, we find that ViTs possess better adversarial robustness when compared with convolutional neural networks (CNNs). This observation also holds for certified robustness. We summarize the following main observations contributing to the improved robustness of ViTs: 1) Features learned by ViTs contain less low-level information and are more generalizable, which contributes to superior robustness against adversarial perturbations. 2) Introducing convolutional or tokens-to-token blocks for learning low-level features in ViTs can improve classification accuracy but at the cost of adversarial robustness. 3) Increasing the proportion of transformers in the model structure (when the model consists of both transformer and CNN blocks) leads to better robustness. But for a pure transformer model, simply increasing the size or adding layers cannot guarantee a similar effect. 4) Pre-training on larger datasets does not significantly improve adversarial robustness, though it is critical for training ViTs. 5) Adversarial training is also applicable to ViTs for training robust models. Furthermore, feature visualization and frequency analysis are conducted for explanation. The results show that ViTs are less sensitive to high-frequency perturbations than CNNs and there is a high correlation between how well the model learns low-level features and its robustness against different frequency-based perturbations.
Reject
The paper studies the adversarial robustness of vision transformers. The authors conclude that vision transformers are generally more adversarially robust than convolutional neural networks. Several interesting empirical conclusions are made about the robustness properties of vision transformers, and extensive experiments are conducted. Overall, the paper is well written, well organized, and interesting. However, there are some concerns about the current version. (1) Several concurrent works with similar empirical findings have already been formally published, which weakens the paper's interest to readers. (2) The reviewers suggest that the authors use the insights from the paper to design more robust and effective vision transformers. The four reviewers have unanimous recommendations below the acceptance threshold. We therefore cannot recommend acceptance. However, we believe that by taking the comments into account, the next version could be a very strong paper.
train
[ "wJ7pX7QPbx", "zOz-PNc_Wx7", "3_H3wu1T03f", "ArQsefoDA6N", "XuAmiQsWbb1", "hkIY_I_vI6W", "mEKeV5bXil6", "mLWg0I46z1_", "X5Kikyyh2Di", "waP7MwDAkEV", "uEr8N6esO1e", "ao5P29SMhx7", "oz19oZ6YX9", "8NgCQo-lLjc", "Bhjv6dKtsk", "0VNoCI9NDB", "VnE9RbpAr5Q", "569wz5mdTPK", "uI0DvmaICvC" ...
[ "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the valuable suggestions from reviewers. A new version of our submission has been uploaded with the changes marked in blue. There are three major changes: \n1. Some expressions were revised with more detailed explanations to avoid confusion, e.g. we added ​​the qualification of “without adversarial trai...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 3, 4 ]
[ "iclr_2022_O0g6uPDLW7", "3_H3wu1T03f", "VnE9RbpAr5Q", "569wz5mdTPK", "VnE9RbpAr5Q", "mEKeV5bXil6", "Bhjv6dKtsk", "X5Kikyyh2Di", "waP7MwDAkEV", "uI0DvmaICvC", "uI0DvmaICvC", "569wz5mdTPK", "569wz5mdTPK", "VnE9RbpAr5Q", "0VNoCI9NDB", "iclr_2022_O0g6uPDLW7", "iclr_2022_O0g6uPDLW7", "i...
iclr_2022_fuYtttFI-By
Programmable 3D snapshot microscopy with Fourier convolutional networks
3D snapshot microscopy enables fast volumetric imaging by capturing a 3D volume in a single 2D camera image and performing computational reconstruction. Fast volumetric imaging has a variety of biological applications such as whole brain imaging of rapid neural activity in larval zebrafish. The optimal microscope design for this optical 3D-to-2D encoding is both sample- and task-dependent, with no general solution known. Deep learning based decoders can be combined with a differentiable simulation of an optical encoder for end-to-end optimization of both the deep learning decoder and optical encoder. This technique has been used to engineer local optical encoders for other problems such as depth estimation, 3D particle localization, and lensless photography. However, 3D snapshot microscopy is known to require a highly non-local optical encoder which existing UNet-based decoders are not able to engineer. We show that a neural network architecture based on global kernel Fourier convolutional neural networks can efficiently decode information from multiple depths in a volume, globally encoded across a 3D snapshot image. We show in simulation that our proposed networks succeed in engineering and reconstructing optical encoders for 3D snapshot microscopy where the existing state-of-the-art UNet architecture fails. We also show that our networks outperform the state-of-the-art learned reconstruction algorithms for a computational photography dataset collected on a prototype lensless camera which also uses a highly non-local optical encoding.
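The key building block named in the abstract, a convolution whose kernel is as large as the input, has a standard realization in Fourier space via the convolution theorem: an image-sized kernel becomes a pointwise product after an FFT, giving a whole-image receptive field in a single layer. A minimal sketch follows; the shapes and the parameterization of the learnable kernel are assumptions, not the paper's exact architecture:

```python
import torch

class GlobalFourierConv2d(torch.nn.Module):
    """Convolution with a global (image-sized) kernel, computed via FFT."""

    def __init__(self, channels, height, width):
        super().__init__()
        # Learnable complex-valued global kernel in the rFFT domain.
        self.kernel = torch.nn.Parameter(
            torch.randn(channels, height, width // 2 + 1, dtype=torch.cfloat) * 0.02
        )

    def forward(self, x):                          # x: [B, C, H, W]
        xf = torch.fft.rfft2(x)                    # to Fourier space
        yf = xf * self.kernel                      # global conv = pointwise product
        return torch.fft.irfft2(yf, s=x.shape[-2:])
```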
Reject
The paper proposes an application of CNNs to the microscopy problem of reconstructing 3D volumes from 2D captured images. The four reviewers thought the paper was a straightforward application of existing techniques to a new problem; while they were borderline toward acceptance, the overall sentiment was that the technical novelty is very low from an ML perspective and that the ICLR community would find, at most, the application of interest. (Two reviewers changed from borderline reject, while the other two chose not to change their scores following the authors' request.)
train
[ "PTejv45s2fn", "m99IFV2zDo4", "pxiXW1BN4Ku", "A8GZRrr9uIv", "s6RAooELydv", "AelJP0FK22S", "fpkaXAfzEty", "PxKHCa6UVq", "J8kiuKjLBsr", "AhcWdL3TEqJ", "r9lcbJFyvDX", "c-jB4VzHtJU" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper introduces an end-to-end optical encoder and deep learning decoder optimization for 3D snapshot microscopy. The main challenge is that the decoder needs to be able to handle the global PSFs used in the optical encoder in 3D snapshot microscope. This is difficult to be achieved in conventional UNet archi...
[ 6, -1, -1, -1, 6, -1, -1, -1, -1, -1, 6, 6 ]
[ 3, -1, -1, -1, 3, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2022_fuYtttFI-By", "J8kiuKjLBsr", "fpkaXAfzEty", "AelJP0FK22S", "iclr_2022_fuYtttFI-By", "AhcWdL3TEqJ", "r9lcbJFyvDX", "c-jB4VzHtJU", "PTejv45s2fn", "s6RAooELydv", "iclr_2022_fuYtttFI-By", "iclr_2022_fuYtttFI-By" ]
iclr_2022_Da3ZcbjRWy
Self-Supervised Representation Learning via Latent Graph Prediction
Self-supervised learning (SSL) of graph neural networks is emerging as a promising way of leveraging unlabeled data. Currently, most methods are based on contrastive learning adapted from the image domain, which requires view generation and a sufficient number of negative samples. In contrast, existing predictive models do not require negative sampling, but lack theoretical guidance on the design of pretext training tasks. In this work, we propose LaGraph, a theoretically grounded predictive SSL framework based on latent graph prediction. The learning objectives of LaGraph are derived as self-supervised upper bounds on objectives for predicting unobserved latent graphs. In addition to its improved performance, LaGraph provides explanations for recent successes of predictive models that include invariance-based objectives. We provide theoretical analysis comparing LaGraph to related methods in different domains. Our experimental results demonstrate the superiority of LaGraph in performance and its robustness to decreasing training sample size on both graph-level and node-level tasks.
Reject
This paper studies self-supervised learning for graph neural networks by proposing a framework called LaGraph. Both theoretical analysis and experimental evaluation are provided in the paper. We acknowledge the merits of this paper, which include studying a relatively less explored topic, providing theoretical analysis and comparison with other methods, and requiring less memory than a strong baseline. On the other hand, outstanding concerns remain (even after the discussions) regarding the novelty and significance of the proposed method (despite the authors' claims during the discussions), whether the performance improvements over strong baselines are significant across different datasets, and the lack of a more comprehensive ablation study (beyond the preliminary results provided during the discussion period), among others. In its current form, this is certainly a borderline paper for a top conference such as ICLR. It would be a better paper if the outstanding concerns could also be addressed before publication.
train
[ "6QDsRn4FQTz", "rWw3YV37xPc", "MnM9LBQ3FC8", "UPr8L25LIP0", "zVN-loNmJ6", "x8eDQCpIOJ0", "FXMx838SNGJ", "P8GrUDefFet", "6j9PtGnsCA6", "nYcqNkdmYfJ", "m0AnUMDeKl", "MlXpC269cqs", "y10bYk4Cl4T", "CpZn39Ju4Jl", "w3wNyckt6cl", "udCML6EbS3", "MWdtcy7N9LF", "SKNWP3Hy1a", "CfUvRNDR2lT",...
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author",...
[ " Dear reviewer,\n\nThank you for your further response. Regarding the novelty, we argue that LaGraph and existing methods are totally different methods and have substantial differences summarized below:\n- Theoretically, LaGraph and existing methods are **derived from different and independent theoretical groundin...
[ -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 5 ]
[ -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "zVN-loNmJ6", "MnM9LBQ3FC8", "UPr8L25LIP0", "iclr_2022_Da3ZcbjRWy", "P8GrUDefFet", "78lf5wePyJ", "78lf5wePyJ", "Fts9VfABN0D", "KLFEa6vdFHz", "MlXpC269cqs", "MlXpC269cqs", "CpZn39Ju4Jl", "iclr_2022_Da3ZcbjRWy", "UPr8L25LIP0", "udCML6EbS3", "xFmeE7i0tq4", "78lf5wePyJ", "78lf5wePyJ", ...
iclr_2022_rhDaUTtfsqs
Curriculum Learning: A Regularization Method for Efficient and Stable Billion-Scale GPT Model Pre-Training
Recent works have demonstrated great success in training high-capacity autoregressive language models (GPT, GPT-2, GPT-3) on huge amounts of unlabeled text for text generation. Despite showing great results, autoregressive models face a growing training instability issue. Our study on GPT-2 models (117M and 1.5B parameters) shows that larger model sizes, sequence lengths, batch sizes, and learning rates lead to lower training stability and increasing divergence risk. To avoid divergence and achieve better generalization performance, one has to train with smaller batch sizes and learning rates, which leads to worse training efficiency and longer training time. To overcome this stability-efficiency dilemma, we present a study of a curriculum learning-based approach, which helps improve the pre-training convergence speed of autoregressive models. More importantly, we find that curriculum learning, as a regularization method, exerts a gradient variance reduction effect and enables training autoregressive models with much larger batch sizes and learning rates without training instability, further improving the training speed. Our evaluations demonstrate that curriculum learning enables training GPT-2 models with 8x larger batch size and 4x larger learning rate, whereas the baseline approach struggles with training divergence. To achieve the same validation perplexity targets during pre-training, curriculum learning reduces the required number of tokens and wall-clock time by up to 61% and 49%, respectively. To achieve the same or better zero-shot WikiText-103/LAMBADA evaluation results at the end of pre-training, curriculum learning reduces the required number of tokens and wall-clock time by up to 54% and 70%, respectively.
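The curriculum here, as the meta-review below also summarizes, grows the sequence length of training examples over the course of training. A minimal sketch of such a schedule; all constants are placeholders, not the paper's settings:

```python
def seq_len_schedule(step, start_len=64, full_len=1024, warmup_steps=10000, multiple=8):
    """Sequence-length curriculum: grow linearly from short to full sequences.

    Returns the sequence length to use at `step`, rounded down to a multiple
    of `multiple` to keep lengths hardware-friendly.
    """
    frac = min(1.0, step / warmup_steps)
    cur = int(start_len + frac * (full_len - start_len))
    cur = (cur // multiple) * multiple
    return max(start_len, min(cur, full_len))

# Usage: truncate each batch to the scheduled length before the forward pass.
# batch = batch[:, :seq_len_schedule(global_step)]
```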
Reject
This submission proposes a simple way to improve the stability of training GPT-2: increase the sequence length of examples over the course of training. It is shown that this simple heuristic allows the use of larger learning rates, thereby significantly speeding up convergence. Reviewers agreed that this is a simple and effective approach, but shared various concerns about the paper: - The paper focuses on GPT-2, while stability issues can arise in a much wider range of models. Additional experiments with other models (and ideally other codebases/training setups) would help verify that the proposed method is broadly applicable. - A better analysis of why sequence length is the right difficulty metric would be helpful. What other criteria would be possible? Why is sequence length the best? I would suggest that the authors significantly expand the submission based on the above suggestions and resubmit.
train
[ "-kzeKZtKCN0", "k7kP-fHHkBW", "Nd0PXlECiv", "8aU5q7OulPy", "tVLY6wCW1-x", "zS9ShgtUHY", "uFaDiIgwTeZ", "K1EvI2kMaE", "CtQfCm3EVed", "fmKt64Aaa65", "kxCMC_9VEUT", "rW0y43vm-1g", "eDfNabIKir2", "pSUpW3rC54G", "NQyVGLz7AFl", "OWhWWKRuCsY", "k2xc4yD6LPA", "l0Oqxs-08Kb" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for your reply and below are our replies to the comments.\n\n<Comment 1> \"What causes the training instabilities (spikes). Appendix A.5 briefly suggests that the spikes might be attributed to the longer sequences. This analysis should be expanded to describe exactly what is happening to the optimizatio...
[ -1, -1, -1, 5, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5 ]
[ -1, -1, -1, 3, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5 ]
[ "k7kP-fHHkBW", "OWhWWKRuCsY", "tVLY6wCW1-x", "iclr_2022_rhDaUTtfsqs", "8aU5q7OulPy", "pSUpW3rC54G", "iclr_2022_rhDaUTtfsqs", "CtQfCm3EVed", "eDfNabIKir2", "8aU5q7OulPy", "8aU5q7OulPy", "8aU5q7OulPy", "l0Oqxs-08Kb", "uFaDiIgwTeZ", "uFaDiIgwTeZ", "k2xc4yD6LPA", "iclr_2022_rhDaUTtfsqs",...
iclr_2022__ixHFNR-FZ
Adversarially Robust Models may not Transfer Better: Sufficient Conditions for Domain Transferability from the View of Regularization
Machine learning (ML) robustness and generalization are fundamentally correlated: they essentially concern data distribution shift under adversarial and natural settings, respectively. Thus, it is critical to uncover their underlying connections so that one can be tackled based on the other. On the one hand, recent studies show that more robust (adversarially trained) models are more generalizable to other domains. On the other hand, there is a lack of theoretical understanding of this phenomenon, and it is not clear whether there are counterexamples. In this paper, we aim to provide sufficient conditions for this phenomenon, considering different factors that could affect both, such as the norm of the last layer, the Jacobian norm, and data augmentations (DA). In particular, we propose a general theoretical framework indicating factors that can be reformulated as a function-class regularization process, which could lead to improved domain generalization. Our analysis, for the first time, shows that ``robustness" is actually not the cause of domain generalization; rather, robustness induced by adversarial training is a by-product of such function-class regularization. We then discuss in detail different properties of DA, and we prove that under certain conditions, DA can be viewed as regularization and therefore improves generalization. We conduct extensive experiments to verify our theoretical findings and show several counterexamples where robustness and generalization are negatively correlated when the sufficient conditions are not satisfied.
Reject
The paper tries to analyze the relationship between regularization, adversarial robustness, and transferability. Pros: - An interesting problem was tackled. Cons: - The main claim (Prop. 3.1) is almost trivial. Prop. 3.1 shows that "relative" transferability is smaller for stronger regularization, which is just a slight generalization of the triangle inequality ||YT - YS|| <= ||YT - Y|| + ||YS - Y|| for any Y in Fig. 2. - Experiments show a negative correlation between the relative transferability and accuracy, which is trivial: large regularization degrades the accuracy, which increases the "relative transferability". "Absolute" transferability in the Appendix doesn't show clear negative correlations. - Salman et al. claimed that adversarially "trained" models transfer better, and did not claim that there are positive correlations between transferability and robustness for general classifiers without adversarial training. So the finding in this paper is neither surprising nor in conflict with Salman et al. To prove that adversarial robustness is just a by-product of regularization, the authors should show that the "absolute" transferability of an adversarially trained classifier can be achieved by other regularization. Defining relative transferability is fine if it is just a decomposition used to analyze absolute transferability. But no conclusion on performance should be drawn from its analysis alone, because a trivial correlation will appear, i.e., (A-B) and B should be negatively correlated unless A strongly correlates with B. Also, this is highly misleading, so much so that some reviewers seem to have misunderstood the original submission as claiming that negative correlations between regularization and absolute transferability were observed. Overall, the paper requires major revision.
train
[ "vCPDjKQIOUn", "12SSE3p9wIy", "Rq1MJczyHwc", "LhduYKXlKeg", "c2icHwiz4YE", "d6dhaTu_H5D", "17mZtFmNE_S", "mL0AVcgq6KE", "10yPRw4NZW0", "tg_G2jSM_Mn" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " The additional results looks good. I have no more major concerns. \nBut other reviewers seem have many comments. Please discuss with other reviewers. \nI set my rating as 6. ", "This paper discusses a very interesting question: what is the relationship between adversarial robustness and cross-domain transferabi...
[ -1, 6, 3, -1, -1, -1, -1, -1, 3, 5 ]
[ -1, 4, 3, -1, -1, -1, -1, -1, 4, 3 ]
[ "mL0AVcgq6KE", "iclr_2022__ixHFNR-FZ", "iclr_2022__ixHFNR-FZ", "Rq1MJczyHwc", "LhduYKXlKeg", "tg_G2jSM_Mn", "10yPRw4NZW0", "12SSE3p9wIy", "iclr_2022__ixHFNR-FZ", "iclr_2022__ixHFNR-FZ" ]
iclr_2022__gZ8dG4vOr9
Pruning Compact ConvNets For Efficient Inference
Neural network pruning is frequently used to compress over-parameterized networks by large amounts, while incurring only marginal drops in generalization performance. However, the impact of pruning on networks that have been highly optimized for efficient inference has not received the same level of attention. In this paper, we analyze the effect of pruning for computer vision and study the state-of-the-art FBNetV3 family of models. We show that model pruning approaches can be used to further optimize networks trained through NAS (Neural Architecture Search). The resulting family of pruned models can consistently obtain better performance than existing FBNetV3 models at the same level of computation, and thus provide state-of-the-art results when trading off between computational complexity and generalization performance on the ImageNet benchmark. In addition to better generalization performance, we also demonstrate that when limited computation resources are available, pruning FBNetV3 models incurs only a fraction of the GPU-hours involved in running a full-scale NAS.
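As the reviews note, the paper applies existing pruning techniques rather than proposing a new one. For reference, the simplest such baseline, global magnitude pruning, is sketched below; the sparsity level and the in-place masking are illustrative, not the authors' exact procedure:

```python
import torch

def global_magnitude_prune(model, sparsity=0.5):
    """Zero out the smallest-magnitude weights across all weight matrices."""
    scores = torch.cat([p.detach().abs().flatten()
                        for p in model.parameters() if p.dim() > 1])
    k = int(sparsity * scores.numel())
    if k == 0:
        return
    threshold = scores.kthvalue(k).values      # k-th smallest magnitude
    with torch.no_grad():
        for p in model.parameters():
            if p.dim() > 1:
                p.mul_((p.abs() > threshold).float())
```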
Reject
This paper presents an empirical study showing that pruning an FBNet with larger capacity results in a model with higher accuracy than one searched via neural architecture search. Below are the pros and cons of the paper mentioned by the reviewers: Pros - The observation that optimized architectures such as FBNets can benefit from pruning is interesting. - The paper is well written and easy to follow. Cons - It is well known that training a larger model and then pruning it will yield a better-performing model than training a smaller model from scratch. - The authors do not propose a novel pruning technique for optimized CNN architectures, and use existing pruning techniques for all experiments. - The experimental validation is only done with FBNets on ImageNet, and it does not show when pruning starts to break down. All reviewers unanimously voted for rejection, especially since the main "findings" of this paper, that compact architectures can be further pruned down for an improved accuracy/efficiency tradeoff and that pruning a larger compact model results in models that outperform smaller models trained from scratch, have already been shown in many previous works on neural network pruning. In fact, compact networks such as MobileNets and EfficientNets are the standard architectures for measuring the effectiveness of pruning techniques, and thus the contribution of this work reduces to showing that the same results can be obtained with FBNets. This could be of interest to some practitioners, but is definitely not sufficient to warrant publication.
train
[ "lZZEBzphCJ", "JRmV79wXBF4", "DxtjR0j6tf0", "30OJhEsf-dT", "DzKQDlaUcZu", "VSu_SP6SVys", "p790pHodnnw", "H_ipPyLeLpd", "UmeS_ecRxlJ", "U0p0aP6qjQ" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for their response. It is obvious that using fewer GPU hours directly translates into a lower cost. I am still wondering if what we obtain after the shorter time (a large but sparse model) is preferable to what we obtain after the longer time (a small but dense model). I am also inclined with ...
[ -1, -1, -1, -1, -1, -1, 3, 1, 3, 3 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "DxtjR0j6tf0", "30OJhEsf-dT", "U0p0aP6qjQ", "H_ipPyLeLpd", "p790pHodnnw", "UmeS_ecRxlJ", "iclr_2022__gZ8dG4vOr9", "iclr_2022__gZ8dG4vOr9", "iclr_2022__gZ8dG4vOr9", "iclr_2022__gZ8dG4vOr9" ]
iclr_2022_R9Ht8RZK3qY
FED-$\chi^2$: Secure Federated Correlation Test
In this paper, we propose the first secure federated $\chi^2$-test protocol, FED-$\chi^2$. We recast the $\chi^2$-test as a second-moment estimation problem and use stable projections to encode the local information in a short vector. Because such encodings can be aggregated by summation, secure aggregation can be applied seamlessly to conceal the individual updates. We formally establish the security guarantee of FED-$\chi^2$ by demonstrating that the joint distribution is hidden in a subspace containing exponentially many possible distributions. Our evaluation results show that FED-$\chi^2$ achieves good accuracy with small client-side computation overhead. FED-$\chi^2$ performs comparably to the centralized $\chi^2$-test in several real-world case studies. The code for evaluation is in the supplementary material.
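An AMS-style ±1 random projection is one standard way to estimate a second moment from short, summable encodings; the sketch below is an illustrative stand-in for the stable projection the abstract names, not the actual FED-$\chi^2$ protocol (which additionally employs secure aggregation):

```python
import numpy as np

def make_projection(dim, sketch_len, seed=0):
    # All clients use the same seed, hence the same projection matrix.
    rng = np.random.default_rng(seed)
    return rng.choice([-1.0, 1.0], size=(sketch_len, dim))

def client_encode(local_counts, proj):
    # Linear sketch of the local counts; linearity is what makes the
    # encodings summable under secure aggregation.
    return proj @ local_counts

def estimate_second_moment(aggregated_sketch):
    # AMS-style estimate: for +/-1 projections, the mean of squared sketch
    # entries is an unbiased estimate of ||sum of client count vectors||^2.
    return float(np.mean(aggregated_sketch ** 2))
```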
Reject
This work proposes a federated version of the classical $\chi^2$ correlation test. The key new step is the use of stable projections to reduce the computational overheads associated with secure multi-party protocols. Overall, while the contribution is of interest, the novelty is rather limited. I also consider the work to be somewhat out of scope for ICLR; it would be more suitable for a security- or statistics-focused venue. Therefore I do not recommend acceptance.
train
[ "c2hduBGwHM9", "0ELRImWVJuT", "d4JtW4RUwAX", "dPWAvhOubc", "z6Zz6HSCbKj", "QswwrIx931S", "5SndMgYnL_Z", "Y4PaMWa2T6n", "L7XZHNCL5o", "Kk78YEQlN57", "sS_x1dHq73J", "xB8Fjcxri7Y", "2rodIyrtnyT", "hBhQWivbknu", "sREGrAMEZDs" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer,\n\nAs the discussion period is ending soon, we would like to know whether the added experiments have addressed your questions. If they do, we would like to politely ask for an increase in score. Please let us know if you have any further concerns or suggestions, and we will gladly respond and exten...
[ -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 6 ]
[ -1, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 2, 2 ]
[ "L7XZHNCL5o", "iclr_2022_R9Ht8RZK3qY", "0ELRImWVJuT", "0ELRImWVJuT", "sREGrAMEZDs", "Y4PaMWa2T6n", "L7XZHNCL5o", "2rodIyrtnyT", "sS_x1dHq73J", "sREGrAMEZDs", "hBhQWivbknu", "2rodIyrtnyT", "iclr_2022_R9Ht8RZK3qY", "iclr_2022_R9Ht8RZK3qY", "iclr_2022_R9Ht8RZK3qY" ]
iclr_2022_P0EholD6_G
On Hard Episodes in Meta-Learning
Existing meta-learners primarily focus on improving the average task accuracy across multiple episodes. Different episodes, however, may vary in hardness and quality leading to a wide gap in the meta-learner's performance across episodes. Understanding this issue is particularly critical in industrial few-shot settings, where there is limited control over test episodes as they are typically uploaded by end-users. In this paper, we empirically analyse the behaviour of meta-learners on episodes of varying hardness across three standard benchmark datasets: CIFAR-FS, mini-ImageNet, and tiered-ImageNet. Surprisingly, we observe a wide gap in accuracy of around $50\%$ between the hardest and easiest episodes across all the standard benchmarks and meta-learners. We additionally investigate various properties of hard episodes and highlight their connection to catastrophic forgetting during meta-training. To address the issue of sub-par performance on hard episodes, we investigate and benchmark different meta-training strategies based on adversarial training and curriculum learning. We find that adversarial training strategies are much more powerful than curriculum learning in improving the prediction performance on hard episodes.
Reject
This paper explores the contrast in performance between easy and hard tasks (episodes) in few-shot image classification and proposes mitigation strategies to avoid large performance gaps. None of the reviewers support the acceptance of this work, despite the authors' detailed rebuttals, with all reviewers confirming their preference for rejection following the author response. Issues raised included lack of clarity in the writing and insufficiently convincing experimental results. I unfortunately could not find a good reason to dissent from the reviewers' majority opinion, and therefore also recommend rejection at this time.
train
[ "LblpkFXWPbC", "iuIBfHPIvUj", "SohJHQyOujW", "fJ7tFwXMAa0", "ZIe9Lhth2sc", "zTvmAPraWMR", "0PDmtloRDpp", "2FaE-phhwY6", "dI5uJEIy7B", "WiwB5jOIu0L", "9dW6n8nR2aO", "kL4jEYudC-X", "hn-Hr7j5nBE", "Rt4tYDn6fSH", "kDY4LSJM0QS" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " I thank the reviewers for addressing some of my concerns and appreciate the efforts they have put in the short timeframe. However, none of my concerns are clearly addressed at this point. I agree that it may require additional investigation but without it, I do not have enough evidence at this point to increase m...
[ -1, 3, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3 ]
[ -1, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "kL4jEYudC-X", "iclr_2022_P0EholD6_G", "iclr_2022_P0EholD6_G", "WiwB5jOIu0L", "dI5uJEIy7B", "0PDmtloRDpp", "2FaE-phhwY6", "iuIBfHPIvUj", "kDY4LSJM0QS", "9dW6n8nR2aO", "SohJHQyOujW", "hn-Hr7j5nBE", "Rt4tYDn6fSH", "iclr_2022_P0EholD6_G", "iclr_2022_P0EholD6_G" ]
iclr_2022_demdsohU_e
Neural Capacitance: A New Perspective of Neural Network Selection via Edge Dynamics
Efficient model selection, i.e., identifying a suitable pre-trained neural network for a downstream task, is a fundamental yet challenging problem in deep learning. Current practice requires expensive model training for performance prediction. In this paper, we propose a novel framework for neural network selection that analyzes the governing dynamics over synaptic connections (edges) during training. Our framework is built on the fact that back-propagation during neural network training is equivalent to the dynamical evolution of synaptic connections. Therefore, a converged neural network is associated with an equilibrium state of a networked system composed of those edges. To this end, we construct a network mapping $\phi$, converting a neural network $G_A$ to a directed line graph $G_B$ defined on the edges of $G_A$. Next, we derive a \textit{neural capacitance} metric $\beta_{\rm eff}$ as a predictive measure that universally captures the generalization capability of $G_A$ on the downstream task using only a handful of early training results. We carried out extensive experiments using 17 popular pre-trained ImageNet models and five benchmark datasets, including CIFAR10, CIFAR100, SVHN, Fashion MNIST and Birds, to evaluate the fine-tuning performance of our framework. Our neural capacitance metric is shown to be a powerful indicator for model selection based only on early training results, and is more efficient than state-of-the-art methods.
Reject
The paper proposes a method for inferring which of a set of pretrained neural networks, once fine-tuned on a transfer task, will generalize the best. This is accomplished by deriving a quantity, known as the "neural capacitance", based on a mean-field approximation of a dynamical system defined on the adjacency matrix of the weights of a neural network. The model selection procedure involves attaching a fixed, randomly initialized network onto the outputs of the pretrained network, fine-tuning for a small number of iterations, and computing the metric; the fixed network is called the "neural capacitance probe" (NCP). Reviews, though low-confidence, awarded borderline scores, and a central concern was clarity and motivation, in particular the role of the NCP. acZh, the highest-confidence and most detailed reviewer, echoed these concerns along with specific criticisms, for example about the heavy reliance on Gao et al. (2016) without elaboration. The authors have responded in considerable depth, but unfortunately the reviewer has not acknowledged these responses. On the NCP, the authors note that it is an approximation to the ideal metric that they have empirically validated. Reading the updated draft, I find myself still concurring with reviewer acZh to a large degree. The draft has improved with the noted additions, such as Appendix G devoted to an explanation of Gao et al. (2016), but the presentation is still quite challenging to follow. I am left with fundamental questions about the soundness of the approximation being made, its wider applicability, and the many arbitrary decisions regarding the architecture of the NCP that appear out of nowhere. How sensitive is the procedure to these choices? Did the authors tune these architectural hyperparameters? Using what data? The table of results does not include units, and for a paper proposing a general-purpose metric I'd ideally want to see a robust rationale for the selection of method-specific hyperparameters as well as a rigorous statistical treatment of the method's performance. Since it involves an approximation, a comparison to the "ideal" or "exact" procedure on a toy problem where the latter is feasible would strengthen the paper considerably. I do appreciate the breadth of architectures and datasets examined, but I believe the central focus of the paper should be explaining the mathematical motivation (perhaps at a higher level, deferring more detail to the appendix), why precisely it makes sense in the context of neural networks (also raised by acZh, with an answer provided that I believe partially addresses this), and justifying the concrete, approximate instantiation of the method involving the NCP along with the hyperparameter selection and evaluation protocol that led to the particular NCP employed. At a higher level, this is a very mathematically dense paper that relies considerably on concepts outside typical expertise in the ICLR community, reflected in the confidence scores of the reviewers. While I feel that the issues described above already preclude acceptance at this time, I believe it may be difficult to do the proposed method justice in the short conference-paper format, and would suggest that the authors consider a journal submission instead, where a didactic presentation can be given the full attention it deserves without the difficulty created by length constraints. Finally, I'd like to apologize to the authors for the non-responsiveness of the Area Chair. The original Area Chair was not able to complete their duty and I have been belatedly assigned this paper to evaluate; it is clear that not as much discussion took place as would have been ideal.
train
[ "rJ1OLE1V9mP", "XimJ8mOw4S", "150rD5ZJe0", "h8cgQeJkQdO", "D0VOOYC3kFK", "ycLpb0Uw_m", "2O7IqRZP_yF", "TXAGE6JkRZa", "16ncli_kixl", "8dCeAB-80ow", "BWTfIXAtbzq", "Y8ITgIKVEe0", "_ILC2wE4c6", "Mxb6rrcjy5l" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Area Chair and Reviewers,\n\nAs the discussion deadline is closing soon, we would like to follow up to ensure we have successfully conveyed the merits and main contributions of our work. We took the silence of the post-rebuttal discussion as a positive sign indicating our revised version and responses had ad...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 2 ]
[ "XimJ8mOw4S", "iclr_2022_demdsohU_e", "h8cgQeJkQdO", "D0VOOYC3kFK", "8dCeAB-80ow", "2O7IqRZP_yF", "TXAGE6JkRZa", "iclr_2022_demdsohU_e", "Mxb6rrcjy5l", "_ILC2wE4c6", "Y8ITgIKVEe0", "iclr_2022_demdsohU_e", "iclr_2022_demdsohU_e", "iclr_2022_demdsohU_e" ]
iclr_2022_TKrlyiqKWB
Prototype Based Classification from Hierarchy to Fairness
Artificial neural nets can represent and classify many types of high-dimensional data but are often tailored to particular applications -- e.g., for ``fair'' or ``hierarchical'' classification. Once an architecture has been selected, it is often difficult for humans to adjust models for a new task; for example, a hierarchical classifier cannot be easily transformed into a fair classifier that shields a protected field. Our contribution in this work is a new neural network architecture, the concept subspace network (CSN), which generalizes existing specialized classifiers to produce a unified model capable of learning a spectrum of multi-concept relationships. We demonstrate that CSNs reproduce state-of-the-art results in fair classification when enforcing concept independence, may be transformed into hierarchical classifiers, or may even reconcile fairness and hierarchy within a single classifier. The CSN is inspired by and matches the performance of existing prototype-based classifiers that promote interpretability.
Reject
This paper extends prototypical classification networks to handle class hierarchies and fairness. A new neural architecture is proposed and experimental results in support of it are presented. Unfortunately, the reviewers found that the paper in its current form is not sufficiently strong to be accepted at ICLR. The authors have made a significant attempt to clarify and improve the paper in their response. However, the reviewers believe that the contributions and motivation can be clarified further. We encourage the authors to improve their work according to the specific suggestions made by the reviewers and resubmit.
test
[ "capRVEA7OY", "pZ6BSPeTtHC", "FKmhom76cF", "AB5WHNkNE0-", "yTnZdc2vk3", "YsFv8Mj1j-R", "jtYjIo2KP8D", "Da9gmiQI1J8", "PSW6JOIhJJ7", "0TzXQw227Fv", "7FV9U6_l0YI", "jgTHq9RmVYb" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Personally, I believe it would make the paper even stronger, since hierarchical classification can be seen as a particular case of hierarchical multi-label classification (HMC). \n\nFurther, these hierarchies can be quite deep (up to 13 levels), thus the authors could really show the power of their model here. \n...
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 4 ]
[ "pZ6BSPeTtHC", "FKmhom76cF", "jtYjIo2KP8D", "yTnZdc2vk3", "jgTHq9RmVYb", "7FV9U6_l0YI", "0TzXQw227Fv", "PSW6JOIhJJ7", "iclr_2022_TKrlyiqKWB", "iclr_2022_TKrlyiqKWB", "iclr_2022_TKrlyiqKWB", "iclr_2022_TKrlyiqKWB" ]
iclr_2022_tge0BZv1Ay
PDQN - A Deep Reinforcement Learning Method for Planning with Long Delays: Optimization of Manufacturing Dispatching
Scheduling is an important component in Semiconductor Manufacturing systems, where decisions must be made as to how to prioritize the use of finite machine resources to complete operations on parts in a timely manner. Traditionally, Operations Research methods have been used for simple, less complex systems. However, due to the complexity of this scheduling problem, simple dispatching rules such as Critical Ratio and First-In-First-Out are often used in practice in the industry for these more complex factories. This paper proposes a novel method based on Deep Reinforcement Learning for developing dynamic scheduling policies through interaction with simulated stochastic manufacturing systems. We experiment with simulated systems based on a complex Western Digital semiconductor plant. Our method builds upon DeepMind’s Deep Q-network and predictron methods to create a novel algorithm, the Predictron Deep Q-network, which utilizes a predictron model as a trained planning model to create training targets for a Deep Q-network-based policy. In recent years, Deep Reinforcement Learning methods have shown state-of-the-art performance on sequential decision-making processes in complex games such as Go. Semiconductor manufacturing systems, however, provide significant additional challenges due to complex dynamics, stochastic transitions, and long time horizons with the associated delayed rewards. In addition, dynamic decision policies need to account for uncertainties such as machine downtimes. Experimental results demonstrate that, in our simulated environments, the Predictron Deep Q-network outperforms the Deep Q-network, Critical Ratio, and First-In-First-Out dispatching policies on the task of minimizing lateness of parts.
Reject
This paper addresses the problem of scheduling machines in a semiconductor factory using an RL approach. As the different actions take different amounts of time to complete, the authors propose to use a predictron architecture to estimate the targets in DQN. The experimental results show that the proposed method outperforms the considered baselines on two scheduling problems. After reading the authors' feedback and discussing their concerns, the reviewers agree that this paper is still not ready for publication. In particular, the main issues concern the novelty and similarity with respect to related work, the lack of theoretical insights and formal definitions, the effectiveness of the presented benchmarking, and the lack of analysis of some unexpected results. I encourage the authors to take into consideration the concerns raised by the reviewers when they work on an updated version of their paper.
train
[ "6_pCM22DTLX", "xKg9QZkwgBK", "oIx2Gj8_gY5", "n-KIeOGm294", "ehUDk_Z5fWq", "ZxS6XeOmZ69", "_gTmmqFq_1X", "cTD9nEWeLJ2", "i-SEQD2ke1r", "F4K7Fu8pmfi", "SagfGAQxSMY" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors study a deep reinforcement learning approach to the problem of scheduling machines in a semiconductor factory. The authors' main contribution is a deep Q-learning based system that uses a predictron to estimate the value function (instead of max_a Q(s',a)). The intuition behind the domain is that there...
[ 5, -1, -1, -1, -1, -1, -1, -1, 5, 3, 3 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2022_tge0BZv1Ay", "iclr_2022_tge0BZv1Ay", "i-SEQD2ke1r", "i-SEQD2ke1r", "iclr_2022_tge0BZv1Ay", "SagfGAQxSMY", "6_pCM22DTLX", "F4K7Fu8pmfi", "iclr_2022_tge0BZv1Ay", "iclr_2022_tge0BZv1Ay", "iclr_2022_tge0BZv1Ay" ]
iclr_2022_TWTTKlwrUP0
Generating High-Fidelity Privacy-Conscious Synthetic Patient Data for Causal Effect Estimation with Multiple Treatments
A causal effect can be defined as the comparison of outcomes from two or more alternative treatments. Knowing this treatment effect is critically important in healthcare because it makes it possible to identify the best treatment for a person when more than one option exists. In the past decade, there has been exponentially growing interest in the use of observational data collected as a part of routine healthcare practice to determine the effect of a treatment with causal inference models. Validation of these models, however, has been a challenge because the ground truth is unknown: only one treatment-outcome pair for each person can be observed. There have been multiple efforts to fill this void using synthetic data where the ground truth can be generated. However, to date, these datasets have been severely limited in their utility either by being modeled after small non-representative patient populations, being dissimilar to real target populations, or only providing known effects for two cohorts (treated vs control). In this work, we produced a large-scale and realistic synthetic dataset that supports multiple hypertension treatments, by modeling it after a nationwide cohort of more than 250,000 hypertension patients' multi-year history of diagnoses, medications, and laboratory values. We designed a data generation process by combining an adapted ADS-GAN model for fictitious patient information generation and a neural network for treatment outcome generation. A Wasserstein distance of 0.35 demonstrates that our synthetic data follows a nearly identical joint distribution to the patient cohort used to generate the data. Our dataset provides ground truth effects for about 30 hypertension treatments on blood pressure outcomes. Patient privacy was a primary concern for this study; the $\epsilon$-identifiability metric, which estimates the probability of actual patients being identified, is 0.008%, ensuring that our synthetic data cannot be used to identify any actual patients. Using our dataset, we tested the bias in causal effect estimation of three well-established models: propensity score stratification, the doubly robust approach (DR) with logistic regression, and DR with random forest (RF) classification. Interestingly, we found that while the RF DR outperformed the logistic DR as expected, the best performance actually came from propensity score stratification, despite the theoretical strength of the statistical properties of the DR family of models. We believe this dataset will facilitate the additional development, evaluation, and comparison of real-world causal models. The approach we used can be readily extended to other types of diseases in the clinical domain, and to datasets in other domains as well.
Reject
In this paper, the authors propose a method for generating high-quality synthetic datasets, and use their methods to evaluate a variety of causal effect estimators. In general, the paper was not received very favorably by reviewers. The primary concerns were: (a) issues with built-in bias in the algorithm that generates synthetic data (due to collider stratification bias induced by conditioning on causally "downstream" variables), (b) issues with "replicating underlying counterfactuals," which indeed is a difficult problem, and (c) lack of "technical novelty." First, I am personally very sympathetic to what the authors are trying to do. Regardless of current reviewer reception, I think the causal inference community really needs more high-quality benchmarks, (semi)synthetic datasets, and validation approaches. I urge the authors to continue this line of work. That said, I think it is important (for causal benchmarks) to be clear about the distinction between the observed data distribution (e.g. p(C,A,Y) for the backdoor model), and the full data distribution (e.g. p(C, A, Y(0), Y(1)) for the backdoor model with a binary treatment). Generally, what makes a benchmark interesting is preserving some features of the _full_ data distribution, and allowing "knobs" that make the problem easier and harder. Much of what the ACIC competition organizers did was provide such knobs. Mimicking features of just the observed data distribution, even if they are complicated, isn't enough to make a causal benchmark interesting, since the problem is all about how full and observed data relate. When revising the paper, please keep this difference in mind, and consider what features of p(C, A, Y(0), Y(1)) (or more complex versions of this) make for an interesting benchmark, while also generating p(C,A,Y) that "mimics observed data" in some way.
train
[ "Zai-TSxvKhe", "bd5pZakEC6A", "FRHJ2X4_4fI", "I2aOO4kwBxh", "BFalhtOvk6S", "kLHP4aSLLS9" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We’d like to thank the reviewer for spending time on our paper and providing feedback. Here is our response to the concerns.\n\n### It is hard to guarantee that the generated treatment effects are close to the original treatment effects and counterfactuals. \n\nWe agree with the reviewer that there is no such gua...
[ -1, -1, -1, 3, 3, 5 ]
[ -1, -1, -1, 3, 5, 4 ]
[ "kLHP4aSLLS9", "BFalhtOvk6S", "I2aOO4kwBxh", "iclr_2022_TWTTKlwrUP0", "iclr_2022_TWTTKlwrUP0", "iclr_2022_TWTTKlwrUP0" ]
iclr_2022_cVak2hs06z
Correct-N-Contrast: a Contrastive Approach for Improving Robustness to Spurious Correlations
Spurious correlations pose a fundamental challenge for building robust machine learning models. For example, models trained with empirical risk minimization (ERM) may depend on correlations between class labels and spurious features to classify data, even if these relations only hold for certain data groups. This can result in poor performance on other groups that do not exhibit such relations. When group information is available during training, Sagawa et al. (2019) have shown how to improve worst-group performance by optimizing the worst-group loss (GDRO). However, when group information is unavailable, improving worst-group performance is more challenging. For this latter setting, we propose Correct-N-Contrast (CNC), a contrastive learning method to train models more robust to spurious correlations. Our motivating observation is that worst-group performance is related to a representation alignment loss, which measures the distance in feature space between different groups within each class. We prove that the gap between worst-group and average loss for each class is upper bounded by the alignment loss for that class. Thus, CNC aims to improve representation alignment via contrastive learning. First, CNC uses an ERM model to infer the group information. Second, with a careful sampling scheme, CNC trains a contrastive model to encourage similar representations for groups in the same class. We show that CNC significantly improves worst-group accuracy over existing state-of-the-art methods on popular benchmarks, e.g., achieving $7.7\%$ absolute lift in worst-group accuracy on the CelebA data set, and performs almost as well as GDRO trained with group labels. CNC also learns better-aligned representations between different groups in each class, reducing the alignment loss substantially compared to prior methods.
Reject
The paper worked on an important problem (robustness concerning spurious correlations) and proposed a useful method (achieving SOTA worst-group performance without the true group labels). However, the motivation is weak, so it is unclear why to go from the theoretical/empirical observations to the proposed method (more specifically, the contrastive learning part). The novelty is also not very strong, as argued by reviewers (the real novelty was not highlighted by the authors and thus cannot easily be appreciated by readers). It is indeed a borderline case but seems to be below the bar of acceptance, and the two reviewers staying on the positive side would not like to fight for it. Since there is still room for improvement, we hope the paper will benefit from a cycle of revisions before re-submission and that the improved version will be accepted in the near future. By the way, what GgTx suggested is not really an out-of-scope study, as far as I understood. The authors certainly think that the paper/method has been clearly motivated. This reviewer was asking for a strong motivation, namely: what is missing or wrong in existing methods, or in the SOTA method, such that we need to apply the proposed method? Without clarifying this point, the paper/method is partially but not fully motivated, and the method may look like just another alternative, though it should be a better one. Instead of simply showing better performance, the reviewer would like to see the conceptual advantage of the proposal by understanding what is missing or wrong in the current SOTA method. Therefore, I think this is a great question for the authors to maximize the impact of their work in the end.
train
[ "_i8LwK-3Uv8", "CVVRjsaUN4B", "frdI1yOYcLF", "rfOvGo4VyL9", "1CI80mc3aZV", "YWmCdOWOyt4", "zR1gll5GX0w", "D11_c3NKws0", "f_Jx_CiwIaJ", "b3MVYoTRIWz", "dnPPcCAap4P", "nBtyV0Yr7Ez", "O5zTGGhp6CT", "WmiE_-5taG", "Uc4M0qGVQtv", "HJrzxCt1wci", "p68VPbgmva", "KoMlTVKs4wh", "V6vCqBpTQ3p...
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", ...
[ " Thanks for your positive feedback and suggested experiments! (also apologies for this delayed comment). We were happy to add both the comparison to SupCon and the alignment loss, which we agree provides further insight into CNC's benefits over alternative approaches.", " Thanks for your response and detailed f...
[ -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "YWmCdOWOyt4", "rfOvGo4VyL9", "iclr_2022_cVak2hs06z", "f_Jx_CiwIaJ", "b3MVYoTRIWz", "2qnQ1Gr2dN", "D11_c3NKws0", "f_Jx_CiwIaJ", "dnPPcCAap4P", "O5zTGGhp6CT", "aGmAGX7z_Me", "iclr_2022_cVak2hs06z", "FyN3zOBoVq4", "iclr_2022_cVak2hs06z", "P3aLdJlgK14", "p68VPbgmva", "Y1u2IccxqwC", "V...
iclr_2022_B6YDcqpMk30
PRIMA: Planner-Reasoner Inside a Multi-task Reasoning Agent
In multi-task reasoning (MTR), an agent can solve multiple tasks via (first-order) logic reasoning. This capability is essential for human-like intelligence due to its strong generalizability and simplicity for handling multiple tasks. However, a major challenge in developing effective MTR is the intrinsic conflict between reasoning capability and efficiency. An MTR-capable agent must master a large set of "skills'' to perform diverse tasks, but executing a particular task at the inference stage requires only a small subset of immediately relevant skills. How can we maintain broad reasoning capability yet efficient specific-task performance? To address this problem, we propose a Planner-Reasoner framework capable of state-of-the-art MTR capability and high efficiency. The Reasoner models shareable (first-order) logic deduction rules, from which the Planner selects a subset to compose into efficient reasoning paths. The entire model is trained in an end-to-end manner using deep reinforcement learning, and experimental studies over various domains validate its effectiveness.
Reject
Reviews for this paper were mixed (6,6,6,5), with one review (Z6sN) being somewhat uninformative. During the rebuttal, some reviewers raised their scores to 6, but overall there was no strong excitement among the reviewers, AC, and SAC. From fresh readings (by the SAC and researchers with relevant expertise), this paper’s technical approach looks reasonable but feels quite incremental (the novelty is not high), and the experiments are conducted on a small-scale problem where a success rate of up to 100% is achievable by baseline methods. Therefore, the practical significance of this approach for real-world problems with complex and noisy environments is quite unclear. Overall, the paper looks below the ICLR acceptance threshold. For improvement, we suggest providing evidence/demonstration that this method can successfully tackle more challenging real-world problems.
train
[ "amgzGgmXl98", "t9KaSRojRj", "7Idu0ACGmGa", "8fky1MMOK1z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper presents PRIMA (“Planner-Reasoner Inside a Multi-Task Reasoning Agent”), a multi-task reasoning model that can be applied to several tasks that require first order logical reasoning. Notably, this paper seeks to address a problem that plagues multi-task learning more generally — the tradeoff between cro...
[ 6, 5, 6, 6 ]
[ 3, 3, 3, 3 ]
[ "iclr_2022_B6YDcqpMk30", "iclr_2022_B6YDcqpMk30", "iclr_2022_B6YDcqpMk30", "iclr_2022_B6YDcqpMk30" ]
iclr_2022_KjR-3lBYB3y
Learning an Object-Based Memory System
A robot operating in a household makes observations of multiple objects as it moves around over the course of days or weeks. The objects may be moved by inhabitants, but not completely at random. The robot may be called upon later to retrieve objects and will need a long-term object-based memory in order to know how to find them. In this paper, we combine some aspects of classic techniques for data-association filtering with modern attention-based neural networks to construct object-based memory systems that consume and produce high-dimensional observations and hypotheses. We perform end-to-end learning on labeled observation trajectories to learn both the internal transition and observation models. We demonstrate the system's effectiveness on a sequence of problem classes of increasing difficulty and show that it outperforms clustering-based methods, classic filters, and unstructured neural approaches.
Reject
This paper proposed a long-term object-based memory system for robots. The proposed method builds on existing ideas of data association filters and neural-net attention mechanisms to learn transition and observation models of objects from labelled trajectories. The proposed method was compared with baseline algorithms in a set of experiments. The initial reviews raised multiple concerns about the paper. Reviewers nrGQ and V7qP commented on the conceptual gap between the problem proposed in the introduction and the extent of the experiments. Reviewer qPet understood the paper to be a form of object re-identification and was concerned about the limited comparisons with related work. The author response clarified their goal of estimating the states of the objects in the world, which they state is different from the goals of long-term tracking and object re-identification mentioned by the reviewers. The authors also clarified the relationship to other work in slot attention and data association filters. The ensuing discussion among the reviewers indicated that the paper's contribution remained unclear even after the author response. Two reviewers noted the paper did not clearly communicate the problem being solved (each reviewer had a different view of the problem in the paper). These reviewers wanted a better motivation for the problem being addressed in this paper. The third reviewer remained unconvinced that the problem in the paper was different from long-term object tracking. All three knowledgeable reviewers indicated rejection, as the contributions of the paper were unclear to them. The paper is therefore rejected.
train
[ "vHEz5iJv5bV", "ywwAeTwPipl", "-2Vwe1nCkz6", "YmoWh-dt6vW", "O2eHuzzJ1q", "fot_Rmla4wU", "r4KIWdwvKZx", "QsUUE40_mrN", "0HYizEfXH1O", "DV9PiDttwTS", "wDXGeqyaAbO", "2abXt1vH_OL", "3E9DdyFP1jL", "yi_Ztw6qkhW", "LyrpwgQLLcB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper defines an entity-monitoring problem where the goal is to identify the distinct objects see in an episode where the agent/model moves through the scene/observes partial state. The paper proposes the OBM-Net model architecture to address this problem and identify all the distinct objects observed over eac...
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2022_KjR-3lBYB3y", "YmoWh-dt6vW", "wDXGeqyaAbO", "O2eHuzzJ1q", "DV9PiDttwTS", "vHEz5iJv5bV", "yi_Ztw6qkhW", "LyrpwgQLLcB", "iclr_2022_KjR-3lBYB3y", "LyrpwgQLLcB", "yi_Ztw6qkhW", "vHEz5iJv5bV", "iclr_2022_KjR-3lBYB3y", "iclr_2022_KjR-3lBYB3y", "iclr_2022_KjR-3lBYB3y" ]
iclr_2022_3AkuJOgL_X
Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning
Federated learning (FL) emerges as a popular distributed learning schema that learns a model from a set of participating users without requiring raw data to be shared. One major challenge of FL comes from heterogeneity among users, who may have distributionally different (or \emph{non-iid}) data and varying computation resources. Just like in centralized learning, FL users also desire model robustness against malicious attackers at test time. Whereas adversarial training (AT) provides a sound solution for centralized learning, extending its usage to FL users has imposed significant challenges, as many users have very limited training data as well as tight computational budgets and thus cannot afford the data-hungry and costly AT. In this paper, we study a novel learning setting that propagates adversarial robustness from high-resource users that can afford AT to low-resource users that cannot, during the FL process. We show that existing FL techniques cannot effectively propagate adversarial robustness among \emph{non-iid} users, and propose a simple yet effective propagation approach that transfers robustness through carefully designed batch-normalization statistics. We demonstrate the rationality and effectiveness of our method through extensive experiments. In particular, the proposed method is shown to grant FL remarkable robustness even when only a small portion of users can afford AT during learning.
Reject
This paper considered the computational budgets of adversarial training in the context of Federated Learning and studied the propagation of adversarial robustness from parties that can afford AT to low-resource parties. Although the authors conducted extensive experiments to show the effectiveness of FedRBN, there are still important concerns from the reviewers: (1) The novelty is marginal compared to FedBN, DBN, and previous insights, as the work moves a similar framework to adversarial robustness with changed rules, especially given how competitive ICLR is; more theoretical novelty would be preferred. (2) Many technical details are not well explained and some parts need to be improved, which leaves the reviewers not well convinced about FedRBN. Given the above points, I recommend rejection and encourage the authors to improve the paper in the future.
train
[ "GsybNcZ5Mv6", "OqSFl2swazN", "0aWjpEkSOTN", "SBQ0sUFcCF", "i7QA3mLD417", "EI3btPyWQ6", "WoVMcYh9RTj", "oUSmEIn5huH", "XZBCTVa2UT", "CJGaB0UbiBY", "cfe9j5aS2b", "xPLD5oWiUwk", "RWUwtmZnY6p", "8kYY8GuFcwV", "Ju3A4zF9QZ2", "lYZJ7mwpJS5", "rzoSOMRqyw4", "1hvvaEsCy4-", "8jj26aP2tyU",...
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Adversarial training (AT) is good for defending against attacks but is costly for low-resource FL users. The authors want to find a good way to propagate adversarial robustness from high-resource users to low-resource ones. They proposes federated robust batch-normalization (FedRBN) which enables all users to enjo...
[ 8, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 8, 3 ]
[ 4, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2022_3AkuJOgL_X", "GsybNcZ5Mv6", "iclr_2022_3AkuJOgL_X", "p8VgHQblHN", "1hvvaEsCy4-", "iclr_2022_3AkuJOgL_X", "XZBCTVa2UT", "iclr_2022_3AkuJOgL_X", "8kYY8GuFcwV", "1hvvaEsCy4-", "oUSmEIn5huH", "p8VgHQblHN", "oUSmEIn5huH", "oUSmEIn5huH", "8jj26aP2tyU", "GsybNcZ5Mv6", "1hvvaEsCy4...
iclr_2022_RRj7DcsPjT
Revisiting Layer-wise Sampling in Fast Training for Graph Convolutional Networks
To accelerate the training of graph convolutional networks (GCN), many sampling-based methods have been developed for approximating the embedding aggregation. Among them, a layer-wise approach recursively performs importance sampling to select neighbors jointly for existing nodes in each layer. This paper revisits the approach from a matrix approximation perspective. We identify two issues in the existing layer-wise sampling methods: sub-optimal sampling probabilities and the approximation bias induced by sampling without replacement. We thus propose remedies to address these issues. The improvements are demonstrated by extensive analyses and experiments on common benchmarks.
Reject
The reviewers think the proposed method is well motivated and interesting. However, the novelty needs to be improved. At the moment, the paper seems to be a minor improvement over existing works.
train
[ "TIBlQM9B2s2", "KB0JQSGYbRy", "QvI-sOZCugr", "eX4URDdlm7g", "X9zstr7RlLn", "wTCGnGjMY-", "569hpMjci8C", "0bwEBaHLv3K", "zlWv9I_dWjH", "gG47PkpwLLU", "Q8s3p80WP0y" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " ### 1: The matrix approximation error of FastGCN and LADIES on ogbn-proteins\n\nThe discussion is spared since those two methods are existing methods, not the method of interest. To address your concern, we would like to briefly discuss the phenomenon here and add the discussion to our paper’s next version. \n\nI...
[ -1, -1, -1, -1, -1, -1, -1, 5, 5, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 3, 2, 5 ]
[ "KB0JQSGYbRy", "Q8s3p80WP0y", "gG47PkpwLLU", "zlWv9I_dWjH", "0bwEBaHLv3K", "569hpMjci8C", "iclr_2022_RRj7DcsPjT", "iclr_2022_RRj7DcsPjT", "iclr_2022_RRj7DcsPjT", "iclr_2022_RRj7DcsPjT", "iclr_2022_RRj7DcsPjT" ]
iclr_2022_ZaI7Rd11G4S
Embedding Compression with Hashing for Efficient Representation Learning in Graph
Graph neural networks (GNNs) are deep learning models designed specifically for graph data, and they typically rely on node features as the input node representation to the first layer. When applying this type of network to a graph without node features, one can extract simple graph-based node features (e.g., the node degree) or learn the input node representation (i.e., embeddings) when training the network. While the latter approach, which trains node embeddings, is more likely to lead to better performance, the number of parameters associated with the embeddings grows linearly with the number of nodes. It is therefore impractical to train the input node embeddings together with GNNs within graphics processing unit (GPU) memory in an end-to-end fashion when dealing with industrial-scale graph data. Inspired by the embedding compression methods developed for natural language processing (NLP) models, we develop a node embedding compression method where each node is compactly represented with a bit vector instead of a floating-point vector. The parameters utilized in the compression method can be trained together with GNNs. We show that the proposed node embedding compression method achieves superior performance compared to the alternatives.
Reject
This paper studies the embedding compression problem related to GNNs and graph representation. A two-stage method is proposed to generate the compressed embeddings: first, it encodes each node into its composite code with hashing; second, it uses an MLP module to decode the embedding for the node. Experiments are performed to evaluate the compression effect both with pretrained graph embeddings and on node classification tasks with GraphSage. The paper considers hashing/compressing the rows/columns of adjacency matrices and using the compressed rows of the adjacency matrices as node features. The adjacency matrices are intrinsically redundant. Therefore, it is unclear whether the achieved compression rate is significant, especially when applied to settings with known node features. Some reviewers pointed out that existing learning-to-hash methods, which train a GNN-based encoder to compress the hidden representation/embedding, are relevant. Although the authors claim that in their scenario the goal is to efficiently compress the input feature/embedding without any embedding/encoder pre-training step, it is unclear how the proposed method compares with the learning-to-hash methods when considering the adjacency matrices as the auxiliary information. The dependence on the number of nodes is also a concern in terms of scalability, as the number of nodes is known to be the scalability bottleneck in GNNs. The authors use the adjacency matrix of the input graph as the auxiliary information in the paper, which only considers local structure information. The reviewers are curious whether this approach would work for tasks in which global graph structure information is required. On a minor note, the reviewers also think that the paper would be stronger if the authors provided more principled guidance on how to select the code cardinality c and the code length m.
train
[ "iw7AhrdtkZj", "JM1bzorSE2f", "r6N4ArTurC0", "6nlOpSjcua", "HGc-0y7XXs1", "spPnomVggmv", "Qp2DqZi27yB", "AFGrsIo3Ac7", "4Xg3L93YBHf", "cpdPjBcTC1U" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for their additional clarifications", " Thanks for the suggestion. Please see our response below:\n\n### Response to Weakness and Suggestion\n> 1. The main technical novelty seems the direct hashing of the adjacency matrix, which is graph specific. When pre-trained embeddings are given, the ...
[ -1, -1, -1, -1, -1, -1, 3, 6, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 3, 4 ]
[ "6nlOpSjcua", "cpdPjBcTC1U", "4Xg3L93YBHf", "4Xg3L93YBHf", "AFGrsIo3Ac7", "Qp2DqZi27yB", "iclr_2022_ZaI7Rd11G4S", "iclr_2022_ZaI7Rd11G4S", "iclr_2022_ZaI7Rd11G4S", "iclr_2022_ZaI7Rd11G4S" ]
iclr_2022_tJCwZBHm-jW
Image2Point: 3D Point-Cloud Understanding with 2D Image Pretrained Models
3D point-clouds and 2D images are different visual representations of the physical world. While human vision can understand both representations, computer vision models designed for 2D image and 3D point-cloud understanding are quite different. Our paper explores the potential for transferring between these two representations by empirically investigating the feasibility of the transfer, the benefits of the transfer, and shedding light on why the transfer works. We discovered that we can indeed use the same architecture and pretrained weights of a neural net model to understand both images and point-clouds. Specifically, we can transfer the pretrained image model to a point-cloud model by \textit{inflating} 2D convolutional filters to 3D and then \textbf{f}inetuning the \textbf{i}mage-\textbf{p}retrained models (FIP). We discover that, surprisingly, models with minimal finetuning efforts --- only on input, output, and optionally batch normalization layers, can achieve competitive performance on 3D point-cloud classification, beating a wide range of point-cloud models that adopt task-specific architectures and use a variety of tricks. When finetuning the whole model, the performance further improves significantly. Meanwhile, we also find that FIP improves data efficiency, achieving up to 10.0 points top-1 accuracy gain on few-shot classification. It also speeds up the training of point-cloud models by up to 11.1x to reach a target accuracy.
Reject
This paper proposes to transfer an image-pretrained model to a point cloud model by inflating 2D convolutional filters to 3D convolutional filters and finetuning the inflated image-pretrained model, so that 3D point cloud tasks can benefit from 2D image pretraining. Extensive experiments are conducted to validate the effectiveness of the proposed method. Even though the performance gain from 2D pre-training is notable, the novelty of the paper is limited, since inflating 2D models to 3D has been studied for video action recognition, and a theoretical understanding of the proposed model is lacking. During the rebuttal period, the authors addressed most of the reviewers’ concerns by conducting additional experiments. Even though the performance is compelling, all reviewers agree that the novelty of the paper is limited and the discussion on why this method works is not convincing. Meanwhile, one reviewer points out that some claims made by the authors are not well supported. Besides, one reviewer points out that the paper might have a broader impact at a computer vision conference but provides only a limited contribution to the ICLR community. After an internal discussion with the reviewers, the AC agrees with the reviewers on their judgments and recommends rejecting the paper because of its limited novelty.
train
[ "qSxlT6S-p13", "uglJT9PfIGh", "sEMRjWm8_K", "ZvrD5C3_Nn", "1zCA49f-mw7", "z1SBMPf8Gw_", "H5DHmJb_F9c", "432OYtI4w6q", "JUcguLmu0f", "z8XTKdCUrsC", "NuId-k4ajky", "zaI6sgGyO8z", "DzGJGbMzfya" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "This paper proposed a pipeline for transferring convolutional network weights that are pre-trained on 2D images to 3D convolution networks. The proposed approach is to inflate the 2D kernels to 3D kernels similar to video models. The experiments are conducted on 2D image datasets such as ImageNet, and 3D datasets ...
[ 6, 6, -1, -1, 6, 6, 6, -1, -1, -1, -1, -1, -1 ]
[ 5, 3, -1, -1, 5, 4, 3, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2022_tJCwZBHm-jW", "iclr_2022_tJCwZBHm-jW", "JUcguLmu0f", "1zCA49f-mw7", "iclr_2022_tJCwZBHm-jW", "iclr_2022_tJCwZBHm-jW", "iclr_2022_tJCwZBHm-jW", "1zCA49f-mw7", "uglJT9PfIGh", "z1SBMPf8Gw_", "qSxlT6S-p13", "H5DHmJb_F9c", "H5DHmJb_F9c" ]