Dataset schema (column name: type, value range):

paper_id: string (length 19–21)
paper_title: string (length 8–170)
paper_abstract: string (length 8–5.01k)
paper_acceptance: string (18 classes)
meta_review: string (length 29–10k)
label: string (3 classes)
review_ids: list
review_writers: list
review_contents: list
review_ratings: list
review_confidences: list
review_reply_tos: list
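The schema above can be sketched as a plain Python record. The snippet below is a minimal, hypothetical example (field values are drawn from the first record in this dump); it assumes the convention, visible in the data, that a rating or confidence of -1 marks a thread entry (e.g. an author reply) that carries no score.

```python
# Minimal sketch of one record in this dataset, following the schema above.
# A rating/confidence of -1 marks entries (e.g. author replies) without a
# score; these are skipped when aggregating.

record = {
    "paper_id": "iclr_2022_Ltkwl64I91",
    "paper_acceptance": "Reject",
    "label": "train",
    "review_ratings": [-1, 5, 6, 3, 5],
    "review_confidences": [-1, 4, 5, 4, 4],
}

def mean_rating(ratings):
    """Average the scored reviews, ignoring -1 placeholders."""
    scored = [r for r in ratings if r != -1]
    return sum(scored) / len(scored) if scored else None

print(mean_rating(record["review_ratings"]))  # 4.75
```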
iclr_2022_Ltkwl64I91
Invariance-Guided Feature Evolution for Few-Shot Learning
Few-shot learning (FSL) aims to characterize the inherent visual relationship between support and query samples which can be well generalized to unseen classes so that we can accurately infer the labels of query samples from very few support samples. We observe that, in a successfully learned FSL model, this visual relationship and the learned features of the query samples should remain largely invariant across different configurations of the support set. Driven by this observation, we propose to construct a feature evolution network with an ensemble of few-shot learners evolving along different configuration dimensions. We choose to study two major parameters that control the support set configuration: the number of labeled samples per class (called shots) and the percentage of training samples (called partition) in the support set. Based on this network, we characterize and track the evolution behavior of learned query features across different shots-partition configurations, which will be minimized by a set of invariance loss functions during the training stage. Our extensive experimental results demonstrate that the proposed invariance-guided feature evolution (IGFE) method significantly improves the performance and generalization capability of few-shot learning and outperforms the state-of-the-art methods by large margins, especially in cross-domain classification tasks that test generalization capability. For example, in the cross-domain test on the fine-grained CUB image classification task, our method improves the classification accuracy by more than 5%.
Reject
Three out of four of the reviewers are leaning (weakly or strongly) towards rejecting this paper. Unfortunately, the authors only responded to the concerns of the most positive reviewer, making it difficult to disregard the concerns from the three more negative reviewers. I also took a look at the paper myself, and share a number of the reviewers' concerns. First, the proposed method appears to be performing transductive inference for its predictions, while many baselines it compares with rely on inductive inference. Transductive inference is generally known to outperform inductive inference, therefore some of the improvements in accuracy can potentially be attributed to that. The authors did mention in their one author response that they generated results in the inductive setting and still saw an improvement; however, the submission was not updated with details around that new experiment, making it hard to rely on it. Second, the paper is using a 224x224 resolution for images, while the original mini-ImageNet benchmark (and the majority of baselines evaluated on it) assume an 84x84 resolution. Here too, using the former resolution is known to outperform the latter. Third, I too found the paper to lack clarity at a number of places in the writing. I also notice that the final prediction is made by averaging features from two models (A and B, as in Eq. 5). This is a form of model ensembling, a principle generally known to help improve generalization. It seems appropriate to wonder whether the baselines are worse partly due to not relying on any ensembling at all. Finally, I've found a recent method from IJCAI 2021 (Cross-Domain Few-Shot Classification via Adversarial Task Augmentation) which appears to beat the proposed method in the cross-domain setting for the majority of domains. Given the above, and the lack of rebuttals to the reviewers with the most concerns, I'm afraid I must recommend this paper be rejected at this time.
train
[ "rlTz3Zj9CJx", "6aC73Q9Klx", "x1bui7hkK4d", "n-5P49WdPSs", "d3DNZNgdq72" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We sincerely thank you for your positive review of our paper and your valuable comments for us to revise and improve the paper!\nWe will carefully follow your comments to improve our final paper. Here are our detailed responses to your comments:\n\n$\\textbf{1. For your positive comments.}$ Thank you for your enc...
[ -1, 5, 6, 3, 5 ]
[ -1, 4, 5, 4, 4 ]
[ "x1bui7hkK4d", "iclr_2022_Ltkwl64I91", "iclr_2022_Ltkwl64I91", "iclr_2022_Ltkwl64I91", "iclr_2022_Ltkwl64I91" ]
iclr_2022_JeSIUeUSUuR
Variability of Neural Networks and Han-Layer: A Variability-Inspired Model
What makes an artificial neural network easier to train or to generalize better than its peers? We introduce a notion of variability to view such issues under the setting of a fixed number of parameters which is, in general, a dominant cost-factor. Experiments verify that variability correlates positively to the number of activations and negatively to a phenomenon called Collapse to Constants, which is related but not identical to vanishing gradient. Further experiments on stylized problems show that variability is indeed a key performance indicator for fully-connected neural networks. Guided by variability considerations, we propose a new architecture called Householder-absolute neural layers, or Han-layers for short, to build high variability networks with a guaranteed immunity to gradient vanishing or exploding. On small stylized models, Han-layer networks exhibit a far superior generalization ability over fully-connected networks. Extensive empirical results demonstrate that, by judiciously replacing fully-connected layers in large-scale networks such as MLP-Mixers, Han-layers can greatly reduce the number of model parameters while maintaining or improving generalization performance. We will also briefly discuss current limitations of the proposed Han-layer architecture.
Reject
The reviewers generally agreed that the ideas presented in the paper are interesting and novel. However, all reviewers also agreed that the paper is quite preliminary in its current form: the particular approach, while sensible, appears to be somewhat heuristic, and the evaluations are not as complete as necessary to fully evaluate the proposed approach. Generally, my sense is that there is something quite interesting in this work, but the present paper is too preliminary for publication. I would encourage the authors to take the reviewer comments into account and improve the work into a more complete submission for a future venue.
train
[ "9nxs6zSUC9", "VXow_V-BU1i", "-Kw0V0UUn55", "E0ob9AsHBEg", "empXH8V9CZ", "-d49cD4LKAD", "5Odo-dhQKie", "EmTCeZp1GbZ", "xqrKSHdH2c5", "y5A_YqPvazs", "yvXyouzFUiJ", "niLgkCFKQMF", "N8jdmxuusp0", "e3jQGIjaHau" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for the reply.\n\nI recognize that the ideas in this paper are interesting and could be inspiring, but there are necessary improvements which should not be left as future work / to other people. Thus, I am keeping my original ratings.", " Thanks for your useful suggestions of our manuscript....
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 5, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 4 ]
[ "xqrKSHdH2c5", "-Kw0V0UUn55", "EmTCeZp1GbZ", "empXH8V9CZ", "5Odo-dhQKie", "niLgkCFKQMF", "yvXyouzFUiJ", "e3jQGIjaHau", "N8jdmxuusp0", "iclr_2022_JeSIUeUSUuR", "iclr_2022_JeSIUeUSUuR", "iclr_2022_JeSIUeUSUuR", "iclr_2022_JeSIUeUSUuR", "iclr_2022_JeSIUeUSUuR" ]
iclr_2022_XyVXPuuO_P
Meta-Learning an Inference Algorithm for Probabilistic Programs
We present a meta-algorithm for learning a posterior-inference algorithm for restricted probabilistic programs. Our meta-algorithm takes a training set of probabilistic programs that describe models with observations, and attempts to learn an efficient method for inferring the posterior of a similar program. A key feature of our approach is the use of what we call a white-box inference algorithm that extracts information directly from model descriptions themselves, given as programs. Concretely, our white-box inference algorithm is equipped with multiple neural networks, one for each type of atomic command, and computes an approximate posterior of a given probabilistic program by analysing individual atomic commands in the program using these networks. The parameters of these networks are then learnt from a training set by our meta-algorithm. We empirically demonstrate that the learnt inference algorithm generalises well to programs that are new in terms of both parameters and structures, and report cases where our approach has advantages over alternative approaches such as HMC in terms of test-time efficiency. The overall results show the promise as well as remaining challenges of our approach.
Reject
The paper presents a meta-algorithm for learning a posterior-inference algorithm for restricted probabilistic programs. While the reviews agree that this is a very interesting research direction, they also reveal that there are several questions still open. One reviewer points out that learning to infer should take both the time for learning+inference and the generalization to other programs into account, i.e., what happens if the program is too different from the training set? Does the benefit then vanish? Moreover, as pointed out by another review, recursion as well as while loops are not yet supported. Also, the relation to IC needs some further clarification. These issues show that the paper is not yet ready for publication at ICLR. However, we would like to encourage the authors to improve the work and submit it to one of the next AI venues.
train
[ "gKllDx4vtr3", "edRVeQj0Bmx", "pd6XappVy10", "iRQfy1pv23l", "RAXLc8GZzy", "9Wiab7HRVGc", "T-HuJ0gO239", "OmUIjdQuT-r", "MXt1h2BBnf6" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The author response, \"The main difference is that IC typically assumes a single model, while INFER assumes a class of models, not a single one, where models in the class may have different structures.\" is not convincing and doesn't address my concern. IC applies to general PPLs in which a single model can handl...
[ -1, -1, -1, -1, -1, 6, 5, 3, 3 ]
[ -1, -1, -1, -1, -1, 2, 3, 5, 4 ]
[ "pd6XappVy10", "OmUIjdQuT-r", "MXt1h2BBnf6", "T-HuJ0gO239", "iclr_2022_XyVXPuuO_P", "iclr_2022_XyVXPuuO_P", "iclr_2022_XyVXPuuO_P", "iclr_2022_XyVXPuuO_P", "iclr_2022_XyVXPuuO_P" ]
iclr_2022_4KOJ5XJ_z5W
Improving State-of-the-Art in One-Class Classification by Leveraging Unlabeled Data
Recent advances in One-Class (OC) classification combine the ability to learn exclusively from positive examples with the expressive power of deep neural networks. A cornerstone of OC methods is to make assumptions regarding the negative distribution, e.g., that negative data are scattered uniformly or concentrated at the origin. An alternative approach employed in Positive-Unlabeled (PU) learning is to additionally leverage unlabeled data to approximate the negative distribution more precisely. In this paper, our goal is to find the best ways to utilize unlabeled data on top of positive data in different settings. While it is reasonable to expect that PU algorithms outperform OC algorithms due to access to more data, we find that the opposite can be true if unlabeled data is unreliable, i.e. contains negative examples that are either too few or sampled from a different distribution. As an alternative to using existing PU algorithms, we propose to modify OC algorithms to incorporate unlabeled data. We find that such PU modifications can consistently benefit even from unreliable unlabeled data if they satisfy a crucial property: when unlabeled data consists exclusively of positive examples, the PU modification becomes equivalent to the original OC algorithm. Our main practical recommendation is to use state-of-the-art PU algorithms when unlabeled data is reliable and to use PU modifications of state-of-the-art OC algorithms that satisfy the formulated property otherwise. Additionally, we make progress towards distinguishing the cases of reliable and unreliable unlabeled data using statistical tests.
Reject
This paper provides empirical results for one-class classification problems. The studied problem is important and the reviewers acknowledge the challenge it tackles. However, the empirical results are still not insightful enough to provide practical recommendations. Some of the questions raised by the reviewers could potentially have been answered by the authors, but unfortunately we did not receive any feedback. Given that there is essentially no technical novelty in this paper, it cannot be accepted for ICLR.
train
[ "8bLM6sqU80A", "16Zf0kzJ4le", "cwTaU7rNhAK", "jtf6bcUsMhY" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The work presents an empirical study of extending one-class (OC) classification to PU learning in order to leverage unlabeled data more effectively, especially when the unlabeled data contains only a small proportion of positive samples. Five OC classification models are used and evaluated on a few classification ...
[ 3, 5, 5, 3 ]
[ 4, 3, 4, 3 ]
[ "iclr_2022_4KOJ5XJ_z5W", "iclr_2022_4KOJ5XJ_z5W", "iclr_2022_4KOJ5XJ_z5W", "iclr_2022_4KOJ5XJ_z5W" ]
iclr_2022_e_D6AmszH4P
ViViT: Curvature access through the generalized Gauss-Newton's low-rank structure
Curvature in form of the Hessian or its generalized Gauss-Newton (GGN) approximation is valuable for algorithms that rely on a local model for the loss to train, compress, or explain deep networks. Existing methods based on implicit multiplication via automatic differentiation or Kronecker-factored block diagonal approximations do not consider noise in the mini-batch. We present ViViT, a curvature model that leverages the GGN's low-rank structure without further approximations. It allows for efficient computation of eigenvalues, eigenvectors, as well as per-sample first- and second-order directional derivatives. The representation is computed in parallel with gradients in one backward pass and offers a fine-grained cost-accuracy trade-off, which allows it to scale. As examples for ViViT's usefulness, we investigate the directional first- and second-order derivatives during training, and how noise information can be used to improve the stability of second-order methods.
Reject
A method for efficient exact computation of the generalized Gauss-Newton matrix is given. Using this method the authors provide several empirical observations of first and second order statistics of neural networks during training. Additionally the authors use the tool to propose a new damping technique that some reviewers found particularly interesting. Reviewers noted that the low-rank decomposition the authors provide is not new, and has been used in prior work, although the trick may not be widely known within the deep learning community. As such, novelty is not a strength of the work, and reviewers suggested the authors could strengthen it with a convincing demonstration that the method can be made to work at scale, as well as by providing more detailed runtime and memory comparisons with other approaches to calculating the GGN matrix. Although the authors agreed with the reviewer suggestions, the paper was not updated during the rebuttal period. As such, I recommend the authors resubmit with the proposed revisions.
test
[ "yTT5aSXXyxu", "JlmOIGjMY3", "OfQg0Ef2DWf", "TDgykBndRNu" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper describes a way of leveraging the low-rank structure of the generalized Gauss-Newton (GGN) matrix, and a library that extends BackPACK's features in order to efficiently compute:\n - the spectrum (eigenvalues, eigenvectors) of the GGN\n - per sample directional derivatives and curvature of the GGN\n\nTh...
[ 5, 6, 5, 5 ]
[ 4, 4, 2, 4 ]
[ "iclr_2022_e_D6AmszH4P", "iclr_2022_e_D6AmszH4P", "iclr_2022_e_D6AmszH4P", "iclr_2022_e_D6AmszH4P" ]
iclr_2022_UXwlFxVWks
Divergent representations of ethological visual inputs emerge from supervised, unsupervised, and reinforcement learning
Artificial neural systems trained using reinforcement, supervised, and unsupervised learning all acquire internal representations of high dimensional input. To what extent these representations depend on the different learning objectives is largely unknown. Here we compare the representations learned by eight different convolutional neural networks, each with identical ResNet architectures and trained on the same family of egocentric images, but embedded within different learning systems. Specifically, the representations are trained to guide action in a compound reinforcement learning task; to predict one or a combination of three task-related targets with supervision; or using one of three different unsupervised objectives. Using representational similarity analysis, we find that the network trained with reinforcement learning differs most from the other networks. Through further analysis using metrics inspired by the neuroscience literature, we find that the model trained with reinforcement learning has a high-dimensional representation wherein individual images are represented with very different patterns of neural activity. These representations seem to arise in order to guide long-term behavior and goal-seeking in the RL agent. Our results provide insights into how the properties of neural representations are influenced by objective functions and can inform transfer learning approaches.
Reject
This paper explores the representations that are learned by the same network, on the same data, but with different objectives/tasks (RL, supervised, unsupervised). Though the reviewers were positive about some aspects of the paper, the reviews were generally low (3,3,3,6) and indicated rejection. The principal recurring theme in the reviews as to why this was a rejection was the lack of clear motivations/implications. The authors decided not to submit an updated version of the paper. As such, this was a reject decision.
val
[ "V43T2DK0FY", "qpe5ryPDfiZ", "qqo8c099RjX", "4-EMiO__CQ", "l8V0NP1aw_H" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper compares the representations learned by otherwise identical networks (up to the output layer) for different tasks: supervised (4 tasks), unsupervised (autoencoders, vanilla and variational), self-supervised predictive coding, untrained (randomly initialized), and one RL policy network. They are all trai...
[ 3, -1, 3, 3, 6 ]
[ 4, -1, 3, 4, 3 ]
[ "iclr_2022_UXwlFxVWks", "iclr_2022_UXwlFxVWks", "iclr_2022_UXwlFxVWks", "iclr_2022_UXwlFxVWks", "iclr_2022_UXwlFxVWks" ]
iclr_2022_F9McnN1dITx
Evolving Neural Update Rules for Sequence Learning
We consider the problem of searching, end to end, for effective weight and activation update rules governing online learning of a recurrent network on problems of character sequence memorisation and prediction. We experiment with a number of functional forms and find that the performance depends on them significantly. We find update rules that allow us to scale to a much larger number of recurrent units and much longer sequence lengths than has been achieved with this approach previously. We also find that natural evolution strategies significantly outperform meta-gradients on this problem, aligning with previous studies suggesting that such evolutionary strategies are more robust than gradient back-propagation over sequences with thousands of steps.
Reject
The paper addresses an interesting problem, namely how to evolve effective weight and activation update rules for online learning of a recurrent network. The work focuses on two specific tasks: character sequence memorisation and prediction. Two approaches based on meta-gradients and evolutionary strategies are explored. Unfortunately the paper is missing some important related works. Moreover, the presentation needs to be improved, and the experimental assessment should be expanded both in terms of tasks and in terms of comparable models from the literature.
train
[ "zFkwWdqOsVu", "0Yb7hHGrGEl", "4PXyTXOIzJ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper presents an approach that trains local Hebbian learning rules to allow a neural network to perform reasonably well in two types of problems: sequence memorization and prediction. Two approaches to train these models are compared, which are based on meta-gradients and evolutionary strategies. The evolved...
[ 3, 3, 5 ]
[ 3, 3, 3 ]
[ "iclr_2022_F9McnN1dITx", "iclr_2022_F9McnN1dITx", "iclr_2022_F9McnN1dITx" ]
iclr_2022_CC-BbehJKTe
Building the Building Blocks: From Simplification to Winning Trees in Genetic Programming
Genetic Programming (GP) represents a powerful paradigm in diverse real-world applications. While GP can reach optimal (or at least ``good-enough'') solutions for many problems, such solutions are not without deficiencies. A frequent issue stems from the representation perspective where GP evolves solutions that contain unnecessary parts, known as program bloat. This paper first investigates a combination of deterministic and random simplification to simplify the solutions while having a (relatively) small influence on the solution fitness. Afterward, we use the solutions to extract their subtrees, which we denote as winning trees. The winning trees can be used to initialize the population for the new GP run and result in improved convergence and fitness, provided some conditions on the size of solutions and winning trees are fulfilled. To experimentally validate our approach, we consider several synthetic benchmark problems and real-world symbolic regression problems.
Reject
This paper explores the hypothesis that bloat can be prevented in Genetic Programming by identifying "winning subtrees" from simplified solutions and using these to seed new GP runs. This idea is connected with the lottery ticket hypothesis in deep learning. Reviewers are unanimous that the paper as it stands is not ready for publication. One big issue is that the empirical results are not particularly good. Another is that the conceptual foundations of the paper, in particular the parallel to the lottery ticket hypothesis, might be flawed. Nevertheless, there is much interesting research to do in this direction.
val
[ "ZYc-jA5p0b4", "TtNceR_H2RP", "XhkuHYuYRkO", "DI3Zy6zOkdQ", "IsihpgDCLMN" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to thank the reviewers for their effort and comments. We agree with most of the comments and will continue research to improve our work.", "Inspired by (Frankle & Carbin, 2019), the paper proposes to find winning trees from a set of GP solutions as initialisation for another GP procedure.\nWinning...
[ -1, 3, 1, 3, 3 ]
[ -1, 3, 5, 4, 5 ]
[ "iclr_2022_CC-BbehJKTe", "iclr_2022_CC-BbehJKTe", "iclr_2022_CC-BbehJKTe", "iclr_2022_CC-BbehJKTe", "iclr_2022_CC-BbehJKTe" ]
iclr_2022_RGrj2uWTLWY
PI-GNN: Towards Robust Semi-Supervised Node Classification against Noisy Labels
Semi-supervised node classification on graphs is a fundamental problem in graph mining that uses a small set of labeled nodes and many unlabeled nodes for training, so that its performance is quite sensitive to the quality of the node labels. However, it is expensive to maintain the label quality for real-world graph datasets, which presents huge challenges for the learning algorithm to keep a good generalization ability. In this paper, we propose a novel robust learning objective dubbed pairwise interactions (PI) for models such as Graph Neural Networks (GNNs) to combat noisy labels. Unlike classic robust training approaches that operate on the pointwise interactions between node and class label pairs, PI explicitly forces the embeddings for node pairs that hold a positive PI label to be close to each other, which can be applied to both labeled and unlabeled nodes. We design several instantiations for the PI labels based on the graph structure as well as node class labels, and further propose a new uncertainty-aware training technique to mitigate the negative effect of the sub-optimal PI labels. Extensive experiments on different datasets and GNN architectures demonstrate the effectiveness of PI, which also brings a promising improvement over the state-of-the-art methods.
Reject
The paper studies the noisy labels problem in semi-supervised node classification and proposes a method that leverages pairwise interactions to explicitly force the embeddings for certain node pairs to be close to each other leading to better robustness. The reviewers agreed the proposed method is promising. However, the reviewers also had concerns about the novelty, and that certain aspects of the method could be justified better and the experiments should consider larger scale settings to make the paper more convincing. These were the key reasons for rejection.
train
[ "B8FS2zQhsjV", "pPsOTq2CNE0", "oFHARtESNUi", "uni3TJPTpOg" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper focuses on semi-supervised node classification with noisy node labels. The authors propose a novel learning objective called the pairwise interactions, which encourages node pairs holding positive PI labels to have close node embeddings. Extensive experiments show promising results. Strengths:\n\n1. The...
[ 5, 5, 5, 3 ]
[ 4, 3, 4, 4 ]
[ "iclr_2022_RGrj2uWTLWY", "iclr_2022_RGrj2uWTLWY", "iclr_2022_RGrj2uWTLWY", "iclr_2022_RGrj2uWTLWY" ]
iclr_2022_wIK1fWFXvU9
Understanding the Interaction of Adversarial Training with Noisy Labels
Noisy labels (NL) and adversarial examples both undermine trained models, but interestingly they have hitherto been studied independently. A recent adversarial training (AT) study showed that the number of projected gradient descent (PGD) steps to successfully attack a point (i.e., find an adversarial example in its proximity) is an effective measure of the robustness of this point. Given that natural data are clean, this measure reveals an intrinsic geometric property---how far a point is from its nearest class boundary. Based on this breakthrough, in this paper, we figure out how AT would interact with NL. Firstly, we find if a point is too close to its noisy-class boundary (e.g., one step is enough to attack it), this point is likely to be mislabeled, which suggests to adopt the number of PGD steps as a new criterion for sample selection to correct NL. Secondly, we confirm that AT with strong smoothing effects suffers less from NL (without NL corrections) than standard training, which suggests that AT itself is an NL correction. Hence, AT with NL is helpful for improving even the natural accuracy, which again illustrates the superiority of AT as a general-purpose robust learning criterion.
Reject
All reviewers agree that the proposed idea looks interesting but the paper is seriously lacking in the definition of its scope: there is no quantitative result, experiments are quite limited, and there is not enough discussion of the limitations. With more work this could become a very interesting paper.
val
[ "UIKVfg6I8Nv", "-Yi6NQ7dNjJ", "PwJCYW0kkRt", "rU4Rdxfy9C" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This submission empirically studies the efficacy of adversarial training for mitigating the effect of label noise in training data. Their findings are as follows:\n1) \"Smoothing effect\" of adversarial training:\n\ta) on a 2-dimensional synthetic binary classification dataset where two points are incorrectly labe...
[ 5, 6, 5, 3 ]
[ 3, 3, 4, 5 ]
[ "iclr_2022_wIK1fWFXvU9", "iclr_2022_wIK1fWFXvU9", "iclr_2022_wIK1fWFXvU9", "iclr_2022_wIK1fWFXvU9" ]
iclr_2022_62r41yOG5m
Inducing Reusable Skills From Demonstrations with Option-Controller Network
Humans can decompose previous experiences into skills and reuse them to enable fast learning in the future. Inspired by this process, we propose a new model called Option-Controller Network (OCN), which is a bi-level recurrent policy network composed of a high-level controller and a pool of low-level options. The options are disconnected from any task-specific information to model task-agnostic skills. The controller uses options to solve a given task, calling one option at a time and waiting until the option returns. With the isolation of information and the synchronous calling mechanism, we can impose a division of work between the controller and options in an end-to-end training regime. In experiments, we first perform behavior cloning from unstructured demonstrations coming from different tasks. We then freeze the learned options and learn a new controller with an RL algorithm to solve a new task. Extensive results on discrete and continuous environments show that OCN can jointly learn to decompose unstructured demonstrations into skills and model each skill with separate options. The learned options provide a good temporal abstraction, allowing OCN to quickly transfer to tasks with a novel combination of learned skills even with sparse reward, while previous methods either suffer from the delayed reward problem due to the lack of temporal abstraction or from a complicated option-controlling mechanism that increases the complexity of exploration.
Reject
Description of paper content: The paper describes a technique to learn option policies using behavioral cloning and then recombine them using a high-level controller trained by RL. The underlying options are frozen. The method is tested in two published environments: a discrete grid world environment and a continuous action space robot. It is compared to three baselines. Summary of paper discussion: All reviewers moved to reject based on a lack of novelty and a lack of significant empirical results. No rebuttals were provided.
train
[ "QPCUPk2sa6", "HiyCacDAp38", "l8SAMYCNCJ0", "V_hnchSCi7c", "UZ0S-wUVswj", "9fWGwpYq_H1", "R9Xvl1tmi2R", "P7aHX1oW8KX" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Given missing author response and reviewers' consensus on insufficient clarity and weak empirical results, I would maintain my original rating, weak rejection.", " Since there lacks of the author's response, I will stick to my initial score of 5. I hope the authors improve the paper for a future venue.", " Gi...
[ -1, -1, -1, -1, 5, 5, 3, 3 ]
[ -1, -1, -1, -1, 4, 5, 4, 5 ]
[ "9fWGwpYq_H1", "UZ0S-wUVswj", "P7aHX1oW8KX", "R9Xvl1tmi2R", "iclr_2022_62r41yOG5m", "iclr_2022_62r41yOG5m", "iclr_2022_62r41yOG5m", "iclr_2022_62r41yOG5m" ]
iclr_2022_CSw5zgTjXyb
Learning to Collaborate
In this paper, we focus on effective learning over a collaborative research network involving multiple clients. Each client has its own sample population which may not be shared with other clients due to privacy concerns. The goal is to learn a model for each client, which behaves better than the one learned from its own data, through secure collaborations with other clients in the network. Due to the discrepancies of the sample distributions across different clients, collaborating with everyone will not necessarily lead to the best local models. We propose a learning to collaborate framework, where each client can choose to collaborate with certain members in the network to achieve a ``collaboration equilibrium'', where smaller collaboration coalitions are formed within the network so that each client can obtain the model with the best utility. We propose the concept of a benefit graph which describes how each client can benefit from collaborating with other clients and develop a Pareto optimization approach to obtain it. Finally, the collaboration coalitions can be derived from it based on graph operations. Our framework provides a new way of setting up collaborations in a research network. Experiments on both synthetic and real world data sets are provided to demonstrate the effectiveness of our method.
Reject
The paper proposes a model of agent collaboration to improve outcomes for any participating agent in a setting where every agent does not always benefit from collaborating with all other agents. The reviewers did find some of the theoretical results interesting, however, in its current (revised) form, they still argued during the discussion post-rebuttal that: (i) the game theoretic formulation of this problem is not entirely new and has been studied in various forms before and (ii) the particular application of the results to federated learning comes after making various (questionable) assumptions. I would encourage the authors to take into account (i-ii) for preparing a revised version of their paper and resubmit to another conference.
train
[ "ADG_aV3YASW", "-jmUu5l7FkJ", "eNhJB3Ji4Qp", "PmSr2jkHc1m", "Ez6Y3pxi9C", "rlChbT2vp_H", "fxrWsDSSv-Z", "VE_lM9J4PeU", "qM4PFu5t5-t", "XF2N4KThG7U", "QfIxYVNwvnT", "n6HuoObCZnF", "pj_nF08B1fY", "NcLSxkX7iho", "M3VhosW8iIM" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks very much for your constructive comments on our work. We have tried our best to address the concerns. Is there any unclear point so that we should/could further clarify?", " Thanks very much for your constructive comments on our work. We have tried our best to address the concerns. Is there any unclear p...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 5, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 3, 2 ]
[ "NcLSxkX7iho", "pj_nF08B1fY", "n6HuoObCZnF", "n6HuoObCZnF", "NcLSxkX7iho", "iclr_2022_CSw5zgTjXyb", "n6HuoObCZnF", "n6HuoObCZnF", "NcLSxkX7iho", "M3VhosW8iIM", "pj_nF08B1fY", "iclr_2022_CSw5zgTjXyb", "iclr_2022_CSw5zgTjXyb", "iclr_2022_CSw5zgTjXyb", "iclr_2022_CSw5zgTjXyb" ]
iclr_2022_AS0dhAKIYA0
Interpretable Semantic Role Relation Table for Supporting Facts Recognition of Reading Comprehension
Current Machine Reading Comprehension (MRC) models have poor interpretability. Interpretable semantic features can enhance the interpretability of the model. Semantic role labeling (SRL) captures predicate-argument relations, such as "who did what to whom," which are critical to comprehension and interpretation. To enhance the interpretability of the model, we propose the semantic role relation table, which represents the semantic relations within a sentence and the semantic relations among sentences. We integrate the names of entities into the semantic role relation table to establish the semantic relations between sentences. This paper makes the first attempt to utilize the explicit relations of contextual semantics for recognizing supporting facts in reading comprehension. We establish nine semantic relation tables between the target sentence, the question, and the article. We then take each table's overall semantic role relevance and each individual semantic role relevance as evidence for the importance judgment. In detailed experiments on HotpotQA, a challenging multi-hop MRC data set, our method achieves better performance. Even with few training samples, the model's performance remains stable.
Reject
All reviewers agree that this paper does not meet the bar for ICLR. The reviewers provide detailed feedback to the authors on how to improve the writing as well as the overall content of the paper.
train
[ "mNUtj_XeGw", "EVzqH4ayCKx", "6O4txYUBGps", "LCglX6h_UJH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This work proposes a model architecture for reading comprehension based on a large number of binary features corresponding to propositional overlap (in terms of SRL and, as far as I can tell, token surface forms or lemmas) between sentences in a passage and a reading comprehension question. These features are arra...
[ 1, 3, 1, 5 ]
[ 4, 4, 4, 3 ]
[ "iclr_2022_AS0dhAKIYA0", "iclr_2022_AS0dhAKIYA0", "iclr_2022_AS0dhAKIYA0", "iclr_2022_AS0dhAKIYA0" ]
iclr_2022_QvTH9nN2Io
Relative Entropy Gradient Sampler for Unnormalized Distributions
We propose a relative entropy gradient sampler (REGS) for sampling from unnormalized distributions. REGS is a particle method that seeks a sequence of simple nonlinear transforms iteratively pushing the initial samples from a reference distribution into the samples from an unnormalized target distribution. To determine the nonlinear transforms at each iteration, we consider the Wasserstein gradient flow of relative entropy. This gradient flow determines a path of probability distributions that interpolates the reference distribution and the target distribution. It is characterized by an ODE system with velocity fields depending on the density ratios of the density of evolving particles and the unnormalized target density. To sample with REGS, we need to estimate the density ratios and simulate the ODE system with particle evolution. We propose a novel nonparametric approach to estimating the logarithmic density ratio using neural networks. Extensive simulation studies on challenging multimodal 1D and 2D distributions and Bayesian logistic regression on real datasets demonstrate that the REGS outperforms the state-of-the-art sampling methods included in the comparison.
Reject
The paper proposes a sampling technique for unnormalized distributions. The main idea is to gradually transform particles by following the gradient flow of the relative entropy in the Wasserstein space of probability distributions. The paper tackles an important problem and provides an interesting new perspective. However, even putting aside the concerns on the theoretical analysis raised by the reviewers, the experimental evaluation does not seem sufficient to demonstrate the benefits of the proposed approach.
train
[ "JU7oxJq9l3L", "p9c2whKx1Jb", "M7GeCW9RDfg", "dxzNIx6l0K5", "z8EF2HWYz-b", "rZkSENZ06RT", "GzTQVf4m7_a", "sgwbWHGkf5U", "NqpO1ndVYGD", "GkCJ1aCrhAE", "AprGtP9wKaV", "zfqaImku6o", "aYbIHqkaSk1", "_l7F_PfUPhk" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I read the author's response.\n\nUnfortunately, I still have to stick with my initial assessment of rejecting the paper. The issue that I have is that after all the modifications the authors said they will carry, the paper will be too different from the originally submitted one. Thus, I am required to review a pa...
[ -1, -1, -1, 1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 3 ]
[ -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "sgwbWHGkf5U", "z8EF2HWYz-b", "GkCJ1aCrhAE", "iclr_2022_QvTH9nN2Io", "rZkSENZ06RT", "GzTQVf4m7_a", "AprGtP9wKaV", "_l7F_PfUPhk", "aYbIHqkaSk1", "zfqaImku6o", "dxzNIx6l0K5", "iclr_2022_QvTH9nN2Io", "iclr_2022_QvTH9nN2Io", "iclr_2022_QvTH9nN2Io" ]
iclr_2022_Rh3khfuQUYk
Iterative Decoding for Compositional Generalization in Transformers
Deep learning models do well at generalizing to in-distribution data but struggle to generalize compositionally, i.e., to combine a set of learned primitives to solve more complex tasks. In particular, in sequence-to-sequence (seq2seq) learning, transformers are often unable to predict even marginally longer examples than those seen during training. This paper introduces iterative decoding, an alternative to seq2seq learning that (i) improves transformer compositional generalization and (ii) evidences that, in general, seq2seq transformers do not learn iterations that are not unrolled. Inspired by the idea of compositionality---that complex tasks can be solved by composing basic primitives---training examples are broken down into a sequence of intermediate steps that the transformer then learns iteratively. At inference time, the intermediate outputs are fed back to the transformer as intermediate inputs until an end-of-iteration token is predicted. Through numerical experiments, we show that transformers trained via iterative decoding outperform their seq2seq counterparts on the PCFG dataset, and solve the problem of calculating Cartesian products between vectors longer than those seen during training with 100% accuracy, a task at which seq2seq models have been shown to fail. We also illustrate a limitation of iterative decoding, specifically, that it can make sorting harder to learn on the CFQ dataset.
Reject
All reviewers raise issues with the proposed method: (a) whether it is applicable to non-synthetic tasks/datasets; (b) how the input could be broken down into intermediate subproblems in a principled way, and whether this would make the proposal substantially slower than the vanilla encoder/decoder framework; and (c) limited awareness of previous work. It is a shame the authors did not provide a response; however, the reviewers have provided useful feedback they could use to improve their submission.
train
[ "BImZlrLdOIb", "EFKYgjWbzfT", "OcK10eIlZmV", "WCn08KKJayy" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper introduces iterative decoding to improve the compositional generalization ability of seq2seq models. Iterative decoding predicts a series of intermediate outputs from an input, and then adapting these outputs into intermediate inputs that are fed back to the model until a sequence containing an end-of-i...
[ 3, 3, 5, 3 ]
[ 4, 4, 5, 4 ]
[ "iclr_2022_Rh3khfuQUYk", "iclr_2022_Rh3khfuQUYk", "iclr_2022_Rh3khfuQUYk", "iclr_2022_Rh3khfuQUYk" ]
iclr_2022_s6roE3ZocH1
Genetic Algorithm for Constrained Molecular Inverse Design
A genetic algorithm is suitable for exploring large search spaces, as it finds approximate solutions. Because of this advantage, genetic algorithms are effective in exploring vast and unknown spaces such as the molecular search space. Although the algorithm is suitable for searching the vast chemical space, it is difficult to optimize pharmacological properties while maintaining the molecular substructure. To solve this issue, we introduce a genetic algorithm featuring constrained molecular inverse design. The proposed algorithm successfully produces valid molecules through crossover and mutation. Furthermore, it optimizes specific properties while adhering to structural constraints using a two-phase optimization. Experiments show that our algorithm effectively finds molecules that satisfy specific properties while maintaining structural constraints.
Reject
The paper describes a genetic algorithm for molecular optimization under constraints. The aim is to generate molecules with better properties while staying close to an initial lead molecule. The proposed approach is a two-stage one. The first stage aims to satisfy constraints and searches for feasible molecules that are similar to the lead. The second stage optimizes the molecular property. The method is evaluated on a logP optimization task, with minor improvement over previous work. The reviewers point out the following strengths and weaknesses:
Strengths:
- Molecular optimization under structural constraints is an important research direction.
- Comprehensive related work section.
Weaknesses:
- Lack of novelty, because it is a standard application of a genetic algorithm.
- The results show that the proposed method did not outperform existing baselines.
- The main claim of the paper (the benefit of the two-stage procedure) is not supported by an ablation study.
- The authors only conduct experiments on improving logP, a benchmark that is too easy and not challenging.
- The objective function and crossover operation are the same as or very similar to previous work.
- The experimental evaluation is limited, and the overall setting is not very relevant to real-world tasks.
Overall, all reviewers vote for rejection. It is clear that the paper needs more work before it can be published.
train
[ "6ZCdYnOz6fy", "3o7QaXDQOUB", "WIzbjU6G6XO", "Ek4HbkzKh5" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper presents a genetic algorithm framework for molecular optimization. It produces valid molecules through the crossover and mutation operations along with appropriate fitness functions. The evaluation was done in one of the property optimization tasks. \n \nStrength:\n1. The whole framework well adapts exi...
[ 3, 3, 3, 3 ]
[ 4, 5, 4, 4 ]
[ "iclr_2022_s6roE3ZocH1", "iclr_2022_s6roE3ZocH1", "iclr_2022_s6roE3ZocH1", "iclr_2022_s6roE3ZocH1" ]
iclr_2022_f4c4JtbHJ7B
Pixab-CAM: Attend Pixel, not Channel
To understand the internal behaviors of convolution neural networks (CNNs), many class activation mapping (CAM) based methods, which generate an explanation map by a linear combination of channels and corresponding weights, have been proposed. Previous CAM-based methods have tried to define a channel-wise weight that represents the importance of a channel for the target class. However, these methods have two common limitations. First, all pixels in the channel share a single scalar value. If the pixels are tied to a specific value, some of them are overestimated. Second, since the explanation map is the result of a linear combination of channels in the activation tensor, it is inevitably dependent on the activation tensor. To address these issues, we propose gradient-free Pixel-wise Ablation-CAM (Pixab-CAM), which utilizes pixel-wise weights rather than channel-wise weights to break the link between pixels in a channel. In addition, in order not to generate an explanation map dependent on the activation tensor, the explanation map is generated only with pixel-wise weights without linear combination with the activation tensor. In this paper, we also propose novel evaluation metrics to measure the quality of explanation maps using an adversarial attack. We demonstrate through experiments the qualitative and quantitative superiority of Pixab-CAM.
Reject
This work received borderline ratings with a slight preference toward rejection. The main concerns range from writing and novelty to empirical evaluation. Given that no author response was submitted, we have decided to reject this work.
train
[ "0uQweSYxzsF", "DEyHycrS5hy", "erTlIqe6sgC", "iW0mMxoGDv1" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes an ablation-based method to produce class activation maps, as well as two new evaluation procedures based on adversarial examples. Strengths:\n\nMany of the ideas in the paper sound interesting.\n\nWeaknesses:\n\nThe paper is not written or structured clearly. The authors make many claims that...
[ 3, 5, 6, 3 ]
[ 4, 4, 4, 4 ]
[ "iclr_2022_f4c4JtbHJ7B", "iclr_2022_f4c4JtbHJ7B", "iclr_2022_f4c4JtbHJ7B", "iclr_2022_f4c4JtbHJ7B" ]
iclr_2022_Rupm2vTg1pe
The Infinite Contextual Graph Markov Model
The Contextual Graph Markov Model is a deep, unsupervised, and probabilistic model for graphs that is trained incrementally on a layer-by-layer basis. As with most Deep Graph Networks, an inherent limitation is the lack of an automatic mechanism to choose the size of each layer's latent representation. In this paper, we circumvent the problem by extending the Contextual Graph Markov Model with Hierarchical Dirichlet Processes. The resulting model for graphs can automatically adjust the complexity of each layer without the need to perform an extensive model selection. To improve the scalability of the method, we introduce a novel approximated inference procedure that better deals with larger graph topologies. The quality of the learned unsupervised representations is then evaluated across a set of eight graph classification tasks, showing competitive performances against end-to-end supervised methods. The analysis is complemented by studies on the importance of depth, hyper-parameters, and compression of the graph embeddings. We believe this to be an important step towards the theoretically grounded and automatic construction of deep probabilistic architectures for graphs.
Reject
This paper extends the Contextual Graph Markov Model, a deep unsupervised probabilistic approach. The key idea is to leverage Hierarchical Dirichlet Processes to automatically determine the size of each layer's latent representation. The paper conducts experiments on graph classification tasks to show the superiority of the proposed method.
Strengths:
* A new method is proposed.
* The proposed method appears to be sound.
* Experiments are conducted to demonstrate the effectiveness.
Weaknesses:
* The novelty and significance of the work are not enough.
* The improvements over existing methods are not significant.
* The proposed method is also not very general.
-----------
After rebuttal: Reviewer ynws, who gave the highest score, says "I agree with the overall review of the paper by other reviewers. The proposed method is limited to the CGMM model and not generic enough to extend to other more popular graph neural networks. The improvements don't seem to be significant enough as well."
train
[ "kZ88NqefVPG", "tZNapdW0XXi", "4fV2M0P4PS8", "4-nIba52Hko" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper is an extension of the Contextual Graph Markov Model, a deep unsupervised probabilistic approach for modeling graph data. The key idea is to leverage Hierarchical Dirichlet Processes, which enables the proposed approach to automatically choose the size of each layer’s latent representation. The authors ...
[ 5, 8, 5, 5 ]
[ 4, 3, 3, 2 ]
[ "iclr_2022_Rupm2vTg1pe", "iclr_2022_Rupm2vTg1pe", "iclr_2022_Rupm2vTg1pe", "iclr_2022_Rupm2vTg1pe" ]
iclr_2022_SC6JbEviuD0
White Paper Assistance: A Step Forward Beyond the Shortcut Learning
The promising performances of CNNs often overshadow the need to examine whether they are behaving in the way we actually intend. We show through experiments that even over-parameterized models still solve a dataset by recklessly leveraging spurious correlations, or so-called ``shortcuts''. To combat this unintended propensity, we borrow the idea of the printer test page and propose a novel approach called White Paper Assistance. Our proposed method is two-fold: (a) we intentionally involve the white paper to detect the extent to which the model has a preference for certain characterized patterns, and (b) we debias the model by enforcing it to make a random guess on the white paper. We show consistent accuracy improvements across various architectures, datasets, and combinations with other techniques. Experiments have also demonstrated the versatility of our approach on imbalanced classification and its robustness to corruptions.
Reject
This paper studies the "shortcut" learning phenomenon in CNNs and proposes a simple and effective strategy (white paper) to alleviate specific shortcut patterns (e.g., "black squares" in the image). The proposed scheme is verified empirically and shown to improve over some existing solutions. All reviewers appreciate the simplicity of the idea, which allows its quick implementation and reproducibility. However, reviewers y5Su and C42n believe the notion of shortcuts as studied in this paper is not only very limited but also artificial. Consequently, they raise doubts about the practical relevance/significance of the method for real-world datasets with natural shortcuts. Based on these concerns, I suggest the authors identify a real setting (non-artificial data) where, alongside their synthetic shortcuts, they can show the practical effectiveness of the proposed method.
train
[ "BWfaJIBE5Np", "OVkxziY_iDY", "JFVqnzV7w0L", "mmcqaLTf11M", "H6F0Sv9EMMe", "nsRqSCkrX9a", "D1zbSJ9pDST", "oDkfIEY8ZJ", "yATAvuHKXq", "VeFrOiYAaAb", "hojS7FzpmZ", "J8eQiDdph6", "KQU0uyDG8WA" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the explanations, I strongly recommend adding these to the main paper. In general, it is good practice to avoid umbrella statements like \"laborious and expensive\" and replace them with explicit statements like the ones you've provided in your response. I've read the other reviews as well, and whil...
[ -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, 5, 8, 1 ]
[ -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, 3, 4, 5 ]
[ "yATAvuHKXq", "mmcqaLTf11M", "nsRqSCkrX9a", "oDkfIEY8ZJ", "iclr_2022_SC6JbEviuD0", "D1zbSJ9pDST", "H6F0Sv9EMMe", "KQU0uyDG8WA", "J8eQiDdph6", "hojS7FzpmZ", "iclr_2022_SC6JbEviuD0", "iclr_2022_SC6JbEviuD0", "iclr_2022_SC6JbEviuD0" ]
iclr_2022_Zca3NK3X8G
WaveCorr: Deep Reinforcement Learning with Permutation Invariant Policy Networks for Portfolio Management
The problem of portfolio management represents an important and challenging class of dynamic decision making problems, where rebalancing decisions need to be made over time with consideration of many factors such as investors' preferences, trading environment, and market conditions. In this paper, we present a new portfolio policy network architecture for deep reinforcement learning (DRL) that can exploit cross-asset dependency information more effectively and achieve better performance than state-of-the-art architectures. In doing so, we introduce a new form of permutation invariance property for policy networks and derive general theory for verifying its applicability. Our portfolio policy network, named WaveCorr, is the first convolutional neural network architecture that preserves this invariance property when treating asset correlation information. Finally, in a set of experiments conducted using data from both Canadian (TSX) and American stock markets (S\&P 500), WaveCorr consistently outperforms other architectures with an impressive 3\%-25\% absolute improvement in terms of average annual return, and up to more than 200\% relative improvement in average Sharpe ratio. We also measured an improvement of a factor of up to 5 in the stability of performance under random choices of initial asset ordering and weights. The stability of the network has been found particularly valuable by our industrial partner.
Reject
This paper proposes an architecture of a policy network (WaveCorr) that is particularly effective for portfolio management tasks. A key observation that leads to the design of WaveCorr is that the dependency across assets should be treated differently from the dependency across time. The proposed WaveCorr has the property that it is "permutation invariant" with respect to assets, which means that the class of functions that can be represented by WaveCorr is invariant to permutation of assets. WaveCorr is shown to achieve state-of-the-art performance in a portfolio management task. A major point of discussion was the definition of "permutation invariance". The reviewers and AC understood the difference between the permutation invariance defined in this paper and that studied in prior work (the output of a network is insensitive to the permutation of the particular values of the input). With the definition in this paper, however, a fully connected layer is permutation invariant, but the Corr layer proposed in the paper appears to have more structure. It is unclear exactly what properties of the Corr layer lead to the performance improvement.
train
[ "-5lv17XqT-", "pSEkAq-58JK", "n_cRYxyNR9e", "SBO0rizagF9", "qF4tl5I1ZJJ", "CotRr1rxBbC", "F4tQxxkd9iQ", "DprW4EezHD", "WyD_i3w-wwG", "z7njwL3vXf" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the response. After reading the response, the reviewer still has concerns about Definition 3.1. It is unclear to the reviewer what's the main difference between the proposed \"PI property\" and the \"permutation equivariance\" property [a, b], which can be achieved by graph nets with parameter sharing ...
[ -1, -1, -1, -1, -1, -1, 8, 5, 5, 8 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 2, 3 ]
[ "SBO0rizagF9", "iclr_2022_Zca3NK3X8G", "z7njwL3vXf", "WyD_i3w-wwG", "DprW4EezHD", "F4tQxxkd9iQ", "iclr_2022_Zca3NK3X8G", "iclr_2022_Zca3NK3X8G", "iclr_2022_Zca3NK3X8G", "iclr_2022_Zca3NK3X8G" ]
iclr_2022_auLXcGlEOZ7
On Margin Maximization in Linear and ReLU Networks
The implicit bias of neural networks has been extensively studied in recent years. Lyu and Li [2019] showed that in homogeneous networks trained with the exponential or the logistic loss, gradient flow converges to a KKT point of the max margin problem in the parameter space. However, that leaves open the question of whether this point will generally be an actual optimum of the max margin problem. In this paper, we study this question in detail, for several neural network architectures involving linear and ReLU activations. Perhaps surprisingly, we show that in many cases, the KKT point is not even a local optimum of the max margin problem. On the flip side, we identify multiple settings where a local or global optimum can be guaranteed. Finally, we answer a question posed in Lyu and Li [2019] by showing that for non-homogeneous networks, the normalized margin may strictly decrease over time.
Reject
This paper studies margin maximization in linear and ReLU networks. The reviewers appreciate the technical contributions of this paper, especially the simple counterexamples. However, the reviewers also found that the new results do not seem to give enough conceptual insight or an important "main result". The meta-reviewer agrees and thus decides to reject this paper.
test
[ "9fxI_4HBRft", "oum_q9dXB-E", "yxKeY-t73At", "yUDwGfe-frW", "SHda-G5eNV", "OWXJJwRMGQX", "lxTcThjrKMp", "OVcyDBW5tf0", "HDWUYVlyhN", "duG_4kxyOqq" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks.\n\nIn the conclusion section we will include a discussion on the possible ways to circumvent our negative results and on their implications.\n\nRegarding the proof techniques: First, in our positive results on depth-2 networks (Theorems 4.2 and 4.3), we show that the solution is a KKT point not only for t...
[ -1, -1, 6, -1, -1, -1, -1, 5, 6, 6 ]
[ -1, -1, 4, -1, -1, -1, -1, 2, 3, 4 ]
[ "oum_q9dXB-E", "OWXJJwRMGQX", "iclr_2022_auLXcGlEOZ7", "duG_4kxyOqq", "yxKeY-t73At", "HDWUYVlyhN", "OVcyDBW5tf0", "iclr_2022_auLXcGlEOZ7", "iclr_2022_auLXcGlEOZ7", "iclr_2022_auLXcGlEOZ7" ]
iclr_2022_Ybx635VOYoM
ContraQA: Question Answering under Contradicting Contexts
With a rise in false, inaccurate, and misleading information in propaganda, news, and social media, real-world Question Answering (QA) systems face the challenges of synthesizing and reasoning over contradicting information to derive correct answers. This urgency gives rise to the need to make QA systems robust to misinformation, a topic previously unexplored. We study the risk of misinformation to QA models by investigating the behavior of the QA model under contradicting contexts that are mixed with both real and fake information. We create the first large-scale dataset for this problem, namely ContraQA, which contains over 10K human-written and model-generated contradicting pairs of contexts. Experiments show that QA models are vulnerable under contradicting contexts brought by misinformation. To defend against such a threat, we build a misinformation-aware QA system as a counter-measure that integrates question answering and misinformation detection in a joint fashion.
Reject
This paper tackles a really interesting and realistic problem: how does contradictory (potentially) fake information affect QA systems? The authors try to approach this problem by building a new dataset, starting with the widely used SQuAD and adding contradictory information. This is quite interesting, but the rest of the paper does not follow through. Reviewers ask a critical question: how would you distinguish the information that is fake, as opposed to valid, truthful information? Without this distinction, how would you train a language model to detect the fakeness and answer the question using the valid information? Unfortunately, the authors did not reply to this critical question, so it is difficult to judge the validity and contributions of this paper. There are also serious ethical implications which are discussed in the ethics review.
train
[ "vC_KHAwCG8", "EfSoh87ouv_", "qhtcZVcC4RQ", "3H_24Hc0HmT" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper addresses the problem of deriving the correct answer when contradicting examples are presented to the model. First, it introduces a dataset for the task. The dataset, ContraQA is built on SQuAD, and it contains contradicting contexts produced by humans and neural-models. Then, it presents a model for ge...
[ 3, 6, 5, 3 ]
[ 4, 3, 4, 4 ]
[ "iclr_2022_Ybx635VOYoM", "iclr_2022_Ybx635VOYoM", "iclr_2022_Ybx635VOYoM", "iclr_2022_Ybx635VOYoM" ]
iclr_2022_QJb1-8NH2Ux
Detecting Adversarial Examples Is (Nearly) As Hard As Classifying Them
Making classifiers robust to adversarial examples is challenging. Thus, many defenses tackle the seemingly easier task of \emph{detecting} perturbed inputs. We show a barrier towards this goal. We prove a general \emph{hardness reduction} between detection and classification of adversarial examples: given a robust detector for attacks at distance $\epsilon$ (in some metric), we show how to build a similarly robust (but inefficient) \emph{classifier} for attacks at distance $\epsilon/2$---and vice-versa. Our reduction is computationally inefficient, and thus cannot be used to build practical classifiers. Instead, it is a useful sanity check to test whether empirical detection results imply something much stronger than the authors presumably anticipated. To illustrate, we revisit $14$ empirical detector defenses published over the past years. For $12/14$ defenses, we show that the claimed detection results imply an inefficient classifier with robustness far beyond the state-of-the-art--- thus casting some doubts on the results' validity. Finally, we show that our reduction applies in both directions: a robust classifier for attacks at distance $\epsilon/2$ implies an inefficient robust detector at distance $\epsilon$. Thus, we argue that robust classification and robust detection should be regarded as (near)-equivalent problems.
Reject
The paper investigates a very interesting problem: the connection between adversarial detection and adversarial classification. Theoretically, the authors show that one can always (ideally) construct a robust classifier from a robust detector with equivalent robustness, and vice versa. However, this theorem holds only without considering computational complexity, and the authors did not provide any approximate results for the reduction steps to verify the feasibility of the theorems in practice, which is the main concern of all reviewers. The paper thus serves as a reminder to the community that we need to be careful about detection results, but it does not provide any evidence that they are overclaimed (only a conjecture based on the theorem in the paper), which greatly limits its contribution. Due to the competitiveness of ICLR, I cannot recommend accepting it.
train
[ "WoVOuT-dFCS", "20z8cRvUTtB", "JnSY77fP0Pv", "WOy8ZYHr6e0", "ipt_YUrnu1t", "WXEvpj3wUu", "dk8LBvSDBWO", "oRMHOMFCEXh", "gU6ymEMYP7u", "ocfqWypsXsF", "YluL2uuGPW7", "1N0oc9qvI7F", "LtW3YqERIMj", "fz25MFRr4vi", "cIauEO8rYBT", "SNLLzT5vbTO", "mOZMCz1Eug4", "Je3wt8pi8FU", "ZY58kIgoV1...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", ...
[ " We disagree that our reduction is not useful because it is not polynomial time.\n\nIn machine learning, the data complexity of a learning algorithm is typically a lot more important than its computational complexity.\n(i.e., we know that we can learn any function given infinite *data*, but given a finite training...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 3, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "20z8cRvUTtB", "ZY58kIgoV1g", "WOy8ZYHr6e0", "ipt_YUrnu1t", "WXEvpj3wUu", "ocfqWypsXsF", "oRMHOMFCEXh", "Je3wt8pi8FU", "1N0oc9qvI7F", "YluL2uuGPW7", "mOZMCz1Eug4", "fz25MFRr4vi", "Je3wt8pi8FU", "SNLLzT5vbTO", "ZY58kIgoV1g", "-AKwa3ObrRh", "b4pIDiam8E", "W5_32_mJ6Y3", "lxldCBocsw"...
iclr_2022_NB0czpQ3-m
RoMA: a Method for Neural Network Robustness Measurement and Assessment
Neural network models have become the leading solution for various tasks, such as classification, language processing, protein folding, and others. However, their reliability is heavily plagued by adversarial inputs: small input perturbations that cause the model to produce erroneous output, thus impairing the model’s robustness. Adversarial inputs can occur naturally when the system’s environment behaves randomly, even in the absence of a malicious adversary, and are thus a severe cause for concern when attempting to deploy neural networks within critical systems. In this paper, we present a new statistical method, called Robustness Measurement and Assessment (RoMA), which can accurately measure the robustness of a neural network model. Specifically, RoMA determines the probability that a random input perturbation might cause misclassification. The method allows us to provide formal guarantees regarding the expected number of errors a trained model will have after deployment. Our approach can be implemented on large-scale, black-box neural networks, which is a significant advantage compared to recently proposed verification methods. We apply our approach in two ways: comparing the robustness of different models, and measuring how a model’s robustness is affected by the scale of adversarial perturbation. One interesting insight obtained through this work is that, in a classification network, different output labels can exhibit very different robustness levels. We term this phenomenon Categorial Robustness. Our ability to perform risk and robustness assessments on a categorial basis opens the door to risk mitigation, which may prove to be a significant step towards neural network certification in safety-critical applications.
Reject
This paper introduces a technique to measure the *expected* robustness of a neural network by estimating the probability that random input perturbations will cause the model to make a mistake. The reviewers are not convinced by the results in this paper. The method is not carefully evaluated against prior work, and it is not exactly clear what lesson one can draw from the resulting statistical evaluation. The experimental setup is not clearly explained in several places, making the paper difficult to fully follow. Since the authors did not respond to the reviewer concerns, there was no opportunity to address them.
train
[ "rxrKnq9vvBg", "hw6uhS2Xwa2", "bYCEwIL_Y0y", "nsNyaZYTyP" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Present a method to measure the expected robustness of a neural network model, by determining the probability that a random input perturbation might cause misclassification, providing formal guarantees regarding the expected frequency of errors that a trained model will encounter after deployment. The method can ...
[ 3, 5, 3, 3 ]
[ 2, 4, 5, 4 ]
[ "iclr_2022_NB0czpQ3-m", "iclr_2022_NB0czpQ3-m", "iclr_2022_NB0czpQ3-m", "iclr_2022_NB0czpQ3-m" ]
iclr_2022_vnENCLwVBET
OUMG: Objective and Universal Metric for Text Generation with Guiding Ability
Existing evaluation metrics for text generation rely on comparing candidate sentences to reference sentences. Some text generation tasks, such as story generation and poetry generation, have no fixed optimal answer and cannot match a corresponding reference for each sentence. Therefore, there is a lack of an objective and universal evaluation metric. To this end, we propose OUMG, a general metric that does not depend on reference standards. We train a discriminator to distinguish between human-generated and machine-generated text, which is used to score the sentences generated by the model. These scores reflect how similar the sentences are to human-generated texts. The capability of the discriminator can be measured by its accuracy, so it avoids the subjectivity of human judgments. Furthermore, the trained discriminator can also guide the text generation process to improve model performance. Experiments on poetry generation demonstrate that OUMG can objectively evaluate text generation models without reference standards. After combining the discriminator with the generation model, the original model can produce significantly higher quality results.
Reject
The authors propose a reference-less metric for evaluating NLG systems by training a discriminator which distinguishes between human-generated and machine-generated text. The main concerns raised by the reviewers were (i) lack of clarity in certain portions of the paper, (ii) lack of demonstration of the "universal" applicability of the proposed metric (only evaluated for poetry generation), (iii) lack of clear guidelines on how to use the proposed metric in a reproducible manner, and (iv) lack of details about what exactly the proposed metric captures and looks for in the generated text. The authors did not respond to the specific queries of the reviewers and agreed that more work is needed on their part.
val
[ "Yw7ECF1gD3d", "wHqCD_QIvz9", "rKCqGIr1_XR", "MYk0_hwTJR", "-YJVckZa0gM", "2hrIFgCQXCS", "BbyvIWKcjX", "s1S8kM40rw", "3je11_PnF2", "4MqnIcixIwi" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for your comments. we are sorry to have some writing omissions in the paper, which will be fixed in the next version. In this paper, we try to propose a universal and objective text generation metric. The reason why we choose the Chinese poetry generation task is that the existing text genera...
[ -1, -1, -1, -1, -1, 3, 3, 3, 1, 1 ]
[ -1, -1, -1, -1, -1, 3, 4, 5, 4, 4 ]
[ "4MqnIcixIwi", "3je11_PnF2", "s1S8kM40rw", "BbyvIWKcjX", "2hrIFgCQXCS", "iclr_2022_vnENCLwVBET", "iclr_2022_vnENCLwVBET", "iclr_2022_vnENCLwVBET", "iclr_2022_vnENCLwVBET", "iclr_2022_vnENCLwVBET" ]
iclr_2022_8wI4UUN5RxC
Variational Inference via Resolution of Singularities
Predicated on the premise that neural networks are best viewed as singular statistical models, we set out to propose a new variational approximation for Bayesian neural networks. The approximation relies on a central result from singular learning theory according to which the posterior distribution over the parameters of a singular model, following an algebraic-geometrical transformation known as a desingularization map, is asymptotically a mixture of standard forms. From here we proceed to demonstrate that a generalized gamma mean-field variational family, following desingularization, can recover the leading order term of the model evidence. Affine coupling layers are employed to learn the unknown desingularization map, effectively rendering the proposed methodology a normalizing flow with the generalized gamma as the source distribution.
Reject
The paper proposes a variational inference based on singular learning theory (SLT), where the resolution of singularity is learned by normalizing flow so that the latent distribution is factorized. Pros: - A unique idea to use SLT for variational inference. Cons (only serious concerns): - Goal is unclear. The authors say that they propose variational inference based on SLT. But apparently, they propose it not as an alternative to the state-of-the-art variational inference for neural networks (if so the experiments shown are far from the acceptable level). The authors must clearly say for what purpose they propose a new method. I would guess the proposed method is for analyzing singular models to compute their RLCT. In that case, the authors should compare with existing methods for evaluating RLCT, e.g., MCMC based methods: K. Nagata and S. Watanabe, "Exchange Monte Carlo Sampling From Bayesian Posterior for Singular Learning Machines," in IEEE Transactions on Neural Networks, vol. 19, no. 7, pp. 1253-1266, July 2008, doi: 10.1109/TNN.2008.2000202. and discuss pros and cons of the proposed method. For DNN, you should use the state-of-the-art MCMC sampling methods like Wenzel, F., Roth, K., Veeling, B., Swiatkowski, J., Tran, L., Mandt, S., Snoek, J., Salimans, T., Jenatton, R. & Nowozin, S. (2020). How Good is the Bayes Posterior in Deep Neural Networks Really? Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:10248-10259. Available from https://proceedings.mlr.press/v119/wenzel20a.html. as a baseline. Approximating the posterior with normalizing flow can be another baseline. - Large n issue. SLT can be seen as a generalization of the asymptotic learning theory for the regular model, where the model complexity is represented by the parameter dimension d, and "asymptotic" means n >> d.
Watanabe revealed that the model complexity cannot be represented by d in singular models, and therefore the definition of "asymptotic" is not as clear as the regular case. But it is known that typical neural networks are overparameterized and can achieve zero training error. I have seen no work arguing that SLT holds in this regime. If the authors insist that their method is applicable to deep neural networks, they should cite references where it would be proved that SLT holds in the overparameterized regime or prove it by themselves. There are many more concerns including those pointed out by reviewers, and the paper is not ready for publication.
train
[ "EAzSivdvb4j", "qIXuqkB1VHp", "h_3otvDQdRT", "_nrWKTjFPur", "78BDZoV4sTf", "70xXLI-tECgl", "FNe6v0mlfqX", "euXaZAB7lRw", "KEZW7Di3A1o", "zyE-SQpU0L_", "0VnMJgByXB7", "OZVlgFkECaI", "E9x6qlkT2F", "3f-F0OFCpPb", "qTg4xTPQeCI", "DliZKdJA81", "7gwVYaDdYxS", "TyP1xBsSXEX", "NAUO_ccC1k...
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "...
[ " Thank you for the explanation.\nI suggest adding your argumentation as a footnote or in the appendix.", " *The current version needs to have a better summary of the contribution. In particular, it needs to connect back the original problem of variational inference: tightening the gap. The authors' comment in \"...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 4 ]
[ "h_3otvDQdRT", "_nrWKTjFPur", "78BDZoV4sTf", "pvCYZVGDEK", "7gwVYaDdYxS", "FNe6v0mlfqX", "euXaZAB7lRw", "TyP1xBsSXEX", "DliZKdJA81", "3f-F0OFCpPb", "3f-F0OFCpPb", "qTg4xTPQeCI", "iclr_2022_8wI4UUN5RxC", "_XTS7wsgLy-", "NAUO_ccC1k_", "LBHn1YtOR_T", "SVcBbaOO-L4", "LBHn1YtOR_T", "E...
iclr_2022_8qWazUd8Jm
How Faithful is your Synthetic Data? Sample-level Metrics for Evaluating and Auditing Generative Models
Devising domain- and model-agnostic evaluation metrics for generative models is an important and as yet unresolved problem. Most existing metrics, which were tailored solely to the image synthesis setup, exhibit a limited capacity for diagnosing the modes of failure of generative models across broader application domains. In this paper, we introduce a 3-dimensional evaluation metric, ($\alpha$-Precision, $\beta$-Recall, Authenticity), that characterizes the fidelity, diversity and generalization performance of any generative model in a domain-agnostic fashion. Our metric unifies statistical divergence measures with precision-recall analysis, enabling sample-level and distribution-level diagnoses of model fidelity and diversity. We introduce generalization as an additional dimension for model performance that quantifies the extent to which a model copies training data---a crucial performance indicator when modeling sensitive data with requirements on privacy. The three metric components correspond to (interpretable) probabilistic quantities, and are estimated via sample-level binary classification. The sample-level nature of our metric inspires a novel use case which we call model auditing, wherein we judge the quality of individual samples generated by a (black-box) model, discarding low-quality samples and hence improving the overall model performance in a post-hoc manner.
Reject
The authors propose a new set of metrics for evaluation of generative models based on the well-established precision-recall framework, and an additional dimension quantifying the degree of memorization. The authors evaluated the proposed approach in several settings and compared it to a subset of the classic evaluation measures in this space. The reviewers agreed that this is an important and challenging problem relevant to the generative modeling community at large. The paper is well-written and the proposed method and motivation are clearly explained. The initial reviews were borderline, and after the discussion phase we have 2 borderline accepts, one strong accept, and one strong reject. After reading the manuscript, the rebuttal, and the discussion, I feel that the work should not be accepted on the grounds of insufficient empirical validation. Establishing a new evaluation metric is a very challenging task -- one needs to demonstrate the pitfalls of existing metrics, as well as how the new metric is capturing the missing dimensions in a thorough empirical validation. While the former was somewhat shown in this work (and in many other works), the latter was not fully demonstrated. The primary reason is the use of a non-standard benchmark to evaluate the utility of the proposed metrics. I agree that covering a broader set of tasks and models makes sense in general, but it shouldn’t be done at the cost of existing, well-understood benchmarks. I expected to see a thorough comparison with [1], one of the most practical metrics used today which can be easily extended to all settings considered in this work (notwithstanding the drawbacks outlined in [2]). What are the additional insights? What is [1] failing to capture in practical instances? Does the rank correlation change with respect to modern models across classic datasets (beyond MNIST and CIFAR10)? This would remove confounding variables and significantly strengthen the paper. 
My final assessment is that this work is borderline, but below the acceptance bar for ICLR. I strongly suggest that the authors showcase the additional improvements over methods such as [1] in practical and well-understood settings commonly used to benchmark generative models (e.g. on images). The experiments suggested by the reviewers are a step in the right direction, but not sufficient. [1] Improved Precision and Recall Metric for Assessing Generative Models. Tuomas Kynkäänniemi, Tero Karras, Samuli Laine, Jaakko Lehtinen, Timo Aila. NeurIPS ’19 [2] Evaluating generative models using divergence frontiers. Josip Djolonga, Mario Lučić, Marco Cuturi, Olivier Frederic Bachem, Olivier Bousquet, Sylvain Gelly. AISTATS ’20
test
[ "1D4Z6hM_tJD", "T3D-mCoOpzo", "wRHEoE1FxaC", "W87Lrv5bSPn", "Qv5PP23upcw", "Y8hUB1nYakw", "v9OweDW_4IK", "JfKC_zL5HVR", "oHKYgyZi5B2", "6HV7oueO223" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thanks for the authors’ careful response! After reading the response and the other reviews, I will have to **keep my “reject” score**. Since this score differs from the others, I am trying to articulate my opinions in a clear logic.\n\nIn fact, none of the critical concerns in my initial review is addressed prope...
[ -1, 3, -1, 8, -1, -1, -1, -1, 6, 6 ]
[ -1, 5, -1, 4, -1, -1, -1, -1, 4, 3 ]
[ "wRHEoE1FxaC", "iclr_2022_8qWazUd8Jm", "T3D-mCoOpzo", "iclr_2022_8qWazUd8Jm", "JfKC_zL5HVR", "oHKYgyZi5B2", "6HV7oueO223", "W87Lrv5bSPn", "iclr_2022_8qWazUd8Jm", "iclr_2022_8qWazUd8Jm" ]
iclr_2022_hxitw01k_Ql
How memory architecture affects learning in a simple POMDP: the two-hypothesis testing problem
Reinforcement learning is generally difficult for partially observable Markov decision processes (POMDPs), which arise when the agent's observations are partial or noisy. To seek good performance in POMDPs, one strategy is to endow the agent with a finite memory, whose update is governed by the policy. However, policy optimization is non-convex in that case and can lead to poor training performance under random initialization. The performance can be empirically improved by constraining the memory architecture, thereby sacrificing optimality to facilitate training. Here we study this trade-off in a two-hypothesis testing problem, akin to the two-arm bandit problem. We compare two extreme cases: (i) the random access memory where any transitions between $M$ memory states are allowed and (ii) a fixed memory where the agent can access its last $m$ actions and rewards. For (i), the probability $q$ to play the worst arm is known to be exponentially small in $M$ for the optimal policy. Our main result is to show that similar performance can be reached for (ii) as well, despite the simplicity of the memory architecture: using a conjecture on Gray-ordered binary necklaces, we find policies for which $q$ is exponentially small in $2^m$, i.e. $q\sim\alpha^{2^m}$ with $\alpha < 1$. In addition, we observe empirically that training from random initialization leads to very poor results for (i), and significantly better results for (ii) thanks to the constraints on the memory architecture.
Reject
All reviewers agreed that the contribution is too limited for the paper to be published. I encourage the authors to take the reviews into account when improving their work.
train
[ "ZTW6uq3SM9", "dvN9wErhQvq", "NbtZAZt8TpZ", "oxhONg98Kpb" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper attempts to investigate how memory architecture affects learning performance of POMDP agents. It focuses on a very simple two-arm bandit problem with two hypotheses for their probabilities. Two memory structures are considered: random access memory and memento memory. For each memory structure, one polic...
[ 3, 3, 3, 5 ]
[ 3, 3, 3, 2 ]
[ "iclr_2022_hxitw01k_Ql", "iclr_2022_hxitw01k_Ql", "iclr_2022_hxitw01k_Ql", "iclr_2022_hxitw01k_Ql" ]
iclr_2022_xp2D-1PtLc5
ClsVC: Learning Speech Representations with two different classification tasks.
Voice conversion (VC) aims to convert one speaker's voice so as to generate new speech that sounds as if it were said by another speaker. Previous works focus on learning latent representations by applying two different encoders to learn content information and timbre information from the input speech, respectively. However, whether they apply a bottleneck network or vector quantization technology, it is very difficult to perfectly separate the speaker and content information from a speech signal. In this paper, we propose a novel voice conversion framework, 'ClsVC', to address this problem. It uses only one encoder to get both timbre and content information by dividing the latent space. Besides, some constraints are proposed to ensure that the different parts of the latent space contain only content and timbre information, respectively. We show the necessity of setting these constraints, and we also experimentally prove that even if we change the division proportion of the latent space, the content and timbre information remain well separated. Experiments on the VCTK dataset show that ClsVC is a state-of-the-art framework in terms of the naturalness and similarity of converted speech.
Reject
This paper proposes a voice conversion framework, ClsVC, which is based on disentanglement of speaker and content information in some latent space. The authors introduce two classification constraints (a common speaker classifier and an adversarial classifier) to improve the separation of the two embeddings. Experimental results are reported on a few voice conversion tasks with objective and subjective scores. Reviewers have reservations about the novelty of the work, which is not considered overwhelmingly significant given existing techniques. The theory and arguments on the claimed effectiveness of the disentanglement of speaker and content also raise concerns and need to be further verified. The experimental results need to be more convincing. Lastly, the exposition needs significant improvements. The authors' rebuttal answers some of the comments, but a few major concerns still stand. This paper cannot be accepted in its current form.
train
[ "WnmotFgmGQ7", "FPD0silAfZ2", "Tzd88cv89Ew", "dFn4uuvlfin", "Vmb6Wl2Krd5", "N0JrSKsuaIP" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank the authors for providing feedback about the reviews. However as I have said in my review, the proposed method of the paper is actually quite straightforward and simple. The novelty of the method is limited. And the technical errors of the paper are not addressed by the feedback. Hence, my recommendatio...
[ -1, -1, 3, 3, 3, 5 ]
[ -1, -1, 4, 4, 4, 3 ]
[ "dFn4uuvlfin", "Tzd88cv89Ew", "iclr_2022_xp2D-1PtLc5", "iclr_2022_xp2D-1PtLc5", "iclr_2022_xp2D-1PtLc5", "iclr_2022_xp2D-1PtLc5" ]
iclr_2022_yXBb-0cPSKO
Regularized-OFU: an efficient algorithm for general contextual bandit with optimization oracles
In contextual bandit, one major challenge is to develop theoretically solid and empirically efficient algorithms for general function classes. We present a novel algorithm called \emph{regularized optimism in face of uncertainty (ROFU)} for general contextual bandit problems. It exploits an optimization oracle to calculate the well-founded upper confidence bound (UCB). Theoretically, for general function classes under very mild assumptions, it achieves a near-optimal regret bound $\Tilde{O}(\sqrt{T})$. Practically, one great advantage of ROFU is that the optimization oracle can be efficiently implemented with low computational cost. Thus, we can easily extend ROFU for contextual bandits with deep neural networks as the function class, which outperforms strong baselines including the UCB and Thompson sampling variants.
Reject
This paper tackles the contextual bandit problem with general function classes and introduces a novel algorithm called regularized optimism in face of uncertainty (ROFU). Although this is an important and relevant problem, the theoretical contribution is rather weak due to the strong assumptions, which also results in a lack of consistency with the motivation and the empirical settings. Moreover, although experimental results suggest that the proposed ROFU method may have potential, the empirical contribution is unclear as the paper currently lacks a comparison with appropriate previous work. All these concerns were raised in the reviews, but unfortunately, none were addressed in the rebuttal phase.
train
[ "r2QqPSmF7H6", "gTcuXHpdJy", "RhRh9gd_nuu", "nVE8A2B49T_" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper presents a contextual bandit algorithm called regularized optimism in the face of uncertainty (ROFU) which is designed for general function classes. The algorithm is claimed to achieve near-optimal regret \"for general functions under very mild assumptions\". However, I find this claim misleading. The op...
[ 3, 5, 3, 5 ]
[ 4, 4, 4, 4 ]
[ "iclr_2022_yXBb-0cPSKO", "iclr_2022_yXBb-0cPSKO", "iclr_2022_yXBb-0cPSKO", "iclr_2022_yXBb-0cPSKO" ]
iclr_2022_6LHiNULIeiC
SOInter: A Novel Deep Energy-Based Interpretation Method for Explaining Structured Output Models
We propose a novel interpretation technique to explain the behavior of structured output models, which learn mappings from an input vector to a set of output variables simultaneously. Because of the complex relationships among the computational paths of output variables in structured models, a feature can affect the value of an output through other outputs. We focus on one of the outputs as the target and try to find the most important features utilized by the structured model to decide on the target in each locality of the input space. In this paper, we assume an arbitrary structured output model is available as a black box and argue how considering the correlations between output variables can improve the explanation performance. The goal is to train a function as an interpreter for the target output variable over the input space. We introduce an energy-based training process for the interpreter function, which effectively considers the structural information incorporated into the model being explained. The effectiveness of the proposed method is confirmed using a variety of simulated and real data sets.
Reject
This paper proposes a method for interpreting structured output model. All the reviews are negative. The reviewers find the paper difficult to read, and lacking in novelty, technical contribution and empirical evaluation.
train
[ "ONjYlqEdryB", "HDHe7EfVV4T", "WcfuXSoISFk", "GzNwyjU5RSc" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper works on a novel problem of interpreting and explaining structured output models. The paper utilizes an energy based model to account for correlations between structured outputs and learns an interpretability block which given as input an image learns to mask it such that the energy based model would ass...
[ 3, 3, 3, 3 ]
[ 4, 3, 4, 2 ]
[ "iclr_2022_6LHiNULIeiC", "iclr_2022_6LHiNULIeiC", "iclr_2022_6LHiNULIeiC", "iclr_2022_6LHiNULIeiC" ]
iclr_2022_7kqWcX_r2w
Meta Attention For Off-Policy Actor-Critic
Off-Policy Actor-Critic methods can effectively exploit past experiences and thus have achieved great success in various reinforcement learning tasks. In many image-based and multi-source tasks, the attention mechanism has been employed in Actor-Critic methods to improve their sampling efficiency. In this paper, we propose a meta attention method for state-based reinforcement learning tasks, which combines the attention mechanism and meta-learning based on the Off-Policy Actor-Critic framework. Unlike previous attention-based work, our meta attention method introduces attention in the actor and the critic of the typical Actor-Critic framework rather than in multiple pixels of an image or multiple information sources. In contrast to existing meta-learning methods, the proposed meta-attention approach is able to function in both the gradient-based training phase and the agent's decision-making process. The experimental results demonstrate the superiority of our meta-attention method in various continuous control tasks based on Off-Policy Actor-Critic methods including DDPG, TD3, and SAC.
Reject
The manuscript proposes a meta-attention based mechanism for improving off-policy actor-critic algorithms. Instead of introducing attention into networks at the level of pixels or multiple sources of information, this work focuses on using attention between features from the actor network (which become queries and values) and features from the critic network (which act as keys). Attention produces new features that are given as input to the action net, enabling it to potentially improve its action selection. The attention is trained using a meta-learning objective that encourages outputting features that help other parts of the architecture to learn. Reviewers note that there are some positive features of the paper. It is relatively well written and the figures are useful for spelling out the approach. In addition, some felt that the basic idea is an interesting one. However, there was general agreement that the manuscript is not ready for publication. Most reviewers noted that the newly proposed architecture and learning rules were not well motivated by the manuscript. Why was this particular approach pursued? Is there any better theoretical justification that can be offered? These questions on their own are not problematic. However, the empirical work does not robustly demonstrate that the algorithm yields a clear performance gain over baseline actor-critic methods in the literature. Most of the tasks are relatively simple and those gains that are observed are marginal. This is especially problematic for the manuscript given the increased compute and implementation complexity required by the method. Finally, several reviewers were concerned about the presentation of some of the technical aspects of the work, potentially making it difficult to replicate important aspects of it. In sum, the manuscript is not ready for publication.
train
[ "vCh_G6OQZfs", "MCK-RuccSkd", "9hPQAcNCUX", "lD7mVIw5-d" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This work proposes a new architecture that adds attention to actor-critic RL methods. This new architecture is evaluated on several Roboschool tasks. The main weakness of this work is its clarity. This work would severely benefit from adding high-level motivation / justification behind the design choices of the pr...
[ 3, 3, 5, 5 ]
[ 3, 4, 4, 3 ]
[ "iclr_2022_7kqWcX_r2w", "iclr_2022_7kqWcX_r2w", "iclr_2022_7kqWcX_r2w", "iclr_2022_7kqWcX_r2w" ]
iclr_2022_bi9j5yi-Vrv
A General Theory of Relativity in Reinforcement Learning
We propose a new general theory measuring the relativity between two arbitrary Markov Decision Processes (MDPs) from the perspective of reinforcement learning (RL). Considering two MDPs, tasks such as policy transfer, dynamics modeling, environment design, and simulation to reality (sim2real) are all closely related. The proposed theory deeply investigates the connection between any two cumulative expected returns defined on different policies and environment dynamics, and the theoretical results suggest two new general algorithms, referred to as Relative Policy Optimization (RPO) and Relative Transition Optimization (RTO), which can offer fast policy transfer and dynamics modeling. RPO updates the policy using the \emph{relative policy gradient} to transfer the policy evaluated in one environment to maximize the return in another, while RTO updates the parameterized dynamics model (if one exists) using the \emph{relative transition gradient} to reduce the gap between the dynamics of the two environments. Then, integrating the two algorithms yields the complete algorithm, Relative Policy-Transition Optimization (RPTO), in which the policy interacts with the two environments simultaneously, such that data collection from the two environments, policy updates, and transition updates are all completed in a closed loop to form a principled learning framework for policy transfer. We demonstrate the effectiveness of RPO, RTO, and RPTO in the OpenAI gym's classic control tasks by creating policy transfer problems.
Reject
The paper did not strike any reviewer as a critical addition to the literature. Reviewers raised various concerns, including (1) the use of the term "general theory of relativity" and (2) that some components are well known from past work.
train
[ "642pQXXN9Vb", "ithbqBaUCWZ", "t8BPAM59ZvO", "530lVQFXOFE", "IM5ZejIRuwX", "_d-r9YU7yd8", "tOZpsRaNDCw", "G3apA7TLzsG", "aH_lSSv6eO4" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " **Question 1**: about the Performance Difference Lemma.\n\n**Answer**: Thanks for letting us know the existence of the Performance Difference Lemma. We were unaware of this and had biased assessment of our contribution. Please refer to the overall reply at the beginning.\n\n**Questions 2 and 3**: about the estima...
[ -1, -1, -1, -1, -1, 3, 3, 3, 5 ]
[ -1, -1, -1, -1, -1, 4, 2, 4, 4 ]
[ "G3apA7TLzsG", "aH_lSSv6eO4", "tOZpsRaNDCw", "iclr_2022_bi9j5yi-Vrv", "_d-r9YU7yd8", "iclr_2022_bi9j5yi-Vrv", "iclr_2022_bi9j5yi-Vrv", "iclr_2022_bi9j5yi-Vrv", "iclr_2022_bi9j5yi-Vrv" ]
iclr_2022_8kpSWDgzsh0
Network Learning in Quadratic Games from Fictitious Plays
We study the ability of an adversary to learn the underlying interaction network from repeated fictitious plays in linear-quadratic games. The adversary may strategically perturb the decisions for a set of action-compromised players, and observe the sequential decisions from a set of action-leaked players. The question then lies in whether such an adversary can fully reconstruct, or effectively estimate, the underlying interaction structure among the players. First, by drawing connections between this network learning problem in games and classical system identification theory, we establish a series of results characterizing the learnability of the interaction graph from the adversary's point of view. Next, in view of the inherent stability and sparsity constraints on the network interaction structure, we propose a stable and sparse system identification framework for learning the interaction graph from full player action observations. We also propose a stable and sparse subspace identification framework for learning the interaction graph from partially observed player actions. Finally, the effectiveness of the proposed learning frameworks is demonstrated in numerical examples.
Reject
In this paper, the authors consider linear quadratic network games (also known as graphical games) and they discuss a number of conditions and procedures to learn the underlying graph of the game from observations of best-response trajectories (or possibly infinite sets thereof) in the game. The reviewers' initial assessment was overall negative, with two reviewers recommending rejection and one giving a borderline positive recommendation. The authors' rebuttal did not address the concerns of the reviewers recommending rejection, and the authors did not provide a revised paper for the reviewers to see how the authors would implement the suggested changes, so the overall negative assessment remained. After my own reading of the paper, I concur with the majority view that the paper has several weaknesses that do not make it a good fit for ICLR (especially regarding the lack of precision in the theorems and the statement of the relevant assumptions), so I am recommending rejection.
test
[ "n5eukSDBC-", "RcHma6esYwb", "HOFg2zRxP0j", "Z8cU67XPIXL", "zfRp7XnW7YH", "b0clx28ulSn", "Z3y1jy8_GPN", "wWDG6G9AMdR" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "In this paper, the authors consider quadratic network games with payoff functions of the form\n$$\nJ_i(x_i;x_{-i}) = \\alpha_i x_i - x_i^2/2 + \\sum\\nolimits_{j=1}^{n} x_i x_j\n$$\nwhere $\\alpha_i>0$ denotes the marginal benefit of the $i$-th player from playing $x_i \\in \\mathbb{R}$, and $g_{ij}$ is a matrix o...
[ 3, -1, -1, -1, -1, -1, 3, 6 ]
[ 4, -1, -1, -1, -1, -1, 4, 2 ]
[ "iclr_2022_8kpSWDgzsh0", "HOFg2zRxP0j", "iclr_2022_8kpSWDgzsh0", "iclr_2022_8kpSWDgzsh0", "iclr_2022_8kpSWDgzsh0", "iclr_2022_8kpSWDgzsh0", "iclr_2022_8kpSWDgzsh0", "iclr_2022_8kpSWDgzsh0" ]
iclr_2022_rwEv1SklKFt
Poisoned classifiers are not only backdoored, they are fundamentally broken
Under a commonly-studied backdoor poisoning attack against classification models, an attacker adds a small trigger to a subset of the training data, such that the presence of this trigger at test time causes the classifier to always predict some target class. It is often implicitly assumed that the poisoned classifier is vulnerable exclusively to the adversary who possesses the trigger. In this paper, we show empirically that this view of backdoored classifiers is incorrect. We describe a new threat model for poisoned classifiers, in which one without knowledge of the original trigger would want to control the poisoned classifier. Under this threat model, we propose a test-time, human-in-the-loop attack method to generate multiple effective alternative triggers without access to the initial backdoor or the training data. We construct these alternative triggers by first generating adversarial examples for a smoothed version of the classifier, created with a procedure called Denoised Smoothing, and then extracting colors or cropped portions of smoothed adversarial images with human interaction. We demonstrate the effectiveness of our attack through extensive experiments on high-resolution datasets: ImageNet and TrojAI. We also compare our approach to previous work on modeling trigger distributions and find that our method is more scalable and efficient in generating effective triggers. Lastly, we include a user study which demonstrates that our method allows users to easily determine the existence of such backdoors in existing poisoned classifiers. Thus, we argue that there is no such thing as a secret backdoor in poisoned classifiers: poisoning a classifier invites attacks not just from the party that possesses the trigger, but from anyone with access to the classifier.
Reject
The authors claim that backdoored classifiers are "fundamentally broken" by demonstrating that other backdoors can be generated for such classifiers without the knowledge of the original backdoors. The proposed method, however, requires manual intervention and is not justified by theoretical arguments. Numerous questions asked by the reviewers were not addressed in the rebuttal period.
train
[ "WPFuqK5hBE", "7Hozps1DtB", "haeQnkLSMhl", "5PP0kZFSMd", "Zy6ZXqnAtg" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposed a test-time, human-in-the-loop attack method to generate backdoor triggers under a specific threat model and argued that anyone with access to the classifier could reconstruct the triggers.\n\nSpecifically, the authors consider a threat model where the backdoor triggers (attack) are only allowe...
[ 3, 5, 3, 5, 5 ]
[ 4, 4, 4, 4, 4 ]
[ "iclr_2022_rwEv1SklKFt", "iclr_2022_rwEv1SklKFt", "iclr_2022_rwEv1SklKFt", "iclr_2022_rwEv1SklKFt", "iclr_2022_rwEv1SklKFt" ]
iclr_2022_z2B0JJeNdvT
Distributed Zeroth-Order Optimization: Convergence Rates That Match Centralized Counterpart
Zeroth-order optimization has become increasingly important in complex optimization and machine learning when cost functions cannot be described in closed analytical form. The key idea of zeroth-order optimization lies in the ability of a learner to build gradient estimates from queries sent to the cost function, after which traditional gradient descent algorithms can be executed with gradients replaced by the estimates. For optimization of large-scale multi-agent systems with decentralized data and costs, zeroth-order optimization can continue to be utilized to develop scalable and distributed zeroth-order algorithms. It is important to understand the trend in performance when transitioning from centralized to distributed zeroth-order algorithms in terms of convergence rates, especially for multi-agent systems with time-varying communication networks. In this paper, we establish a series of convergence rates for distributed zeroth-order subgradient algorithms under both one-point and two-point zeroth-order oracles. Apart from the additional node-to-node communication cost in distributed algorithms, the established convergence rates are shown to match their centralized counterparts. We also propose a multi-stage distributed zeroth-order algorithm that better utilizes the learning rates, reduces the computational complexity, and attains even faster convergence rates for compact decision sets.
Reject
The paper is about a topic that has been extensively studied for more than a decade, hence a very precise discussion of prior work as well as the new insights is absolutely necessary. Unfortunately both are lacking at this stage, thus the paper cannot be accepted.
train
[ "0UaxOS63wNg", "EAVEvwO6lry", "qGY0aRJEPqs", "H251C_YV7e", "wSwQe5M0gU1", "jsCjQvKUdr9" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the time in evaluating our work and the in-depth comments.\n\nWe will further highlight the contributions and technical difficulties of the distributed zeroth-order algorithms, as compared to the first-order algorithms. In fact, this paper aims to fill the gap between centralized and dis...
[ -1, -1, -1, 6, 3, 5 ]
[ -1, -1, -1, 5, 4, 4 ]
[ "jsCjQvKUdr9", "wSwQe5M0gU1", "H251C_YV7e", "iclr_2022_z2B0JJeNdvT", "iclr_2022_z2B0JJeNdvT", "iclr_2022_z2B0JJeNdvT" ]
iclr_2022_viWF5cyz6i
An Efficient and Reliable Tolerance-Based Algorithm for Principal Component Analysis
Principal component analysis (PCA) is an important method for dimensionality reduction in data science and machine learning. But, it is expensive for large matrices when only a few principal components are needed. Existing fast PCA algorithms typically assume the user will supply the number of components needed, but in practice, they may not know this number beforehand. Thus, it is important to have fast PCA algorithms depending on a tolerance. For $m\times n$ matrices where a few principal components explain most of the variance in the data, we develop one such algorithm that runs in $O(mnl)$ time, where $l\ll \min(m,n)$ is a small multiple of the number of principal components. We provide approximation error bounds that are within a constant factor away from optimal and demonstrate its utility with data from a variety of applications.
Reject
The reviewers recommended rejection. There was no reply from the authors. The main weaknesses are: - No experiments on real-life datasets (only simulated) - Unsubstantiated claims about the literature - No discussion of the time complexity - Incremental contribution
train
[ "iQmFiTg6Btg", "QPYo2aFZevM", "pjH3TWh8Lc7" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In this paper, the authors propose an approximate algorithm for PCA on large data. The proposed algorithm computes only the principle components associated with the singular values larger than a given threshold. \nThere’s extensive work on scaling up PCA such as the randomized algorithm Halko et., al. (2010). In t...
[ 3, 5, 5 ]
[ 3, 3, 3 ]
[ "iclr_2022_viWF5cyz6i", "iclr_2022_viWF5cyz6i", "iclr_2022_viWF5cyz6i" ]
iclr_2022_lNreaMZf9X
Learning Dynamics Models for Model Predictive Agents
Model-Based Reinforcement Learning involves learning a dynamics model from data, and then using this model to optimise behaviour, most often with an online planner. Much of the recent research along these lines presents a particular set of design choices, involving problem definition, model learning and planning. Given the multiple contributions, it is difficult to evaluate the effects of each. This paper sets out to disambiguate the role of different design choices for learning dynamics models, by comparing their performance to planning with a ground-truth model -- the simulator. First, we collect a rich dataset from the training sequence of a model-free agent on 5 domains of the DeepMind Control Suite. Second, we train feed-forward dynamics models in a supervised fashion, and evaluate planner performance while varying and analysing different model design choices, including ensembling, stochasticity, multi-step training and timestep size. Besides the quantitative analysis, we describe a set of qualitative findings, rules of thumb, and future research directions for planning with learned dynamics models. Videos of the results are available at https://sites.google.com/view/learning-better-models.
Reject
The paper studies the effect of different design choices related to learning a dynamics model. The reviewers uniformly agree that the topic of the paper, systematically studying different design choices, is important. Furthermore, the paper is very well written. However, there are a number of weaknesses as well, that limit the relevance of this work. Arguably, the main weakness is that the results are inconclusive: there is no single design choice that is better, a conclusion that provides little guidance for researchers working in this space. Another weakness is that the study focuses on only 4 domains. And while performing such a study on a much broader set of domains can be prohibitively expensive, that doesn't take away from the fact that it is hard to draw strong conclusions from such a small set of tasks. For these reasons, I recommend rejection.
train
[ "jW4jWjX6j9Q", "ijUWMa31AEt", "btNkd74yq1", "YUV7RzAOBKC", "Fvf1Iy9xPG", "M8iYqjMKOt", "qI4w6a3zLhW", "4n1uLNq6B2G", "adj2MsWa_W5", "_ZXvkRb-t8K", "RWz9pne-iXG", "uYU6cEQ-p8J", "vdPBk68qPZO", "3o4K9Uj6PW2", "XH9WHbeNOSb", "XRr3hCyCsxq" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the positive words. We will update the figure caption and make the title more indicative once, we can update the paper again. ", " Firstly, I'd like to thank the authors for their detailed response. While I don’t disagree with the goal of the paper (a call to action to better understand model learnin...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, 6, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, 3, 3, 4 ]
[ "btNkd74yq1", "XH9WHbeNOSb", "Fvf1Iy9xPG", "qI4w6a3zLhW", "M8iYqjMKOt", "RWz9pne-iXG", "_ZXvkRb-t8K", "adj2MsWa_W5", "iclr_2022_lNreaMZf9X", "XRr3hCyCsxq", "3o4K9Uj6PW2", "vdPBk68qPZO", "iclr_2022_lNreaMZf9X", "iclr_2022_lNreaMZf9X", "iclr_2022_lNreaMZf9X", "iclr_2022_lNreaMZf9X" ]
iclr_2022_7TFcl1Xkr7
Interactive Model with Structural Loss for Language-based Abductive Reasoning
The abductive natural language inference task ($\alpha$NLI) is proposed to infer the most plausible explanation between the cause and the event. In the $\alpha$NLI task, two observations are given, and the most plausible hypothesis must be picked out from the candidates. Existing methods model the relation for each candidate hypothesis separately and penalize the inference network uniformly. In this paper, we argue that it is unnecessary to distinguish the reasoning abilities among correct hypotheses; similarly, all wrong hypotheses contribute the same when explaining the reasons for the observations. Therefore, we propose to group rather than rank the hypotheses, and we design a structural loss called the "joint softmax focal loss". Based on the observation that the hypotheses are generally semantically related, we design a novel interactive language model aimed at exploiting the rich interaction among competing hypotheses. We name this new model for $\alpha$NLI the Interactive Model with Structural Loss (IMSL). The experimental results show that our IMSL achieves the highest performance on the RoBERTa-large pretrained model, with ACC and AUC increased by about 1% and 5%, respectively.
Reject
The paper studies improving models for the abductive natural language inference task. Specifically, the authors introduce information interaction layers and the joint softmax focal loss. On the positive side, their method shows persuasive empirical gains. However, reviewers found (1) the technical novelty of the approach to be limited (reviewers croc, 3Vwo, W1Sp), (2) the approaches (especially the focal loss) not well motivated (reviewer hk5y), (3) limited take-aways from the paper (reviewers imYG, hk5y), and (4) claims not well supported and experimental details missing (reviewer hk5y). The reviewers further provided detailed comments that would help the authors improve the paper. Because of these limitations, in its current form, the paper is not ready for publication.
train
[ "PdvTVd44Hsl", "mWW_23plCDa", "Tk3zY5sQiPW", "NLkRVBD-CEc", "9maE3AcPa0M" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a new model architecture, along with a new loss, to solve the abductive natural language inference (aNLI) task. Specifically, this work hypothesizes that models benefit from the interaction between multiple choice options, and thus introducing a “information interaction layer” to allow the inte...
[ 3, 3, 3, 3, 3 ]
[ 4, 4, 3, 4, 3 ]
[ "iclr_2022_7TFcl1Xkr7", "iclr_2022_7TFcl1Xkr7", "iclr_2022_7TFcl1Xkr7", "iclr_2022_7TFcl1Xkr7", "iclr_2022_7TFcl1Xkr7" ]
iclr_2022_8Wdj6IJsSyJ
Fully differentiable model discovery
Model discovery aims at autonomously discovering differential equations underlying a dataset. Approaches based on Physics Informed Neural Networks (PINNs) have shown great promise, but a fully-differentiable model which explicitly learns the equation has remained elusive. In this paper we propose such an approach by integrating neural network-based surrogates with Sparse Bayesian Learning (SBL). This combination yields a robust model discovery algorithm, which we showcase on various datasets. We then identify a connection with multitask learning, and build on it to construct a Physics Informed Normalizing Flows (PINFs). We present a proof-of-concept using a PINF to directly learn a density model from single particle data. Our work expands PINNs to various types of neural network architectures, and connects neural network-based surrogates to the rich field of Bayesian parameter inference.
Reject
The reviewers are in consensus that this manuscript falls just short of the bar. I recommend that the authors take their recommendations into consideration in revising their manuscript, with a particular focus on comparison to the state of the art.
train
[ "yllfgAJCXM", "6Ul5iU_twf1", "xtwrAB05Kzb", "3HAX1ZE2Nur", "M7Fl_qGU38C", "8t1nb4Isn-", "fZf-n0SvRYV", "FSrNt8HEQpe", "YZzlxIpgEet", "oH4o1aFOgQG", "dHPiCOehmRv" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This work proposes to combine sparse-Bayesian learning (SBL) with physics informed neural networks (PINNs) to achieve feature/basis selection in learning PDEs. This work proposes to denoise noisy data with PINNs, and use SBL on (fixed basis expansions of) the denoised data to provide differentiable basis selection...
[ 5, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "iclr_2022_8Wdj6IJsSyJ", "fZf-n0SvRYV", "YZzlxIpgEet", "YZzlxIpgEet", "8t1nb4Isn-", "dHPiCOehmRv", "yllfgAJCXM", "oH4o1aFOgQG", "iclr_2022_8Wdj6IJsSyJ", "iclr_2022_8Wdj6IJsSyJ", "iclr_2022_8Wdj6IJsSyJ" ]
iclr_2022_3z9RnbAS49
A Theoretical and Empirical Model of the Generalization Error under Time-Varying Learning Rate
Stochastic gradient descent is commonly employed as the most principled optimization algorithm for deep learning, and the dependence of the generalization error of neural networks on the given hyperparameters is crucial. However, the case in which the batch size and learning rate vary with time has not yet been analyzed, nor has the dependence of the generalization error on them been expressed as a functional form for either the constant or the time-varying case. In this study, we analyze the generalization bound for the time-varying case by applying PAC-Bayes, and experimentally show that the theoretical functional form in the batch size and learning rate approximates the generalization error well in both cases. We also experimentally show that hyperparameter optimization based on the proposed model outperforms existing libraries.
Reject
This paper derives a PAC-Bayes generalization bound for SGD and uses the results to postulate a functional form for the generalization error as a function of the ratio of the learning rate to the batch size. This functional form is then leveraged to develop a kernel function for GP hyperparameter optimization. The reviewers viewed the novel PAC-Bayes bound favorably, but were not convinced by the subsequent analysis. In particular, the reviewers expressed some skepticism about the soundness and generality of the proposed functional form, and were unconvinced that the method would be useful in practice. As such, I cannot recommend the paper for acceptance.
train
[ "jxmOFMSaEZT", "g4jv_SHWUlP", "0_T3B-TfKPu", "caey54TWIK", "NqGTK6G1FbD", "f6Xv0t1NdJE", "0yKnMouCJpJ", "omzQ-uCj9j6", "_34-fg9sVCI" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for their response. However, I am leaving my score unchanged at weak reject. \n\nThe main issues with this work are the theoretical soundness and the disconnection between the theory and practice. Details like the failure to account for time-varying A and the discretization with ...
[ -1, -1, -1, -1, -1, 5, 5, 3, 3 ]
[ -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "NqGTK6G1FbD", "_34-fg9sVCI", "0yKnMouCJpJ", "omzQ-uCj9j6", "f6Xv0t1NdJE", "iclr_2022_3z9RnbAS49", "iclr_2022_3z9RnbAS49", "iclr_2022_3z9RnbAS49", "iclr_2022_3z9RnbAS49" ]
iclr_2022_9kBDWEmA6i
When high-performing models behave poorly in practice: periodic sampling can help
Training a deep neural network (DNN) for breast cancer detection from medical images suffers from the (hopefully) low prevalence of the pathology. To obtain a sensible number of positive cases, images must be collected from numerous places, resulting in large heterogeneous datasets with different acquisition devices, populations, and cancer incidences. Without precaution, this heterogeneity may result in a DNN biased by latent variables a priori independent of the pathology. This may be dramatic if the DNN is used inside software that helps radiologists detect cancers. This work mitigates the issue by acting on how mini-batches for Stochastic Gradient Descent (SGD) algorithms are constructed. The dataset is divided into homogeneous subsets sharing some attributes (\textit{e.g.}, acquisition device, source) called Data Segments (DSs). Batches are built by sampling each DS periodically with a frequency proportional to the rarest label in the DS, while simultaneously preserving an overall balance between positive and negative labels within the batch. Periodic sampling is compared to balanced sampling (an equal number of labels within a batch, independently of DS) and to balanced sampling within DS (an equal number of labels within a batch and each DS). We show, on breast cancer prediction from mammography images of various devices and origins, that periodic sampling leads to better generalization than the other sampling strategies.
Reject
This submission has been withdrawn. The reviews are of good quality. The authors should consider writing two separate papers: one about the problem and solution from an ML perspective, and the other about the application to radiology. Papers that provide a new method in the context of a single application domain run the risk of making a contribution to neither, and of being evaluated by reviewers who are not experts in both.
train
[ "6PCH9TRgFy", "Z5OMZfGYd2s", "o1JkIgyRkO", "FSebk-LbUp-", "bWZxqR1anIx" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a new sampling procedure, which works by dividing the dataset into homogeneous subsets and sampling each subset periodically with a frequency proportional to the rarest label in the subset while preserving class balance within each batch. On the task of breast cancer prediction from mammography ...
[ 3, 5, 6, 5, 5 ]
[ 4, 4, 4, 4, 4 ]
[ "iclr_2022_9kBDWEmA6i", "iclr_2022_9kBDWEmA6i", "iclr_2022_9kBDWEmA6i", "iclr_2022_9kBDWEmA6i", "iclr_2022_9kBDWEmA6i" ]
iclr_2022_WtPHnvDUk5X
GANet: Glyph-Attention Network for Few-Shot Font Generation
Font generation is a valuable but challenging task; it is time-consuming and costly to design font libraries that cover all glyphs in various styles. The time and cost of this task can be greatly reduced if the complete font library can be generated from only a few custom samples. Inspired by font characteristics and the global and local attention mechanism of Wang et al. (2018), we propose a glyph-attention network (GANet) to tackle this problem. First, a content encoder and a style encoder are trained to extract features as keys and values from a content glyph set and a style glyph set, respectively. Second, a query vector generated from a single glyph sample by the query encoder is applied to draw out proper features from the content and style (key, value) pairs via glyph-attention modules. Next, a decoder is used to recover a glyph from the queried features. Lastly, adversarial losses (Goodfellow et al., 2014) with a multi-task glyph discriminator are employed to stabilize the training process. Experimental results demonstrate that our method is able to create robust results with superior fidelity. Fewer samples are needed and better performance is achieved compared to other state-of-the-art few-shot font generation methods, without utilizing supervision on locality such as components, skeletons, or strokes.
Reject
This paper proposes a framework for few-shot font generation. Reviewers thought that the model was well-suited to the task. They were split on clarity, with some saying that the paper was easy to follow and others saying that it lacked sufficient detail. Overall, reviewers found the technical novelty limited, saying that the approach was a “transformer variant”; while it was the first to apply a Transformer-like model to few-shot font generation, there wasn’t sufficient novelty in this task to have broad appeal to the ICLR community. The reviewers also pointed out some deficiencies in the evaluation, concerning the chosen metrics (multiple reviewers requested fidelity metrics) and missing baselines. Some reviewers posed questions to the authors, but the authors did not respond to the reviews. There is a clear consensus to reject the paper.
train
[ "PV5Z-3ej-ue", "lDF2yc5qGrb", "cR-S6vaD9xf", "W_6-jS257Fx" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In this submission, the authors proposed a framework for few-shot font generation where the key components are content and style glyph-attention modules. They demonstrated the effectiveness of the proposed framework on a newly-collected dataset by comparing the IoU of different methods. Strengths:\n1.\tThe propose...
[ 3, 5, 5, 5 ]
[ 4, 5, 4, 5 ]
[ "iclr_2022_WtPHnvDUk5X", "iclr_2022_WtPHnvDUk5X", "iclr_2022_WtPHnvDUk5X", "iclr_2022_WtPHnvDUk5X" ]
iclr_2022_gKprVaCyQmA
There are free lunches
No-Free-Lunch (NFL) Theorems state that the performance of all algorithms is the same when averaged over all possible tasks. It has been argued that the necessary conditions for NFL are too restrictive to be found in practice: there must be some information for a set of tasks that ensures some algorithms perform better than others. In this paper we propose a novel idea, the "There are free lunches" (TAFL) Theorem, which states that some algorithms can achieve the best performance on all possible tasks, on the condition that tasks are given in a specific order. Furthermore, we point out that as the number of solved tasks increases, the difficulty of solving a new task decreases. We also present an example to explain how to combine the proposed theorem with existing supervised learning algorithms.
Reject
As the reviewers say, the subject matter of this paper is important, and of interest to the ICLR audience (I discount tzHo's suggestion that the paper is more suited to other venues). However, there are three primary reasons this paper should not be published as is: 1. A theoretical paper *must* be precise, accurate, and clear. The reviewers universally consider the notation ambiguous, and the theorem unproved because of this ambiguity. 2. The leap from solving a series of tasks optimally to having solved the composition optimally is indeed poorly argued in the paper, and is not resolved by the discussion. 3. I would also strongly recommend showing a less trivial example. It does not need to be "real world", but it should address numerically the specific doubt of BuW: the relationship of the OTE of the composed model to the OTE of the subtask models. In summary: TAFL may be true, but this paper does not show it to be true; or conversely, TAFL may be false, in which case publishing this paper would be a grave error. The authors should use the reviewers' reports to clarify and strengthen the argument. This does include showing numerical results, because inspection of the code generating such results can often aid reviewers and readers in judging the truth of the theoretical claims, and in finding subtle missteps in the derivations.
train
[ "F0_4JwvYAve", "3GPkW6VYbzf", "i_8glHwRE_", "OD3feVVbxmg", "mKGBZOg5Kxn", "gamzKZNELHL", "HrRjhwrHmq-", "cOJI2SZ5Zvg" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Hi,\n\nas far as i understood your paper, your proposal would not consider the case 2, but instead a case 3 where you would then proceed to generate a new task sequence d_21,d_22...d_2n, each solvable by L_a with error e_21,e_22 ... e2n. And I say: there is no guarantee that the final composition of models has sm...
[ -1, -1, -1, -1, 3, 5, 3, 1 ]
[ -1, -1, -1, -1, 2, 2, 3, 3 ]
[ "3GPkW6VYbzf", "i_8glHwRE_", "OD3feVVbxmg", "cOJI2SZ5Zvg", "iclr_2022_gKprVaCyQmA", "iclr_2022_gKprVaCyQmA", "iclr_2022_gKprVaCyQmA", "iclr_2022_gKprVaCyQmA" ]
iclr_2022_k6F-4Bw7LpV
Distributional Generalization: Structure Beyond Test Error
Classifiers in machine learning are often reduced to single dimensional quantities, such as test error or loss. Here, we initiate a much richer study of classifiers by considering the entire joint distribution of their inputs and outputs. We present both new empirical behaviors of standard classifiers, as well as quantitative conjectures which capture these behaviors. Informally, our conjecture states: the output distribution of an interpolating classifier matches the distribution of true labels, when conditioned on certain subgroups of the input space. For example, if we mislabel 30% of dogs as cats in the train set of CIFAR-10, then a ResNet trained to interpolation will in fact mislabel roughly 30% of dogs as cats on the *test set* as well, while leaving other classes unaffected. This conjecture has implications for the theory of overparameterization, scaling limits, implicit bias, and statistical consistency. Further, it can be seen as a new kind of generalization, which goes beyond measuring single-dimensional quantities to measuring entire distributions.
Reject
The paper proposes a new perspective on the generalization performance of interpolating classifiers based on the entire joint distribution of their inputs and outputs. It conjectures that, when conditioned on certain subgroups, the output distribution matches the distribution of true labels. The conjecture is investigated empirically on a number of datasets and models, and proved to hold for a simple nearest neighbor model. This paper generated varying responses from the reviewers and a detailed discussion. One main concern focused on whether the feature calibration conjecture is actually surprising, given standard expectations about generalization from learning theory. Indeed, from the discussion and the paper itself, it seems the authors conceived of classical generalization as a statement about whether train performance $\approx$ test performance, whereas one reviewer remarked that "what it really talks about is concentration of measure." I agree with the importance of this distinction in general, though it is perhaps less relevant in the current setting of modern interpolating classifiers, for which so little about generalization is understood in the first place. In particular, the empirical observations of varied forms of good generalization behavior for overparameterized models are likely to be interesting to the community, regardless of whether this behavior might be expected in the large sample limit. As such, this is a very borderline paper, with many good arguments both for and against acceptance. After a detailed discussion among the chairs, it was decided that the current version is just shy of the acceptance threshold, but I would strongly encourage the authors to address the main reviewer concerns and resubmit a revised manuscript to a future venue.
train
[ "qh9to_mqccW", "0X3KTZ45ayq", "OiIHvhkzfX", "4UNIZ4H_R8p", "edEpwWZSJiZ", "Mwb1mFV19Eg", "_ISMlAvFU-f", "1HtSnLBHNf5", "bxKQ36VMHji", "P2immbPJdPa", "FVvxD9264Es", "vGMrHFxOMp", "etd--DYMJG6", "hpiBxofkbAE", "EpFD4Lkm0C", "ZaRRGKciyMR", "Xc2erX96zMr", "GlUGXU2AXzF" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_rev...
[ " Thanks for the response!\nI'm not sure if Figure 6 is the best example to demonstrate this point. It is not fair to show CIFAR-10 test error as an example of high generalization gap in the context of Figure 6 as the classification problem on which DG is being evaluated on only has two classes (as opposed to 10 cl...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "OiIHvhkzfX", "Mwb1mFV19Eg", "4UNIZ4H_R8p", "_ISMlAvFU-f", "bxKQ36VMHji", "_ISMlAvFU-f", "1HtSnLBHNf5", "hpiBxofkbAE", "etd--DYMJG6", "FVvxD9264Es", "vGMrHFxOMp", "EpFD4Lkm0C", "Xc2erX96zMr", "ZaRRGKciyMR", "GlUGXU2AXzF", "iclr_2022_k6F-4Bw7LpV", "iclr_2022_k6F-4Bw7LpV", "iclr_2022...
iclr_2022_UoNqm70g9HY
Equivalent Distance Geometry Error for Molecular Conformation Comparison
\textit{Straight-forward} conformation generation models, which generate 3-D structures directly from input molecular graphs, play an important role in various molecular tasks with machine learning, such as 3D-QSAR and virtual screening in drug design. However, existing loss functions in these models either cost excessive time or fail to guarantee equivalence during optimization, meaning that different items are treated unfairly, which results in poor local geometry in the generated conformations. We therefore propose \textbf{E}quivalent \textbf{D}istance \textbf{G}eometry \textbf{E}rror (EDGE) to calculate the differential discrepancy between conformations, in which the three essential kinds of factors in conformation geometry (i.e., bond lengths, bond angles and dihedral angles) are equivalently optimized with certain weights. In the improved version of our method, the optimization minimizes linear transformations of atom-pair distances within 3 hops. Extensive experiments show that, compared with existing loss functions, EDGE performs effectively and efficiently on two tasks under the same backbones.
Reject
This paper proposes a new loss function for molecular conformation comparison to be used in generation tasks. All reviewers found the research topic interesting, but the work is lacking in multiple aspects. Major concerns include limited contributions and novelty, lack of comparison with prior methods, limited improvements, and issues with writing and clarity. The authors did not provide any response during the discussion. Given the consistency and extent of the concerns, and the lack of response, I recommend this paper be rejected at this time.
train
[ "oogePMM_2xh", "pS3alm-bWI", "uldIH6tSZM7", "3S3jx36rgv-" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors proposed a new method for molecular conformation generation. The authors choose to use bond lengths $d$, bond angles $\\phi$ and dihedral angles $\\psi$ for molecular conformation generation. The authors use Taylor expansion and multiplier truncation to improve models. The authors conduct experiments ...
[ 5, 5, 3, 3 ]
[ 3, 4, 5, 5 ]
[ "iclr_2022_UoNqm70g9HY", "iclr_2022_UoNqm70g9HY", "iclr_2022_UoNqm70g9HY", "iclr_2022_UoNqm70g9HY" ]
iclr_2022_luO6l9cP6b6
Identifying the Limits of Cross-Domain Knowledge Transfer for Pretrained Models
There is growing evidence that pretrained language models improve task-specific fine-tuning even where the task examples are radically different from those seen in training. What is the nature of this surprising cross-domain transfer? We offer a partial answer via a systematic exploration of how much transfer occurs when models are denied any information about word identity via random scrambling. In four classification tasks and two sequence labeling tasks, we evaluate LSTMs using GloVe embeddings, BERT, and baseline models. Among these models, we find that only BERT shows high rates of transfer into our scrambled domains, and for classification but not sequence labeling tasks. Our analyses seek to explain why transfer succeeds for some tasks but not others, to isolate the separate contributions of pretraining versus fine-tuning, to show that the fine-tuning process is not merely learning to unscramble the scrambled inputs, and to quantify the role of word frequency. These findings help explain where and why cross-domain transfer occurs, which can guide future studies and practical fine-tuning efforts.
Reject
This work performs an analysis of the generalization ability of pre-trained models under the condition of vocabulary scrambling. The paper is well written and easy to understand. However, a full story and investigation into the cause of the observed transfer under word scrambling is lacking. For example, do more powerful models transfer less because pre-training is more effective? While the effect described in the paper is interesting, it lacks a solid connection to important areas such as adversarial attacks and cross-lingual domain shifts, and doesn't seem to have any effect on the practice of fine-tuning pre-trained models. The experimental section could also be improved with comparisons to other, more recent models such as RoBERTa and GPT-2. Even though these more recent models are still Transformer-based, they can help answer the question of whether more powerful models transfer less under word scrambling, as raised by reviewer 6a8W. The results on LSTMs seem to imply this is the case. We thank the authors for including additional results on DeBERTa, but this was insufficient to change the reviewers' opinion of the value of the paper.
train
[ "5JNRgRXFiYU", "-dApi3rHc64", "tQGOcvAQDDw", "LfG5HKZqOYN", "RVqg8JMwYHX", "rLPeDmGpYz", "SByaWu7pLAw", "DU2H9phJxKf", "3_zwWcnoXFW", "ZbGlMKJ20oN", "MM5PKnVVAjG", "EutFMldbS4Q", "gdPlFy2IVE", "xKIpVpLyXSU", "bgofww0YUq", "4OSgvF1DYrj", "N6WTofcCYvz", "VESIIO2BqMT", "xcJDXLWOfA2"...
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", ...
[ " > the contribution is quite niche and focused.\n\nOur view is that pretraining + fine-tuning is arguably the most mainstream thing one can do right now, throughout many parts of AI, and so anything we can do to better understand the nature and limits of that method is fundamentally important.", " > For point 4,...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 3 ]
[ "tQGOcvAQDDw", "tQGOcvAQDDw", "N6WTofcCYvz", "iclr_2022_luO6l9cP6b6", "iclr_2022_luO6l9cP6b6", "EutFMldbS4Q", "gdPlFy2IVE", "xKIpVpLyXSU", "4OSgvF1DYrj", "4OSgvF1DYrj", "bgofww0YUq", "YTrF85GtnNa", "YTrF85GtnNa", "WwI4V7qFf16", "xcJDXLWOfA2", "GDqRu4g7Fsq", "v0MV94nFjfz", "XlpgD1is...
iclr_2022_0VezzBzLmBr
Online Tuning for Offline Decentralized Multi-Agent Reinforcement Learning
Offline reinforcement learning can learn effective policies from a fixed dataset, which is promising for real-world applications. However, in offline decentralized multi-agent reinforcement learning, due to the discrepancy between the behavior policy and the learned policy, the transition dynamics in offline experiences do not accord with the transition dynamics in online execution, which creates severe errors in value estimates, leading to uncoordinated and suboptimal policies. One way to overcome this transition bias is to bridge offline training and online tuning. However, considering both deployment efficiency and sample efficiency, we can only collect very limited online experiences, which are insufficient on their own for updating the agent policy. To utilize both offline and online experiences to tune the policies of agents, we introduce online transition correction (OTC), which implicitly corrects the biased transition dynamics by modifying sampling probabilities. We design two types of distances, i.e., embedding-based and value-based distance, to measure the similarity between transitions, and further propose an adaptive rank-based prioritization to sample transitions according to transition similarity. OTC is simple yet effective in increasing data efficiency and improving agent policies in online tuning. Empirically, we show that OTC outperforms baselines in a variety of tasks.
Reject
## A Brief Summary of the Paper

In offline decentralized MARL, the discrepancy between the offline data and the agents' interactions in the environment causes errors in value estimates, and as a result the policies perform suboptimally. This issue in offline RL is known as extrapolation error. This paper tries to address extrapolation error for offline decentralized agents. The problem can be alleviated by combining offline RL with online fine-tuning. This paper introduces the Online Transition Correction (OTC) approach, which aims to correct the biased transition dynamics with a form of importance sampling based on embedding-based and value-based distance metrics.

## Summary of the Reviews

Below I outline some important concerns raised by the reviewers.

### Reviewer T8NR

**Pros:**
- Interesting and practical setting.
- Extensive experiments to evaluate OTC.

**Cons:**
- Lack of sufficient discussion of closely related work, for example, the MABCQ algorithm.
- Computational cost of OTC.
- Experiments: comparisons against baselines (MA-ICQ), large variance in Figure 4, and a small-scale (only two-agent) setting; can it scale to more agents?

### Reviewer 2xuS

**Pros:**
- The bias of transition dynamics in offline decentralized MARL is an interesting problem.

**Cons:**
- Key baselines such as BREMEN and MUSBO, which look into deployment-constrained offline RL, are not compared against in this paper.
- The proposed OTC algorithm can easily be applied to single-agent settings; MARL vs. single-agent RL comparisons would be interesting.
- Large computational cost incurred by OTC, because of the search procedure for finding the examples most similar to those in the dataset.

### Reviewer a2PP

**Pros:**
- Well written.

**Cons:**
- Analysis of the behavior of the policy, in particular novelty seeking during the online fine-tuning phase.
- Missing details of the transition function.
- Compute constraints and budget.
- Unclear experimental protocol: states vs. observations...
- Missing baselines.

### Reviewer ZgL6

**Cons:**
- Lack of novelty.
- Limited experimental results: transfer-learning scenarios, and lack of experiments on multi-agent environments such as StarCraft II.

## Key Takeaways and Thoughts

Overall, the authors did a good job addressing the concerns raised by the reviewers. For example, they ran additional experiments comparing single-agent BCQ with and without OTC on some D4RL tasks, and gave detailed responses to the questions about computational cost. However, the initially submitted version of this paper feels rushed. I recommend the authors go through the reviews carefully and address the points raised in a future version of this paper. As it stands, it is difficult to evaluate the results reported by the authors during the rebuttal, due to the lack of clarity about their experimental details. The writing can also be improved: there are several typos, and most reviewers were confused about the novelty of the paper. I recommend that the authors provide a more detailed discussion of the differences from other similar approaches in the literature, and better justify the selected experimental protocol.
train
[ "cnv-OcXJYKn", "K8qHzf117H3", "vkyPZDUeYBu", "4uRn_hCEzER", "XpFfVZfysBZ", "BdooVgugUxH", "9EupkgkAxx", "eUG7B_QYb_l", "Rz44_w41MOB", "OOwxwocDK4d", "c8VJm5vl3I", "SuMlThIlTBe", "LYa2KPTFaM6", "Z4GzDjgr0y" ]
[ "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Since OTC corrects the bias between offline and online transition dynamics, it will be more effective when the bias is larger. However, when we split D4RL datasets into individual datasets for each agent, the behavior policy $\\pi_{\\beta_j}$ in the dataset $i$ is the same as the $\\pi_{\\beta_j}$ in the dataset ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 5, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 5 ]
[ "Z4GzDjgr0y", "SuMlThIlTBe", "iclr_2022_0VezzBzLmBr", "XpFfVZfysBZ", "BdooVgugUxH", "LYa2KPTFaM6", "SuMlThIlTBe", "iclr_2022_0VezzBzLmBr", "Z4GzDjgr0y", "c8VJm5vl3I", "iclr_2022_0VezzBzLmBr", "iclr_2022_0VezzBzLmBr", "iclr_2022_0VezzBzLmBr", "iclr_2022_0VezzBzLmBr" ]
iclr_2022_DFYtZFo_1u
Federated Inference through Aligning Local Representations and Learning a Consensus Graph
Machine learning faces many data challenges when applied in practice. Among them, a notable barrier is that data are distributed and sharing is unrealistic for volume and privacy reasons. Federated learning is a recent formalism to tackle this challenge, so that data owners can develop a common model jointly but use it separately. In this work, we consider a less addressed scenario where a datum consists of multiple parts, each of which belongs to a separate owner. In this scenario, joint efforts are required not only in learning but also in inference. We study \emph{federated inference}, which allows each data owner to learn its own model that captures local data characteristics and copes with data heterogeneity. On top is a federation of the local data representations, performing global inference that incorporates all distributed parts collectively. To enhance this local--global framework, we propose aligning the ambiguous data representations caused by the arbitrary arrangement of neurons in local neural network models, as well as learning a consensus graph among data owners in the global model to improve performance. We demonstrate the effectiveness of the proposed framework on four real-life data sets, including power grid systems and traffic networks.
Reject
The paper proposes to compute local representations on device, which are then shared between clients using an alignment mechanism. Reviewers appreciated the value of the topic and several of the contributions, but unfortunately the consensus is that the paper remains below the bar, even after the discussion phase. Concerns remained about privacy, the motivational positioning with respect to FL, and the lack of simpler baselines, even after the author feedback. We hope the detailed feedback helps to strengthen the paper for a future occasion.
test
[ "OcIkUmSA3Y", "bO8bOm4LssV", "F4Zts9dcBRS", "4fEY86y4lb", "3F0vQ_WWTW7", "-t4jVv97kD6", "gatTiRUvd1y", "QKpnPGk7M3M", "oHRaRPO4F78", "BGPv_nzWnOE" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper introduces a new problem, called federated inference, and proposed a framework of solutions including local representation alignment and learning a consensus graph. The technical contribution mainly comes from a theorem (Theorem 1) on the convergence of learning permutation matrices, and a new approach ...
[ 3, -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ 3, -1, -1, -1, -1, -1, -1, 5, 5, 4 ]
[ "iclr_2022_DFYtZFo_1u", "4fEY86y4lb", "BGPv_nzWnOE", "oHRaRPO4F78", "QKpnPGk7M3M", "OcIkUmSA3Y", "iclr_2022_DFYtZFo_1u", "iclr_2022_DFYtZFo_1u", "iclr_2022_DFYtZFo_1u", "iclr_2022_DFYtZFo_1u" ]
iclr_2022_YqHW0o9wXae
Assisted Learning for Organizations with Limited Imbalanced Data
We develop an assisted learning framework for assisting organization-level learners to improve their learning performance with limited and imbalanced data. In particular, learners at the organization level usually have sufficient computational resources but are subject to stringent collaboration policies and information privacy. Their limited, imbalanced data often cause biased inference and sub-optimal decision-making. In our assisted learning framework, an organizational learner purchases assistance service from a service provider and aims to enhance its model performance within a few assistance rounds. We develop effective stochastic training algorithms for assisted deep learning and assisted reinforcement learning. Different from existing distributed algorithms that need to frequently transmit gradients or models, our framework allows the learner to only occasionally share information with the service provider, and still achieve a near-oracle model as if all the data were centralized.
Reject
The paper proposes a novel assisted learning scenario which would likely be useful for organization-level learners (i.e., learners with sufficient computational resources but limited and imbalanced data). The paper is generally well presented, but the reviewers share concerns about the significance of the technical contributions: (1) Due to the asymptotic nature of the consistency results, the technical strength is not strongly supported by the existing theoretical analysis. (2) Although the problem setup is novel and seems interesting, the practical significance of the results is not well supported without a concrete real-world application. (3) A few clarity issues raised in the reviews suggest that the paper could benefit from a major revision to address the above concerns.
train
[ "uw0xp57G6bL", "Nx_i9cF2Gt", "voU1gS2iSHu", "6h5rpivIBla", "2Y2DgKg1B8r", "1cUiUkOqa7J", "4b8XTdZOzD", "jxjdKpgUtN" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We have successfully addressed all the comments below. All the related discussions will be incorporated into the revised paper.", " **Q4.** Could you elaborate on how you did federated learning in the baseline? I wonder how the action for each round was defined in federated learning and whether the comparison w...
[ -1, -1, -1, -1, -1, 3, 5, 5 ]
[ -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "iclr_2022_YqHW0o9wXae", "voU1gS2iSHu", "jxjdKpgUtN", "4b8XTdZOzD", "1cUiUkOqa7J", "iclr_2022_YqHW0o9wXae", "iclr_2022_YqHW0o9wXae", "iclr_2022_YqHW0o9wXae" ]
iclr_2022_LUpE0A3Q-wz
On Convergence of Federated Averaging Langevin Dynamics
We propose a federated averaging Langevin algorithm (FA-LD) for uncertainty quantification and mean predictions with distributed clients. In particular, we generalize beyond normal posterior distributions and consider a general class of models. We develop theoretical guarantees for FA-LD for strongly log-concave distributions with non-i.i.d. data and study how the injected noise, the stochastic-gradient noise, the heterogeneity of data, and the varying learning rates affect convergence. Such an analysis sheds light on the optimal choice of local updates to minimize communication cost. Important to our approach is that the communication efficiency does not deteriorate with the injected noise in the Langevin algorithms. In addition, we examine in our FA-LD algorithm both independent and correlated noise used over different clients, and observe a trade-off between federation and communication cost. As local devices may become inactive in the federated network, we also show convergence results based on different averaging schemes where only partial device updates are available.
Reject
This paper proposes a federated averaging Langevin dynamics algorithm (FA-LD) for numerical mean prediction with uncertainty quantification in the setting of federated learning. Convergence analysis of the proposed method under smoothness and strong-convexity assumptions is also provided, and the results are summarized in Theorems 5.7-5.10, each of which bounds the Wasserstein-2 distance $W_2(\mu_k,\pi)$ between the model distribution $\mu_k$ and the target distribution $\pi$ under a different setting.

This paper received 5 reviews in total, with scores 6, 5, 3, 5, and 3. Some reviewers evaluated positively the novelty of the idea of using Langevin dynamics in the federated setting, which I would also like to acknowledge. Upon reading the paper myself, however, I find that the mathematical formulations are in some places incorrect. What I think is problematic is the third equation in equation (3): the right-hand side is a function of the $N$ variables $\{\theta_k^c\}$, which undergo different local updates at different clients when $k\not\equiv 0 \bmod K$ (i.e., when synchronization does not take place). Also, $\nabla\tilde{f}^c$ is in general a nonlinear function of its argument. Therefore, the right-hand side cannot in general be written as a function of the single variable $\theta_k$, which is defined as $\theta_k=\sum_{c=1}^N p_c\theta_k^c$, making this equation incorrect. This problem affects various parts of the subsequent arguments, such as the first two equations in equation (16) on page 14, the two inline equations just after equation (16), equation (18), the second equality in the inline equation on page 15, line 1, and the third line in equation (25) on page 18, to mention a few. Thus I have to question the validity of the theoretical development in this paper.

Another point I would like to mention is that I did not understand the definition of Schemes I and II in Section 5.4. It is not stated at all that $\mathcal{S}_k$ is a random quantity here, and the conditions "with/without replacement" are not described at all. Still another point is that I did not understand the claim on page 7, lines 30-31. Does it mean that if one knows the number $T_\epsilon$ of steps needed to achieve precision $\epsilon$, then the number $K$ of local steps per synchronization should be set to be of the order of $\sqrt{T_\epsilon}$? But $T_\epsilon$ depends on $K$, so it would be unnatural to assume that one knows $T_\epsilon$ irrespective of $K$ in the first place.

Because of these issues, I judge that this paper is not yet ready for presentation in its current form, and I therefore cannot recommend acceptance.

Minor points:
- Citation style: the authors use *narrative citations* throughout, even where *parenthetical citations* (author name and publication date both enclosed in parentheses) should be used.
- page 3, line 7: is (the -> an) unbiased stochastic gradient; there are several unbiased estimators for the gradient, and what is mentioned here is only one of them.
- page 3, lines 23-24: the aggregation should take place not on each client but on the central server.
- page 3, line 36: a(n) energy function; a(n) unbiased estimate
- page 5, lines 17-20: the contents of Assumptions 5.1 and 5.2 are not assumptions but definitions.
- page 6, line 2: to obtain (the -> a) lower bound
- page 6, line 18: $\mathcal{D}^2$ is undefined.
- page 8, line 39: (a -> the) probability $p_c$, if it is meant to be the one defined on page 3, line 8; otherwise, use of the same symbol to represent different quantities should be avoided.
- page 14, line 25: mod ($E$ -> $K$) = 0
- page 15, line 30: $H_\rho^2$ -> $H_\rho$
train
[ "Jlh54F7P3N", "nOD0UbHGFtA", "QR9iZT7-BWJ", "TyFoVpUBCGS", "zBO67aYPB5a", "xXPXXL6zpz", "Zq76NCWy0vO", "4-ZKLNPMTml", "8BC3pyVQ3pK", "IXgZeuhTcrh", "Emx8L8HvBGz", "G0DhCsjZmhl" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your valuable comments. \n\nWe believe this can be an interesting future direction to validate our theory and improve our paper. From another perspective, our result also matches the theory in [1]. Moreover, if we ignore the constant 2 along with the contraction constant $1-\\frac{\\eta m}{2}$, lemma B...
[ -1, -1, -1, -1, -1, -1, -1, 6, 5, 3, 5, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 5, 4, 3, 5 ]
[ "4-ZKLNPMTml", "iclr_2022_LUpE0A3Q-wz", "Emx8L8HvBGz", "IXgZeuhTcrh", "G0DhCsjZmhl", "8BC3pyVQ3pK", "nOD0UbHGFtA", "iclr_2022_LUpE0A3Q-wz", "iclr_2022_LUpE0A3Q-wz", "iclr_2022_LUpE0A3Q-wz", "iclr_2022_LUpE0A3Q-wz", "iclr_2022_LUpE0A3Q-wz" ]
iclr_2022_KFUWHgRYEDF
ScaLA: Speeding-Up Fine-tuning of Pre-trained Transformer Networks via Efficient and Scalable Adversarial Perturbation
The size of transformer networks is growing at an unprecedented rate and has increased by three orders of magnitude in recent years, approaching trillion-level parameter counts. To train models of increasing size, researchers and practitioners have employed large-batch optimization to leverage massive distributed deep learning systems and resources. However, increasing the batch size changes the training dynamics, often leading to generalization gaps and training instability that require extensive hyperparameter tuning to maintain the same level of accuracy. In this paper, we explore the steepness of the loss landscape in large-batch optimization and find that it tends to be highly complex and irregular, posing challenges to generalization. To address this challenge, we propose ScaLA, a scalable and robust method for large-batch optimization of transformer networks via adversarial perturbation. In particular, we take a sequential game-theoretic approach to make large-batch optimization robust to adversarial perturbation, which helps smooth the loss landscape and improve generalization. Moreover, we perform several optimizations to reduce the computational cost of adversarial perturbation, improving its performance and scalability in the distributed training environment. We provide a theoretical convergence rate analysis for ScaLA using techniques for analyzing non-convex saddle-point problems. Finally, we perform an extensive evaluation of our method using BERT and RoBERTa on GLUE datasets. Our results show that our method attains up to 18$\times$ fine-tuning speedups on 2 DGX-2 nodes, while achieving comparable and sometimes higher accuracy than the state-of-the-art large-batch optimization methods. When using the same number of hardware resources, ScaLA is 2.7--9.8$\times$ faster than the baselines.
Reject
The submission considers a method involving adversarial training to speed up the fine-tuning of large pre-trained transformer language models. Reviewers consider it to be a borderline paper. The reviewers make many suggestions that will help improve the presentation and substance and make the work more useful for the community.
val
[ "Ln87YGXGcLi", "r5e3T13kz8", "DjK-cWTiFaH", "HptG3MYjjb4", "3kff6nWPUka", "FtwoZMmS60h", "AkcgkVCYtmQ", "THlXw9wyqdk", "L7iRd1XBvlH", "G4AYkDCRFkM" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer,\n\nWe are glad that we have addressed the raised concerns, and we thank you for devoting your time and effort to engage with us for this continual discussion. \n\nBest regards, \nOur team", "In this paper, the authors propose ScaLA to speed up the fine-tuning of large pre-trained transformer lang...
[ -1, 6, -1, -1, -1, -1, -1, -1, 5, 6 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "DjK-cWTiFaH", "iclr_2022_KFUWHgRYEDF", "HptG3MYjjb4", "3kff6nWPUka", "THlXw9wyqdk", "L7iRd1XBvlH", "G4AYkDCRFkM", "r5e3T13kz8", "iclr_2022_KFUWHgRYEDF", "iclr_2022_KFUWHgRYEDF" ]
iclr_2022_UOj0MV__Cr
A Two-Stage Neural-Filter Pareto Front Extractor and the need for Benchmarking
Pareto solutions are optimal trade-offs between multiple competing objectives over the feasible set satisfying imposed constraints. Fixed-point iterative strategies do not always converge and might return only one solution point per run. Consequently, multiple runs of a scalarization problem are required to retrieve a Pareto front, where all instances converge. Recently proposed Multi-Task Learning (MTL) solvers claim to achieve Pareto solutions by combining Linear Scalarization and domain decomposition. We demonstrate key shortcomings of MTL solvers that limit their usability for real-world applications. Issues include unjustified convexity assumptions on practical problems, incomplete and often wrong inferences on datasets that violate the Pareto definition, and lack of proper benchmarking and verification. We propose a two-stage Pareto framework, the Hybrid Neural Pareto Front (HNPF), that is accurate and handles non-convex functions and constraints. The Stage-1 neural network efficiently extracts the \textit{weak} Pareto front, using the Fritz-John Conditions (FJC) as the discriminator, with no convexity assumptions on the objectives or constraints. An FJC-guided diffusive manifold is used to bound the error between the true and the Stage-1 extracted \textit{weak} Pareto front. The Stage-2 low-cost Pareto filter then extracts the strong Pareto subset from this weak front. Numerical experiments demonstrate the accuracy and efficiency of our approach.
Reject
The paper makes two contributions: (1) multi-task benchmarks where the Pareto solution is known analytically; and (2) a verification method for testing whether solutions are on the Pareto front. The authors make the point that MTL methods are applied to large-scale problems but fail to find the Pareto front in problems where it is known. Reviewers appreciated the discussion and the insights provided by the approach, and the idea that the correctness of scalable methods should be evaluated on problems with analytic solutions, but they also had grave concerns. The primary concern is that, without an efficient search, a verification method that filters randomly generated solutions cannot scale to high-dimensional problems. There was also disagreement about the role of LS and the comparison with previous literature. As a result, the contribution of the submission is not sufficient for acceptance to ICLR.
train
[ "e8GmxaGatXP", "2xOY5HniC9J", "jTN9Q3qzkEK", "pVesHRdDaMW", "xZx9sBivlAP", "OvTbd7SvHi", "3PTSOswU1Nk", "nNuSEWYzht", "oPtZKMtSkH", "srtflzJ-NE9", "hWvdbOU3xo_", "lJ7Et-f_Rlt", "K85nRfhrVkq" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We appreciate the reviewers response and as mentioned in the main text (Table 1) shown the current scalability which can be prohibitive for large neural problems.\n\nHowever, we want to ask the question: is scaling a method more important when the method is not able to generate accurate results or outright cras...
[ -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, 6, 3, 3 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 3, 5, 4 ]
[ "2xOY5HniC9J", "srtflzJ-NE9", "xZx9sBivlAP", "iclr_2022_UOj0MV__Cr", "oPtZKMtSkH", "iclr_2022_UOj0MV__Cr", "K85nRfhrVkq", "lJ7Et-f_Rlt", "pVesHRdDaMW", "hWvdbOU3xo_", "iclr_2022_UOj0MV__Cr", "iclr_2022_UOj0MV__Cr", "iclr_2022_UOj0MV__Cr" ]
iclr_2022_m4BAEB_Imy
iPrune: A Magnitude Based Unstructured Pruning Method for Efficient Binary Networks in Hardware
Modern image recognition models span millions of parameters occupying several megabytes, and sometimes gigabytes, of space, making them difficult to run on resource-constrained edge hardware. Binary Neural Networks address this problem by reducing the memory requirements (a single bit per weight and/or activation). The computation requirements and power consumption are reduced accordingly. Nevertheless, each neuron in such networks has a large number of inputs, making them difficult to implement efficiently in binary hardware accelerators, especially LUT-based approaches. In this work, we present a pruning algorithm and associated results on convolutional and dense layers from the aforementioned binary networks. We reduce the computation by 4-70x and the memory by 190-2200x with less than 2% loss of accuracy on MNIST and less than 3% loss of accuracy on CIFAR-10, compared to full-precision, fully connected equivalents. Compared to very recent work on pruning for binary networks, we still gain 1% in precision and up to 30% reduction in memory (526KiB vs 750KiB).
Reject
This paper deals with a problem of significant practical relevance: memory efficient neural networks. The authors propose some pruning methods for binary networks. However, several weaknesses were identified by the reviewers (novelty, lack of extensive experiments, problems with the presentation of the paper), and several valid points of concern were raised. These points of criticism were not adequately addressed, hence the paper in its current form cannot be recommended for publication.
test
[ "tZK1x7jmNL", "rS-m2In05F", "REKdLjZuM9k", "LG3kJNS0BKk" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper suggests iPrune, a magnitude based unstructured pruning technique which reduces the number of inputs to each neuron. This scheme reduces the memory, computation, and power consumption for training binary neural networks.\n Strength:\n- This paper suggested a new method of pruning binary neural networks....
[ 3, 3, 3, 3 ]
[ 3, 4, 5, 5 ]
[ "iclr_2022_m4BAEB_Imy", "iclr_2022_m4BAEB_Imy", "iclr_2022_m4BAEB_Imy", "iclr_2022_m4BAEB_Imy" ]
iclr_2022_qQuzhbU3Gto
An Interpretable Graph Generative Model with Heterophily
Many models for graphs fall under the framework of edge-independent dot product models. These models output the probabilities of edges existing between all pairs of nodes, and the probability of a link between two nodes increases with the dot product of vectors associated with the nodes. Recent work has shown that these models are unable to capture key structures in real-world graphs, particularly heterophilous structures, wherein links occur between dissimilar nodes. We propose the first edge-independent graph generative model that is a) expressive enough to capture heterophily, b) produces nonnegative embeddings, which allow link predictions to be interpreted in terms of communities, and c) optimizes effectively on real-world graphs with gradient descent on a cross-entropy loss. Our theoretical results demonstrate the expressiveness of our model in its ability to exactly reconstruct a graph using a number of clusters that is linear in the maximum degree, along with its ability to capture both heterophily and homophily in the data. Further, our experiments demonstrate the effectiveness of our model for a variety of important application tasks such as multi-label clustering and link prediction.
Reject
The paper proposes an edge-independent graph generative model that can capture heterophily. The authors propose a 3-stage process to obtain the node representations. The idea of factorization in the form of BB^T-CC^T is an interesting approach to model heterophily. The paper can be improved in terms of writing to better motivate the need for a 3-stage algorithm and how these individual steps are related to the existing techniques in the literature. The authors should elaborate on the implications of the theorems and the concerns raised by the reviewers in the body of the paper. The algorithm faces scalability challenges, which are not studied well in the experiments. The reviewers also have raised concerns about degeneracy in network reconstruction experiments. Overall, the paper needs further improvements for publication.
train
[ "rWBzdZTtKZ1", "JWp6VUckZk", "VSg1l4AeUhv", "0gbsShpnCix", "rNrtD2w8G9G", "zgZMD97r35A" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for the thorough review. We appreciate the points raised, which we address below.\n\nConcerning the need for the first two training stages, indeed, one of the benefits of these stages is to automatically set the split between the number of homophilous/heterophilous communities $k_B$/$k_C$. Add...
[ -1, -1, -1, 3, 3, 5 ]
[ -1, -1, -1, 4, 5, 4 ]
[ "zgZMD97r35A", "0gbsShpnCix", "rNrtD2w8G9G", "iclr_2022_qQuzhbU3Gto", "iclr_2022_qQuzhbU3Gto", "iclr_2022_qQuzhbU3Gto" ]
iclr_2022_lEB5Dnz_MmH
A Collaborative Attention Adaptive Network for Financial Market Forecasting
Forecasting the financial market with social media data and real market prices is a valuable problem for market participants, as it helps traders make more appropriate trading decisions. However, given the differences between data types, how to fuse real market prices and tweets from social media with a method adapted to financial data, so that the prediction model can fully integrate the different types of data, remains a challenging problem. To address this, we propose a collaborative attention adaptive Transformer approach to financial market forecasting (CAFF), including parallel extraction of tweet and price features, parameter-level fusion, and a joint feature processing module, which can deeply fuse tweets and real prices. Extensive experimentation is performed on tweets and historical stock market prices; our method achieves better accuracy than state-of-the-art methods on two evaluation metrics. Moreover, tweets play a relatively more critical role in the CAFF framework. Additional stock trading simulations show that an actual trading strategy based on our proposed model can increase profits; thus, the model has practical application value.
Reject
This paper proposed a new approach to jointly model text and stock price information and fuse them for stock market forecasting. It encodes text and stock price information in parallel and then fuses them using a co-attention transformer. According to the reviewers, the design of the model is not very well justified and seems to be a little ad hoc. The authors spent quite a few pages introducing background knowledge and the novelty of the proposed model is not sufficiently described. Some details in the experiments are missing, and it is not clear whether the results could be easily reproduced. There are many writing issues too. As a result, we do not think the paper is ready for publication at ICLR in its current form. BTW, after the reviewers posted their comments, the authors did not submit their rebuttals.
train
[ "AQgG3n5rXjp", "tyJdT-Tpp7m", "RYx3avJcVJ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposed a novel approach to jointly model text and stock price information and fuse them for stock market forecasting. It encodes text and stock price information in parallel and then fuses them using a co-attention transformer. Empirical results over a real-world dataset and trading simulations demons...
[ 5, 5, 3 ]
[ 3, 4, 5 ]
[ "iclr_2022_lEB5Dnz_MmH", "iclr_2022_lEB5Dnz_MmH", "iclr_2022_lEB5Dnz_MmH" ]
iclr_2022_WwKv20NrsfB
Apollo: An Adaptive Parameter-wised Diagonal Quasi-Newton Method for Nonconvex Stochastic Optimization
In this paper, we introduce Apollo, a quasi-Newton method for nonconvex stochastic optimization, which dynamically incorporates the curvature of the loss function by approximating the Hessian via a diagonal matrix. Importantly, the update and storage of the diagonal Hessian approximation are as efficient as in adaptive first-order optimization methods, with linear complexity in both time and memory. To handle nonconvexity, we replace the Hessian with its rectified absolute value, which is guaranteed to be positive definite. Experiments on three vision and language tasks show that Apollo achieves significant improvements over other stochastic optimization methods, including SGD and variants of Adam, in terms of both convergence speed and generalization performance.
Reject
This paper proposes a diagonal approximation to the Hessian in a quasi-Newton method for nonconvex stochastic optimization problems. The authors combine several good existing ideas and show empirically that the method performs well on several learning tasks, but reviewers found the comparisons limited: as an (approximate) second-order method, it would be fairer to compare against other second-order methods rather than largely focusing on SGD and some variants of Adam. Overall, reviewers found the novelty limited and had concerns about the strength of the assumptions, the parameter-wise updates, and some more minor gaps in the presentation. The author response did not fully convince the borderline/negative reviewers, though the paper includes good ideas that would potentially be well received in a future revision.
train
[ "iD8m8QzNAQn", "M0uHjFT4JAF", "AP80iJWHLDE", "xJGziX175Vv", "jb008J24Yp", "W-R5lqybq41", "VE8jkMTaxle", "CVb7AUcPFER", "YV4BRaRhqFt", "c23w1USJntL" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I have read the response from the authors and the comments from other reviewers, and I would like to keep my score unchanged.", " We thank the authors for your feedback on our review. We realized that the last part of our review (regarding \"layer-wise\" update) was a bit unwarranted and should be \"component-w...
[ -1, -1, 6, -1, -1, -1, -1, 5, 3, 6 ]
[ -1, -1, 3, -1, -1, -1, -1, 4, 4, 3 ]
[ "YV4BRaRhqFt", "W-R5lqybq41", "iclr_2022_WwKv20NrsfB", "YV4BRaRhqFt", "c23w1USJntL", "AP80iJWHLDE", "CVb7AUcPFER", "iclr_2022_WwKv20NrsfB", "iclr_2022_WwKv20NrsfB", "iclr_2022_WwKv20NrsfB" ]
iclr_2022_E-dq2kN8lt
FedPAGE: A Fast Local Stochastic Gradient Method for Communication-Efficient Federated Learning
Federated Averaging (FedAvg, also known as Local-SGD) (McMahan et al., 2017) is a classical federated learning algorithm in which clients run multiple local SGD steps before communicating their update to an orchestrating server. We propose a new federated learning algorithm, FedPAGE, able to further reduce the communication complexity by utilizing the recent optimal PAGE method (Li et al., 2021) instead of plain SGD in FedAvg. We show that FedPAGE uses much fewer communication rounds than previous local methods for both federated convex and nonconvex optimization. Concretely, 1) in the convex setting, the number of communication rounds of FedPAGE is $O(\frac{N^{3/4}}{S\epsilon})$, improving the best-known result $O(\frac{N}{S\epsilon})$ of SCAFFOLD (Karimireddy et al.,2020) by a factor of $N^{1/4}$, where $N$ is the total number of clients (usually is very large in federated learning), $S$ is the sampled subset of clients in each communication round, and $\epsilon$ is the target error; 2) in the nonconvex setting, the number of communication rounds of FedPAGE is $O(\frac{\sqrt{N}+S}{S\epsilon^2})$, improving the best-known result $O(\frac{N^{2/3}}{S^{2/3}\epsilon^2})$ of SCAFFOLD (Karimireddy et al.,2020) by a factor of $N^{1/6}S^{1/3}$, if the sampled clients $S\leq \sqrt{N}$. Note that in both settings, the communication cost for each round is the same for both FedPAGE and SCAFFOLD. As a result, FedPAGE achieves new state-of-the-art results in terms of communication complexity for both federated convex and nonconvex optimization.
Reject
This paper proposes a new federated learning method which uses the recently developed PAGE gradient estimator in the local updates, and provides convergence analysis for both convex and nonconvex loss functions. There are several technical questions raised by the reviewers that are not addressed by the author rebuttal. Given such technical issues and limited novelty and empirical evidence, I cannot recommend acceptance.
train
[ "yDEkBlj51vp", "XcpGQCyqVDX", "8Zp7bMbHiY", "i-GGhw_xn9a", "chiuTd8x6b4", "reUrmzNtG0p", "ALjdmT_8UKp" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "public", "official_reviewer" ]
[ "The paper considers a federated variant of the PAGE algorithm, called FedPAGE that utilizes local steps. The authors analyze the convergence behavior of FedPAGE algorithm and show that it is very similar to PAGE. And as PAGE has good convergence guarantees, compared to other distributed optimization methods, FedPA...
[ 5, 5, 3, -1, 5, -1, 5 ]
[ 4, 3, 4, -1, 4, -1, 3 ]
[ "iclr_2022_E-dq2kN8lt", "iclr_2022_E-dq2kN8lt", "iclr_2022_E-dq2kN8lt", "iclr_2022_E-dq2kN8lt", "iclr_2022_E-dq2kN8lt", "iclr_2022_E-dq2kN8lt", "iclr_2022_E-dq2kN8lt" ]
iclr_2022_2s4sNT11IcH
On the Convergence and Calibration of Deep Learning with Differential Privacy
In deep learning with differential privacy (DP), the neural network achieves privacy usually at the cost of slower convergence (and thus lower performance) than its non-private counterpart. This work gives the first convergence analysis of DP deep learning, through the lens of training dynamics and the neural tangent kernel (NTK) matrix. Our convergence theory successfully characterizes the effects of two key components of DP training: per-sample clipping and noise addition. We initiate a general principled framework for understanding DP deep learning with any network architecture, loss function, and various optimizers including DP-Adam. Our analysis also motivates a new clipping method, 'global clipping', that significantly improves convergence while preserving the same DP guarantee and computational efficiency as the existing method, which we term 'local clipping'. In addition, our global clipping is surprisingly effective at learning calibrated classifiers, in contrast to existing DP classifiers, which are oftentimes over-confident and unreliable. Implementation-wise, the new clipping can be realized by inserting one line of code into the PyTorch Opacus library.
Reject
The major concern with this paper is the unfair comparison between global and local clipping (at least from the theoretical point of view). The assumption that the norms of the gradients are bounded in Theorem 2 is too strict for the following reasons. Clipping has been introduced exactly because we cannot assume the norm of the gradient to be bounded by a fixed constant in the first place. Accordingly, comparing two clipping methods under the bounded gradient assumption does not seem relevant. Further, the two methods are not studied under the same set of assumptions (In Theorem 1, the norm of the gradient is not assumed to be bounded, but in Theorem 2 it is). A fair comparison needs to be presented to make the case for the proposed method.
train
[ "cUeMHDbL_U-", "UFkeAKbjusv", "piZTko9YyGk", "bzHtcciY_lD", "0EUMjepX14", "wNOHyo5Y8Vv", "ZYXUeGJVdIw", "_NaP72g9Rss", "i14u-HhLJ50", "85oNMM9wqhi", "UjdHCqeCVoJ", "0n_EIDxJXPh", "mVhmHVSAOBb", "vpyJco7ImB0" ]
[ "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the comment again and hope that you find our detailed response useful. We are happy to address any other concerns you have. In the meantime, if all of your concerns are cleared, we sincerely hope that you could reconsider the score. We want to re-emphasize the great novelty in our work -...
[ -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, 3, 3, 8 ]
[ -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "UjdHCqeCVoJ", "i14u-HhLJ50", "0EUMjepX14", "iclr_2022_2s4sNT11IcH", "_NaP72g9Rss", "bzHtcciY_lD", "mVhmHVSAOBb", "wNOHyo5Y8Vv", "0n_EIDxJXPh", "vpyJco7ImB0", "mVhmHVSAOBb", "iclr_2022_2s4sNT11IcH", "iclr_2022_2s4sNT11IcH", "iclr_2022_2s4sNT11IcH" ]
iclr_2022_D8njK_Ix5dJ
Maximum Mean Discrepancy for Generalization in the Presence of Distribution and Missingness Shift
Covariate shifts are a common problem in predictive modeling on real-world problems. This paper proposes addressing the covariate shift problem by minimizing Maximum Mean Discrepancy (MMD) statistics between the training and test sets in either feature input space, feature representation space, or both. We designed three techniques that we call MMD Representation, MMD Mask, and MMD Hybrid to deal with the scenarios where only a distribution shift exists, only a missingness shift exists, or both types of shift exist, respectively. We find that integrating an MMD loss component helps models use the best features for generalization and avoid dangerous extrapolation as much as possible for each test sample. Models treated with this MMD approach show better performance, calibration, and extrapolation on the test set.
Reject
The paper addresses unsupervised domain adaptation under covariate shift and missing source and target features. Three approaches are proposed for tackling respectively covariate shift, missing data and simultaneous covariate shift and missing data. The proposed method relies on the minimization of the maximum mean discrepancy between the source and target representations in the different settings. Experiments are performed on a synthetic dataset and on two other datasets. All the reviewers highlighted several weaknesses: lack of formal definitions and of formal analyses, lack of connection with existing approaches for handling missing data, weak reproducibility. The authors did not provide responses. Reject.
train
[ "_BVYO6QLjWa", "l3Tl6jeJj8B", "hrmzLtQuzHQ", "UPBU8eAmuzg" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This work proposes a solution dealing with model adaptation under covariate shift and/or missingness shift by minimizing the Maximum Mean Discrepancy (MMD) between the training labeled set and unlabeled test set. The method is supported by several evaluations on synthetic data and real data. *Strengths*\n- The pa...
[ 3, 3, 3, 3 ]
[ 4, 2, 3, 5 ]
[ "iclr_2022_D8njK_Ix5dJ", "iclr_2022_D8njK_Ix5dJ", "iclr_2022_D8njK_Ix5dJ", "iclr_2022_D8njK_Ix5dJ" ]
iclr_2022_z8j0bPU4DIw
Evolution Strategies as an Alternate Learning method for Hierarchical Reinforcement Learning
This paper investigates the performance of Scalable Evolution Strategies (S-ES) as a Hierarchical Reinforcement Learning (HRL) approach. S-ES, named for its excellent scalability across many processors, was popularised by OpenAI when they showed its performance to be comparable to the state-of-the-art policy gradient methods. However, to date, S-ES has not been tested in conjunction with HRL methods, which empower temporal abstraction thus allowing agents to tackle more challenging problems. In this work, we introduce a novel method that merges S-ES and HRL, which allows S-ES to be applied to difficult problems such as simultaneous robot locomotion and navigation. We show that S-ES needed no (methodological or hyperparameter) modifications for it to be used in a hierarchical context and that its indifference to delayed rewards leads to it having competitive performance with state-of-the-art gradient-based HRL methods. This leads to a novel HRL method that achieves state-of-the-art performance, and is also comparably simple and highly scalable.
Reject
This paper presents the use of scalable evolution strategies (S-ES) in hierarchical reinforcement learning. After reviewing the paper and reading the comments from the reviewers, here are my comments: - The proposal is quite novel. It requires major improvements to clearly state how it contributes to the field. - The main concern is the experimental results. There are some flaws in the comparative results, and they do not support the proposal.
train
[ "iadWG6QANxi", "tqRNAOLkho", "fOYFEswnPuT", "O0Gwj5y4K6a" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a novel approach for hierarchical reinforcement learning. As the title says, the authors claims that the use of evolution strategies is useful to train hierarchical policy. Basically, the search is based on the estimation of the gradients of the controller fitness and the primitive fitness usin...
[ 3, 5, 5, 3 ]
[ 4, 4, 4, 4 ]
[ "iclr_2022_z8j0bPU4DIw", "iclr_2022_z8j0bPU4DIw", "iclr_2022_z8j0bPU4DIw", "iclr_2022_z8j0bPU4DIw" ]
iclr_2022_QCeFEThVn3
GraphEBM: Towards Permutation Invariant and Multi-Objective Molecular Graph Generation
Although significant progress has been made in molecular graph generation recently, permutation invariance and multi-objective generation remain important but challenging goals to achieve. In this work, we propose GraphEBM, a molecular graph generation method via energy-based models (EBMs), as an exploratory work to perform permutation invariant and multi-objective molecule generation. Particularly, thanks to the flexibility of EBMs and our parameterized permutation-invariant energy function, our GraphEBM can define a permutation invariant distribution over molecular graphs. We learn the energy function by contrastive divergence and generate samples by Langevin dynamics. In addition, to generate molecules with a specific desirable property, we propose a simple yet effective learning strategy, which pushes down energies with flexible degrees according to the properties of corresponding molecules. Further, we explore using our GraphEBM to generate molecules towards multiple objectives via compositional generation, which is practically desired in drug discovery. We conduct comprehensive experiments on random, single-objective, and multi-objective molecule generation tasks. The results demonstrate that our method is effective.
Reject
This paper proposes to use an energy-based model for a multi-objective molecular generation. The energy function is parameterized by relational graph convolutional network (R-GCN) so that it has a permutation invariance property. The model is trained by contrastive divergence and the generation is performed by Langevin dynamics. Experiments on single and multi-objective molecule generation are conducted to verify the effectiveness of the proposed framework. The paper is well-written, and the experiments are comprehensive. The major shortcoming of the paper is its limited novelty, since using EBM for graph generation is a straightforward application of the existing deep EBM framework. The contribution is marginal. During the discussion, two of the reviewers pointed out that the contribution is limited and marginal. Two reviewers pointed out that the performance gain obtained by the proposed model is marginal and not significant. One reviewer has a concern about the computational cost of MCMC. However, the authors didn’t provide a rebuttal to address the concerns raised by the reviewers. Given the fact that all the concerns from the reviewers remain, and the contribution and performance gain of the work are marginal, the AC recommends rejecting the paper.
test
[ "EF75QCBruTo", "MM9pxhMqFJ", "EMVGNwoBWTu", "7jfdZ1m__x3", "j9NsDIkCRNJ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I agree with reviewers Fo2L and JCco on limited novelty, simplified experiment setups, and marginal performance gain. Therefore I would keep my original score and recommend rejecting the paper.", "The paper proposes to use energy-based models (EBMs) for task-based molecule generation. The energy function is par...
[ -1, 3, 6, 3, 5 ]
[ -1, 4, 3, 3, 4 ]
[ "MM9pxhMqFJ", "iclr_2022_QCeFEThVn3", "iclr_2022_QCeFEThVn3", "iclr_2022_QCeFEThVn3", "iclr_2022_QCeFEThVn3" ]
iclr_2022_bjYunHo6LWR
Classification and Uncertainty Quantification of Corrupted Data using Semi-Supervised Autoencoders
Parametric and non-parametric classifiers often have to deal with real-world data, where corruptions like noise, occlusions, and blur are unavoidable – posing significant challenges. We present a probabilistic approach to classify strongly corrupted data and quantify uncertainty, despite the model only having been trained with uncorrupted data. A semi-supervised autoencoder trained on uncorrupted data is the underlying architecture. We use the decoding part as a generative model for realistic data and extend it by convolutions, masking, and additive Gaussian noise to describe imperfections. This constitutes a statistical inference task in terms of the optimal latent space activations of the underlying uncorrupted datum. We solve this problem approximately with Metric Gaussian Variational Inference (MGVI). The supervision of the autoencoder’s latent space allows us to classify corrupted data directly under uncertainty with the statistically inferred latent space activations. Furthermore, we demonstrate that the model uncertainty strongly depends on whether the classification is correct or wrong, setting a basis for a statistical "lie detector" of the classification. Independent of that, we show that the generative model can optimally restore the uncorrupted datum by decoding the inferred latent space activations.
Reject
This paper introduces a method for classifying corrupted data and quantifying uncertainty by training semi-supervised autoencoders only on clean (uncorrupted) data. Pro: The approach is novel utilizing metric Gaussian variational inference. Cons: More thorough experiments are needed: (1) extensive experiments on more complex data, (2) ablation study, (3) comparison to additional baselines. Summary: The paper introduces a novel method, however experiments are limited.
train
[ "Vog-R0G6U6W", "x07GdJrZTZ", "8oOftUAtbRE", "ivTSWK1JCgb", "hmaABNsqwyW", "YZ6H6wOIPlK", "tBWQIk_b2tV", "gSXoowaQrVe" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your review – we appreciate the feedback.\n\nWe acknowledge that further experiments with more realistic datasets and a comparison to other competitive algorithms would better justify our method. We will consider including your main concerns in future research and further experiments.", " Thank yo...
[ -1, -1, -1, -1, 3, 5, 5, 3 ]
[ -1, -1, -1, -1, 3, 3, 4, 4 ]
[ "gSXoowaQrVe", "tBWQIk_b2tV", "YZ6H6wOIPlK", "hmaABNsqwyW", "iclr_2022_bjYunHo6LWR", "iclr_2022_bjYunHo6LWR", "iclr_2022_bjYunHo6LWR", "iclr_2022_bjYunHo6LWR" ]
iclr_2022_fWK3qhAtbbk
A Study of Aggregation of Long Time-series Input for LSTM Neural Networks
Time series forecasting is the process of using time series data to create a prediction model. Long short-term memory (LSTM) models are the state of the art for time-series forecasting. However, LSTMs can handle input of only limited length, since when samples enter the model in sequence, the oldest samples must propagate through the LSTM cells' self-loop for each new sample, and thus their information diminishes in the process. This limits the length of the history that can be used in training for each time epoch. The common way of handling this problem is to partition the time records into uniform intervals, average each interval, and feed the LSTM with rather short sequences, each of which represents data from a longer history. In this paper, we show that this common data aggregation method is far from optimal. We generalize the method of partitioning the data and suggest an exponential partitioning. We show that non-uniform partitioning, and especially exponential partitioning, improves LSTM accuracy significantly. Using other aggregation functions (such as median or maximum) is shown to further improve the accuracy. Overall, using 7 public datasets, we show an improvement in accuracy of 6% to 27%.
Reject
This paper addresses unique windowing schemes for the input of an LSTM model for time-series forecasting, in particular exponential partitioning, where bin sizes increase as one moves further from the current time point. Although the basic idea is interesting and motivating and the experimental results are strong, as reviewers pointed out, the technical significance and novelty are limited because of a lack of theoretical or conceptual justification and motivation for the proposed approach. The authors’ claim is primarily based on experimental results. Other critical issues include the lack of comparison with recent advances specifically designed to attend to longer history lengths, and the lack of discussion of modern approaches. Other issues include presentation (e.g., grammatical errors) and the use of acronyms before introducing them.
train
[ "FC02EpgWX0n", "_qKJNKluUUI", "nB2bLVnbp2e", "lXCw5_lNkfU" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper looks at how to aggregate time-series inputs to LSTM models, and recommends to use non-uniform aggregation, with small recent bins and larger older bins. Furthermore, instead of simple averaging of time-samples within the bin, it also found value by using non-linear aggregations like median, max and min...
[ 3, 3, 3, 3 ]
[ 4, 3, 4, 5 ]
[ "iclr_2022_fWK3qhAtbbk", "iclr_2022_fWK3qhAtbbk", "iclr_2022_fWK3qhAtbbk", "iclr_2022_fWK3qhAtbbk" ]
iclr_2022__3bwD_KXl5K
WaveSense: Efficient Temporal Convolutions with Spiking Neural Networks for Keyword Spotting
Ultra-low power local signal processing is a crucial aspect for edge applications on always-on devices. Neuromorphic processors emulating spiking neural networks show great computational power while fulfilling the limited power budget as needed in this domain. In this work we propose spiking neural dynamics as a natural alternative to dilated temporal convolutions. We extend this idea to WaveSense, a spiking neural network inspired by the WaveNet architecture. WaveSense uses simple neural dynamics, fixed time-constants and a simple feed-forward architecture and hence is particularly well suited for a neuromorphic implementation. We test the capabilities of this model on several datasets for keyword-spotting. The results show that the proposed network beats the state of the art of other spiking neural networks and reaches near state-of-the-art performance of artificial neural networks such as CNNs and LSTMs.
Reject
The authors propose in this manuscript to use spiking neural networks (SNNs) as an efficient alternative to dilated temporal convolutions. They propose to utilize the membrane time constant of neurons instead of synaptic delays for memory efficiency. Training such networks with BPTT achieves better performance than other SNN-based methods and comes close to SOTA ANN solutions for keyword spotting. Pros: - The manuscript addresses an interesting problem. - Performance is good Cons: - Limited evaluations regarding efficiency, although this is a main point of the paper. - The technical novelty is limited. - One reviewer noted that the model is not actually an SNN, due to the use of multiple spikes per time step. - Benchmarking is weak. Little comparison with previous work. - Structure and writing of the paper need improvement. The authors did not reply to any of these critical points. In summary, although the idea seems interesting, the manuscript is not ready for publication.
train
[ "YlWzSMllSkY", "iKPAvLpwcWK", "LqqbSbYuOYR", "Ebn86_Tu35j", "SVYiqg8GJGQ", "pIauqpFLveK", "I-0oahotiH0", "ipUEEcyujM", "e0rzWXtmqUt" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes spiking neural networks (SNNs) as an efficient neuromorphic alternative to dilated temporal convolutions, based on the WaveNet architecture for keyword spotting. The main idea is to model the delay periods in dilated convolutions of WaveNet as synaptic time constants in SNNs for efficient imple...
[ 3, -1, -1, -1, -1, 5, 3, 3, 3 ]
[ 4, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "iclr_2022__3bwD_KXl5K", "ipUEEcyujM", "I-0oahotiH0", "pIauqpFLveK", "YlWzSMllSkY", "iclr_2022__3bwD_KXl5K", "iclr_2022__3bwD_KXl5K", "iclr_2022__3bwD_KXl5K", "iclr_2022__3bwD_KXl5K" ]
iclr_2022_sWqjiqlUDso
Path-specific Causal Fair Prediction via Auxiliary Graph Structure Learning
Algorithm fairness has become a trending topic, and it has a great impact on social welfare. Among different fairness definitions, path-specific causal fairness is a widely adopted one with great potential, as it distinguishes the fair and unfair effects that the sensitive attributes exert on algorithm predictions. Existing methods based on path-specific causal fairness either require the graph structure as prior knowledge or have high complexity in the calculation of path-specific effects. To tackle these challenges, we propose a novel causal graph based fair prediction framework, which integrates graph structure learning into fair prediction to ensure that unfair pathways are excluded from the causal graph. Furthermore, we generalize the proposed framework to scenarios where sensitive attributes can be non-root nodes and affected by other variables, which is commonly observed in real-world applications but hardly addressed by existing works. We provide theoretical analysis of the generalization bound for the proposed fair prediction method, and conduct a series of experiments on real-world datasets to demonstrate that the proposed framework can provide a better trade-off between prediction performance and algorithmic fairness.
Reject
In this paper, the authors aim to work within the path-specific framework to implement fair predictions by learning a causal graph in such a way that some path-specific effect is removed. Generally, the paper was not received very well by reviewers, with the primary concern being a lack of novelty, in particular in comparison with (Kyono et al.). One additional comment I wanted to make is this: any prediction task that is a part of a pipeline that contains a graphical model selection step is properly a "post-selection inference problem." Such problems are very challenging because: (a) Learning a graph from data is known to lack consistency at any rate (meaning that the algorithm is only pointwise consistent, but not uniformly consistent). This issue propagates to "downstream" tasks in the pipeline, including prediction problems. Probably, the way this issue would manifest in this work is that unless sample sizes were very large, there would be no particular reason to assume the correct causal path is removed. (b) Even if uniformly consistent modifications of structure learning algorithms were used, the uncertainty in learning the graph with error must be propagated to all subsequent steps in the pipeline. Doing so appropriately is very challenging. When revising the paper, in addition to taking reviewer comments into account, please consider how your method deals with post-selection inference issues -- I think this is a very interesting but challenging question that is likely to come up in peer review.
train
[ "o-iGvblm274", "5XLVxazh4ij", "ECCpaE8X2HD" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper introduces fair prediction framework which combines graph structure learning into fair prediction to ensure that unfair pathways are discouraged in the causal graph. It shows generalization bound and proves the efficacy of the underlying method using several experiments. This is an interesting paper w...
[ 3, 3, 3 ]
[ 3, 5, 4 ]
[ "iclr_2022_sWqjiqlUDso", "iclr_2022_sWqjiqlUDso", "iclr_2022_sWqjiqlUDso" ]
iclr_2022_VABfTTrrOv
Conjugation Invariant Learning with Neural Networks
Machine learning under the constraint of symmetries, given by group invariances or equivariances, has emerged as a topic of active interest in recent years. Natural settings for such applications include multi-reference alignment, cryo-electron microscopy, multi-object tracking, spherical images, and so on. A fundamental paradigm among such symmetries is the action of a group by symmetries, which often pertains to change of basis or relabelling of objects in pure and applied mathematics. Thus, a naturally significant class of functions consists of those that are intrinsic to the problem, in the sense of being independent of such base change or relabelling; in other words, invariant under the conjugation action by a group. In this work, we investigate such functions, known as class functions, leveraging tools from group representation theory. A fundamental ingredient in our approach is given by the so-called irreducible characters of the group, which are canonical tracial class functions related to its irreducible representations. Such functions form an orthogonal basis for the class functions, extending ideas from Fourier analysis to this domain, and afford a very explicit structure. Exploiting a tensorial structure on representations, which translates into a multiplicative algebra structure for irreducible characters, we propose to efficiently approximate class functions using polynomials in a small number of such characters. Thus, our approach provides a global, non-linear coordinate system for describing functions on the group that is intrinsic in nature, in the sense that it is independent of local charts, and can be easily computed in concrete models. We demonstrate that such non-linear approximation using a small dictionary can be effectively implemented using a deep neural network paradigm. This allows us to learn a class function efficiently from a dataset of its outputs.
Reject
The reviews received for this paper raise several critical concerns to which the authors have not provided a response. Thus, in its present form, the paper is not ready for publication.
val
[ "eYy0bhSEyb", "A9Gbbtdm4bu", "cZnNSZ5-M74", "oeOd06OXgeb", "_zKzRLDR7rR" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a strategy to learn class function on groups (and by extension on homogeneous spaces). The paper is motivated by applications in inverse problems and image processing (e.g. particle alignment in CryoEM) as well as integer programming (e.g. the quadratic assignment problem). The authors note tha...
[ 3, 5, 3, 3, 3 ]
[ 4, 2, 4, 4, 1 ]
[ "iclr_2022_VABfTTrrOv", "iclr_2022_VABfTTrrOv", "iclr_2022_VABfTTrrOv", "iclr_2022_VABfTTrrOv", "iclr_2022_VABfTTrrOv" ]
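The abstract above expands class functions in the orthogonal basis of irreducible characters. As a minimal, hedged illustration (not the paper's method, and for the small group S3 rather than a learned setting), the sketch below recovers the expansion coefficients via character orthogonality:

```python
# Illustrative sketch: expand a class function on the symmetric group S3
# in the basis of its irreducible characters, using character
# orthogonality to recover the coefficients.

# Conjugacy classes of S3: identity, transpositions, 3-cycles,
# with sizes 1, 3, 2 (|S3| = 6).
CLASS_SIZES = [1, 3, 2]
GROUP_ORDER = 6

# Character table of S3: rows are the trivial, sign, and standard
# (2-dimensional) irreducible characters; columns follow CLASS_SIZES.
CHARACTERS = [
    [1, 1, 1],    # trivial
    [1, -1, 1],   # sign
    [2, 0, -1],   # standard
]

def character_coefficients(f):
    """Inner products <f, chi> = (1/|G|) * sum_C |C| f(C) chi(C)."""
    return [
        sum(s * fv * cv for s, fv, cv in zip(CLASS_SIZES, f, chi)) / GROUP_ORDER
        for chi in CHARACTERS
    ]

def reconstruct(f):
    """Rebuild f from its character expansion f = sum_i <f, chi_i> chi_i."""
    coeffs = character_coefficients(f)
    return [
        sum(c * chi[j] for c, chi in zip(coeffs, CHARACTERS))
        for j in range(len(CLASS_SIZES))
    ]

# Any class function on S3 is an exact linear combination of the three
# irreducible characters, so reconstruction recovers it perfectly.
f = [4.0, 2.0, 1.0]  # arbitrary values on the three conjugacy classes
print(reconstruct(f))  # [4.0, 2.0, 1.0]
```

For larger groups the paper's point is that a small dictionary of characters (and polynomials in them) can approximate class functions well; this toy example only shows the exact orthogonal expansion.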
iclr_2022_XbatFr32NRm
Generalizing MLPs With Dropouts, Batch Normalization, and Skip Connections
A multilayer perceptron (MLP) is typically made of multiple fully connected layers with nonlinear activation functions. There have been several approaches to make them better (e.g., faster convergence, better convergence limit, etc.), but the research lacks structured ways to test them. We test different MLP architectures by carrying out experiments on age and gender datasets. We empirically show that by whitening inputs before every linear layer and adding skip connections, our proposed MLP architecture can achieve better performance. Since the whitening process includes dropouts, it can also be used to approximate Bayesian inference. We have open-sourced our code, and released models and docker images at https://github.com/anonymous.
Reject
The reviewers unanimously recommend rejecting this submission, and I concur with that recommendation. This submission is not appropriate for a machine learning conference like ICLR. It does not display a thorough understanding of the literature nor does it make a sufficiently valuable contribution. There is no need to "generalize" MLPs, the community knows quite well that we can use dropout, skip connections, and batch norm with them. Even the original dropout paper applies dropout on fully connected ReLU MLPs. As another example, the submission attributes skip connections to He et al. 2016, but skip connections (also known as "shortcut connections") were in common use in the late 1980s and throughout the 1990s in the Connectionist community, including for non-convolutional simple feedforward neural networks or "MLPs". They were a well known technique throughout neural network history, although the advent of deeper layered neural network architectures perhaps gave them new importance. He et al. certainly popularized them for modern neural network architectures and popularized their residual formulation. The earliest reference I could find easily for skip connections was "Learning to Tell Two Spirals Apart" which was published in 1988 by Kevin J. Lang and Michael J. Witbrock, but in general such architectural tricks were not viewed as particularly remarkable in the 1990s neural networks literature.
train
[ "wxUMPGSyGzx", "1j3afp_xfRh", "bEFPWfrj6K", "R5RVq_iVSL" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors conduct a careful study on improving the generalization capabilities of multilayer perceptions using dropout, batch normalization, and skip-connections. They show that including an Independent Component layer (using batch normalization and dropout) before every fully connected layer, followed by a ReLU...
[ 3, 3, 3, 3 ]
[ 4, 5, 4, 4 ]
[ "iclr_2022_XbatFr32NRm", "iclr_2022_XbatFr32NRm", "iclr_2022_XbatFr32NRm", "iclr_2022_XbatFr32NRm" ]
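The abstract above describes whitening inputs (batch normalization plus dropout) before every linear layer and adding skip connections. As a hedged, pure-Python sketch of that block ordering (an illustration, not the authors' released implementation; the weights and rates are made up):

```python
# Sketch of one MLP block: batch-normalize, then dropout, then a fully
# connected layer with ReLU, with an identity skip connection around the
# block. Forward pass only, on a small batch of feature vectors.
import random

def batch_norm(batch, eps=1e-5):
    """Normalize each feature across the batch to zero mean, unit variance."""
    n, d = len(batch), len(batch[0])
    means = [sum(row[j] for row in batch) / n for j in range(d)]
    vars_ = [sum((row[j] - means[j]) ** 2 for row in batch) / n for j in range(d)]
    return [[(row[j] - means[j]) / (vars_[j] + eps) ** 0.5 for j in range(d)]
            for row in batch]

def dropout(batch, p, rng):
    """Inverted dropout: zero units with prob p, rescale survivors by 1/(1-p)."""
    if p == 0.0:
        return batch
    return [[0.0 if rng.random() < p else x / (1.0 - p) for x in row]
            for row in batch]

def mlp_block(batch, weight, p=0.0, rng=None):
    """whiten -> dropout -> linear -> ReLU, plus an identity skip connection."""
    rng = rng or random.Random(0)
    h = dropout(batch_norm(batch), p, rng)
    d_out = len(weight[0])
    out = [[max(0.0, sum(row[i] * weight[i][j] for i in range(len(row))))
            for j in range(d_out)] for row in h]
    # Skip connection: add the block input back (shapes must match).
    return [[o + x for o, x in zip(orow, xrow)] for orow, xrow in zip(out, batch)]

# Demo with an identity weight matrix and no dropout.
out = mlp_block([[1.0, 2.0], [3.0, 4.0]], [[1.0, 0.0], [0.0, 1.0]], p=0.0)
print([[round(v, 3) for v in row] for row in out])  # [[1.0, 2.0], [4.0, 5.0]]
```

With dropout active at inference time, repeated forward passes give a cheap approximation to Bayesian predictive uncertainty, which is the connection the abstract alludes to.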
iclr_2022_Oxdln9khkxv
Learning the Representation of Behavior Styles with Imitation Learning
Imitation learning is one of the methods for reproducing expert demonstrations adaptively by learning a mapping between observations and actions. However, behavior styles such as motion trajectory and driving habits depend largely on the dataset of human maneuvers, and settle down to an average behavior style in most imitation learning algorithms. In this study, we propose a method named style behavior cloning (Style BC), which can not only infer the latent representation of behavior styles automatically, but also imitate different style policies from expert demonstrations. Our method is inspired by the word2vec algorithm, and we construct a behavior-style-to-action mapping which is similar to the word-embedding-to-context mapping in word2vec. Empirical results on popular benchmark environments show that Style BC significantly outperforms standard behavior cloning in prediction accuracy and expected reward. Furthermore, compared with various baselines, our policy, influenced by its assigned style embedding, can better reproduce the expert behavior styles, especially in complex environments or when the number of behavior styles is large.
Reject
All reviewers suggested rejection of the paper. This is based on concerns regarding novelty of results, clarity of presentation, simplicity of conducted experiments and missing ablation studies (and several other points raised in the reviews). The authors also did not submit a rebuttal. Hence I am recommending rejection of the paper.
train
[ "21IFER5ZiE", "BRCuMb2M_hX", "4gerYhXGmf", "E9-na3r1Lhs" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper is about imitation learning, and it particular it considers the task of replicating the style of behaviours. The authors propose a novel framework, called Style Behavioural Cloning that can learn the representation of different behaviour styles from demonstrations by experts. The approach is an extension...
[ 5, 3, 3, 5 ]
[ 3, 4, 4, 4 ]
[ "iclr_2022_Oxdln9khkxv", "iclr_2022_Oxdln9khkxv", "iclr_2022_Oxdln9khkxv", "iclr_2022_Oxdln9khkxv" ]
iclr_2022_bM5L3GLi6bG
Open Set Domain Adaptation with Zero-shot Learning on Graph
Open set domain adaptation focuses on transferring information from a richly labeled domain called the \emph{source domain} to a scarcely labeled domain called the \emph{target domain}, while classifying the unseen target samples as one \emph{unknown} class in an unsupervised way. Compared with close set domain adaptation, where the source domain and the target domain share the same class space, the classification of the unknown class makes it easier to adapt to realistic environments. In particular, after the recognition of the unknown samples, the robot can either ask for manual labeling or further develop the classification ability for the unknown classes based on pre-stored knowledge. Inspired by this idea, in this paper we propose a model for open set domain adaptation with zero-shot learning on the unknown classes. We utilize adversarial learning to align the two domains while rejecting the unknown classes. Then a knowledge graph is introduced to generate the classifiers for the unknown classes using a graph convolutional network (GCN). Thus the classification ability of the source domain is transferred to the target domain, and the model can distinguish the unknown classes with prior knowledge. We evaluate our model on digit datasets and the results show superior performance.
Reject
The paper addresses open-set DA, where samples from novel classes in the target domain get clustered into new (unlabeled) classes. A key novelty in the learning setup is that it is assumed that one has access to a knowledge graph over classes (both source and target). That KG is used for grouping target samples into novel classes. Reviewers were concerned that the method is not explained in sufficient detail and the experiments lacked comparisons with open-set DA baselines. No rebuttal was submitted. The paper cannot be accepted to ICLR.
train
[ "yigdgT7NOtA", "IEm99X81WbY", "HQP2tWnZKqu" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper formulates a novel problem in open-set domain adaptation and aims to classify the unknown classes in the target domain. This is different from the traditional open-set domain adaptation problem setting. To do this, additional knowledge of inter-class relations are employed and embedded so that knowledge ...
[ 5, 3, 3 ]
[ 4, 5, 4 ]
[ "iclr_2022_bM5L3GLi6bG", "iclr_2022_bM5L3GLi6bG", "iclr_2022_bM5L3GLi6bG" ]
iclr_2022_fSeD40P0XTI
ACCTS: an Adaptive Model Training Policy for Continuous Classification of Time Series
More and more real-world applications require classifying time series at every time step. For example, critical patients should be monitored for vital signs and diagnosed at all times to facilitate timely life-saving. For this demand, we propose a new concept, Continuous Classification of Time Series (CCTS), to achieve high-accuracy classification at every time step. Time series always evolve dynamically, with changing features introducing a multi-distribution form. Thus, different from the existing one-shot classification, the key of CCTS is to model multiple distributions simultaneously. However, most models struggle to achieve this due to their independent and identically distributed premise. If a model learns a new distribution, it will likely forget old ones. And if a model repeatedly learns similar data, it will likely be overfitted. Thus, the two main problems are catastrophic forgetting and overfitting. In this work, we define CCTS as a continual learning task with an unclear distribution division. But different divisions affect the two problems differently, and a fixed division rule may become invalid as the time series evolves. In order to overcome the two main problems and finally achieve CCTS, we propose a novel Adaptive model training policy, ACCTS. Its adaptability is reflected in two aspects: (1) Adaptive multi-distribution extraction policy. Instead of fixed rules and prior knowledge, ACCTS extracts data distributions adaptively to the time series evolution and the model change; (2) Adaptive importance-based replay policy. Instead of reviewing all old distributions, ACCTS only replays the important samples, adaptive to the contribution of the data to the model. Experiments on four real-world datasets show that our method can classify more accurately than all baselines at every time step.
Reject
This paper presents a reinforcement learning algorithm to classify the target variable at every time step. Although the paper proposes an important problem in many real-world applications, there were various major criticisms raised by reviewers. Most importantly, the technical novelty is not well motivated or justified. There is also a significant lack of a specific description of the proposed method, discussion of computational complexity, clarity of presentation, and evaluation metrics, which decreased the enthusiasm of the reviewers.
train
[ "rNEzv9Vs3iD", "3gt1QdJKgDy", "bQbY6WzRPPo", "dwK9FXdai6w" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors consider a classification setting in which a class label $c_t$ is to be predicted on the basis of time series observations $(x_1, ..., x_t)$ up to time $t$. The goal is to maintain accurate prediction even as the generating distribution evolves, while retaining good accuracy on (i.e. not \"forgetting\"...
[ 3, 3, 3, 3 ]
[ 3, 3, 3, 2 ]
[ "iclr_2022_fSeD40P0XTI", "iclr_2022_fSeD40P0XTI", "iclr_2022_fSeD40P0XTI", "iclr_2022_fSeD40P0XTI" ]
iclr_2022_KkIE-qePhW
LSP : Acceleration and Regularization of Graph Neural Networks via Locality Sensitive Pruning of Graphs
Graph Neural Networks (GNNs) have emerged as highly successful tools for graph-related tasks. However, real-world problems involve very large graphs, and the compute resources needed to fit GNNs to those problems grow rapidly. Moreover, the noisy nature and size of real-world graphs cause GNNs to over-fit if not regularized properly. Surprisingly, recent works show that large graphs often involve many redundant components that can be removed without compromising the performance too much. This includes node or edge removals during inference through GNNs layers or as a pre-processing step that sparsifies the input graph. This intriguing phenomenon enables the development of state-of-the-art GNNs that are both efficient and accurate. In this paper, we take a further step towards demystifying this phenomenon and propose a systematic method called Locality-Sensitive Pruning (LSP) for graph pruning based on Locality-Sensitive Hashing. We aim to sparsify a graph so that similar local environments of the original graph result in similar environments in the resulting sparsified graph, which is an essential feature for graph-related tasks. To justify the application of pruning based on local graph properties, we exemplify the advantage of applying pruning based on locality properties over other pruning strategies in various scenarios. Extensive experiments on synthetic and real-world datasets demonstrate the superiority of LSP, which removes a significant amount of edges from large graphs without compromising the performance, accompanied by a considerable acceleration.
Reject
This paper deals with the important practical problem of speeding up GNNs. Although the proposed method based on LSH may be considered to be a rather too simple preprocessing, it would be worthwhile to share the practical idea with the community as far as the proposed method is shown effective enough. However, as pointed out by several reviewers, it is concerned that the experimental validation of this paper is not sufficient. Further and deeper validations will make this paper stronger.
train
[ "E8c2vpbQLFQ", "fF6Avh9ag4n", "AFlZMLh7d_P", "9BdGgzElb-0", "_e-gJRY0nrI", "F6nfkCL3gfX", "iPJqcVXk74N", "RXL1rs3VDzs" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your time spent for reviewing our paper. In what follows, we address specific comments.\n\n```\nAlso, why do the curves for the proposed approaches stop at around 0.45 in Fig 4?\n```\nThe average node degree of these graph is small. LSP prunes the graph in a local manner and considers every node ind...
[ -1, -1, -1, 5, 3, 1, 3, 3 ]
[ -1, -1, -1, 4, 4, 4, 4, 4 ]
[ "_e-gJRY0nrI", "F6nfkCL3gfX", "9BdGgzElb-0", "iclr_2022_KkIE-qePhW", "iclr_2022_KkIE-qePhW", "iclr_2022_KkIE-qePhW", "iclr_2022_KkIE-qePhW", "iclr_2022_KkIE-qePhW" ]
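The LSP abstract above prunes edges via Locality-Sensitive Hashing so that similar local environments stay similar after sparsification. The following is only a hedged toy sketch of the general idea (random-hyperplane LSH on node features; the paper's actual LSP criterion is more involved): keep an edge only when its endpoints hash to the same bucket.

```python
# Toy sketch: prune graph edges with random-hyperplane LSH, keeping an
# edge only when its endpoints' feature vectors fall into the same hash
# bucket, so nodes with similar features tend to stay connected.
import random

def hyperplane_signature(x, planes):
    """Sign pattern of x against a set of hyperplanes (one LSH code)."""
    return tuple(1 if sum(a * b for a, b in zip(p, x)) >= 0 else 0 for p in planes)

def lsh_prune(features, edges, planes=None, num_planes=4, seed=0):
    dim = len(next(iter(features.values())))
    if planes is None:
        # Random Gaussian hyperplanes, the classic LSH family for cosine
        # similarity.
        rng = random.Random(seed)
        planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(num_planes)]
    codes = {v: hyperplane_signature(x, planes) for v, x in features.items()}
    return [(u, v) for (u, v) in edges if codes[u] == codes[v]]

# Demo with fixed axis-aligned planes so the result is deterministic:
# nodes 0 and 1 point in similar directions, node 2 in the opposite one.
features = {0: [1.0, 0.9], 1: [0.9, 1.1], 2: [-1.0, -1.2]}
edges = [(0, 1), (0, 2), (1, 2)]
print(lsh_prune(features, edges, planes=[[1.0, 0.0], [0.0, 1.0]]))  # [(0, 1)]
```

In practice one would use several independent hash tables and keep an edge if the endpoints collide in any of them, trading pruning rate against recall of similar pairs.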
iclr_2022_uUN0Huq-n_V
Polyphonic Music Composition: An Adversarial Inverse Reinforcement Learning Approach
Most recent approaches to automatic music harmony composition adopt deep supervised learning to train a model using a set of human-composed songs as training data. However, these approaches suffer from inherent limitations of the chosen deep learning models, which may lead to unpleasing harmonies. This paper explores an alternative approach to harmony composition using a combination of novel Deep Supervised Learning, Deep Reinforcement Learning, and Inverse Reinforcement Learning techniques. In this novel approach, our model selects the next chord in the composition (action) based on the previous notes (states), therefore allowing us to model harmony composition as a reinforcement learning problem in which we look to maximize an overall accumulated reward. However, designing an appropriate reward function is known to be a very tricky and difficult process. To overcome this problem, we propose learning a reward function from a set of human-composed tracks using Adversarial Inverse Reinforcement Learning. We start by training a Bi-axial LSTM model using supervised learning and improve upon it by tuning it using Deep Q-learning. Instead of using GANs to directly generate a music composition similar to human compositions, we adopt GANs to learn the reward function of the music trajectories from human compositions. We then combine the learned reward function with a reward based on music theory rules to improve the generation of the model trained by supervised learning. The results show improvement over a pre-trained model without reinforcement learning fine-tuning, with respect to a set of objective metrics and preference in subjective user evaluation.
Reject
This work proposes a system for generating piano music (in the symbolic domain) using a learned reward function. Reviewers raised concerns about the organisation of the paper, clarity of writing, a lack of experimental comparison with previously published approaches (and the quality of the baseline), several unsubstantiated claims, and some missing related work. Unfortunately no attempt was made to address these issues.
train
[ "PVarHsa_569", "l7A7PN3FTWj", "wd-bK27yifr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper presents a method for polyphonic piano-based symbolic music generation based on previous work on RL-tuned recurrent networks. This paper adds an adversarial inverse reinforcement learning (AIRL) step to estimate a reward function which is used in tandem to music theoretic rewards during the Q-network tu...
[ 5, 3, 6 ]
[ 4, 5, 4 ]
[ "iclr_2022_uUN0Huq-n_V", "iclr_2022_uUN0Huq-n_V", "iclr_2022_uUN0Huq-n_V" ]
iclr_2022_dEelotBE6e2
Defending Against Backdoor Attacks Using Ensembles of Weak Learners
A recent line of work has shown that deep networks are susceptible to backdoor data poisoning attacks. Specifically, by injecting a small amount of malicious data into the training distribution, an adversary gains the ability to control the behavior of the model during inference. We propose an iterative training procedure for removing poisoned data from the training set. Our approach consists of two steps. We first train an ensemble of weak learners to automatically discover distinct subpopulations in the training set. We then leverage a boosting framework to exclude the poisoned data and recover the clean data. Our algorithm is based on a novel bootstrapped measure of generalization, which provably separates the clean from the dirty data under mild assumptions. Empirically, our method successfully defends against a state-of-the-art dirty label backdoor attack. We find that our approach significantly outperforms previous defenses.
Reject
The paper presents a new defense against backdoor attacks based on the discovery of homogeneous populations in the training data and subsequent filtering of poisoned data due to its difference from the said populations. The method has a solid theoretical foundation which, however, requires strong assumptions on attacks and benign data. Due to these assumptions the theoretical guarantees alone cannot ensure that the defense is robust against adaptive attacks. The experimental validation of the proposed method is limited to one benchmark dataset (CIFAR); additional results are briefly presented in the response but not elaborated on.
train
[ "qUqZsGmgvVT", "lik0phZ-IN", "rpkNqn7qZJQ", "YG2s7QbixA", "KDDgjV49Xsb", "-LsqjqQClz2", "ShTigcNdPHH", "m_M8cibgg4G", "epNwpXOU1-A" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the detailed comments. Our responses are below.\n\n> 1- To me, this paper is mostly about outlier detection. It is not clear to me why authors focus on backdoor poisoning attacks in the introduction as rest of the paper does not have anything to do with backdoor attacks.\n\nThe main dist...
[ -1, -1, -1, -1, -1, 3, 8, 5, 3 ]
[ -1, -1, -1, -1, -1, 4, 3, 3, 4 ]
[ "-LsqjqQClz2", "rpkNqn7qZJQ", "epNwpXOU1-A", "m_M8cibgg4G", "ShTigcNdPHH", "iclr_2022_dEelotBE6e2", "iclr_2022_dEelotBE6e2", "iclr_2022_dEelotBE6e2", "iclr_2022_dEelotBE6e2" ]
iclr_2022_Fia60I79-4B
TS-BERT: A fusion model for Pre-trainning Time Series-Text Representations
Many tasks use news text information and stock data to predict financial crises. In existing research, the two usually play primary and auxiliary roles in the prediction task: one of the news text and the stock data serves as the primary information source, and the other as the auxiliary information source. This paper proposes a fusion model for pre-training time series-text representations, in which news text and stock data have equal status and are treated as two different modalities describing crises. Our model achieves the best results on the task of predicting financial crises.
Reject
The paper proposes a method for predicting stock market crises using a deep learning approach which combines time series stock market data with text from news articles. Their experiments show that the proposed method works better than the same model using only news or only stock price data, and a couple of deep learning baselines. All the reviewers pointed out that this paper lacks novelty and significant technical contributions. The experiments are performed on a single dataset with incomplete baselines, and hence are insufficient to support the claimed advantages of the proposed method. The writing quality is not up to the standard of ICLR papers, with too many grammatical mistakes, typos, and unjustified arguments/claims. The clarity of the writing is poor. The authors did not provide a rebuttal.
train
[ "wIbQVRAFfgV", "zuG-zKflBn2", "OSmpoAaRlSw" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a method for predicting stock market crises using a deep learning approach which combines time series stock market data with text from news articles. Experiments find that the method works better than the same model using only news or only stock prices, and a couple of deep learning baselines. ...
[ 3, 1, 3 ]
[ 4, 4, 5 ]
[ "iclr_2022_Fia60I79-4B", "iclr_2022_Fia60I79-4B", "iclr_2022_Fia60I79-4B" ]
iclr_2022_f3QTgKQW0TD
Manifold Distance Judge, an Adversarial Samples Defense Strategy Based on Service Orchestration
Deep neural networks (DNNs) are playing an increasingly significant role in the modern world. However, they are vulnerable to adversarial examples that are generated by adding specially crafted perturbations. Most defenses against adversarial examples have focused on refining the DNN models, which often sacrifices the performance and computational cost of models on benign samples. In this paper, we propose a manifold distance detection method to distinguish between legitimate samples and adversarial samples by measuring their different distances on the manifold. The manifold distance detection method neither modifies the protected models nor requires knowledge of the process for generating adversarial samples. Inspired by the effectiveness of manifold distance detection, we demonstrate a well-designed orchestrated defense strategy, named Manifold Distance Judge (MDJ), which selects the best image processing method to effectively expand the manifold distance between legitimate and adversarial samples, and thus enhances the performance of the subsequent manifold distance detection. In tests on the ImageNet dataset, MDJ is effective against most adversarial samples under white-box, gray-box, and black-box attack scenarios. We show empirically that the orchestration strategy MDJ is significantly better than Feature Squeezing in recall rate. Meanwhile, MDJ achieves high detection rates against the CW attack and the DI-FGSM attack.
Reject
The paper proposes a manifold distance-based detection against adversarial samples, i.e., using the difference between the highest and second-highest softmax outputs from a model to detect adversarial examples. All the reviewers gave negative scores. The main concerns lie in 1) the poor quality of writing; 2) the contributions of the paper not being clearly stated; and 3) limited engagement with well-known recommendations from the research community on the evaluation of defenses/detection methods for adversarial examples. No rebuttal was provided. Thus, I cannot recommend accepting the paper to ICLR.
train
[ "-PiU1Y9O31L", "ffq5hjGwAr1", "lfyBLZv2yzW", "nctP5GBePlb" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes using the difference between the highest and second-highest softmax outputs from a model to detect adversarial examples. It also considers the use of different preprocessing methods to modify the output probabilities such that the adversarial examples can be well-differentiated from benign ones...
[ 1, 3, 1, 5 ]
[ 4, 4, 5, 4 ]
[ "iclr_2022_f3QTgKQW0TD", "iclr_2022_f3QTgKQW0TD", "iclr_2022_f3QTgKQW0TD", "iclr_2022_f3QTgKQW0TD" ]
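The reviews above summarize the paper's detection signal as the difference between the highest and second-highest softmax outputs. A hedged sketch of that signal (the threshold below is illustrative, not taken from the paper):

```python
# Sketch: score an input by the margin between its top-two softmax
# probabilities; small margins (near-ties between classes) are flagged
# as suspicious.
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def softmax_margin(logits):
    """Difference between the highest and second-highest probabilities."""
    probs = sorted(softmax(logits), reverse=True)
    return probs[0] - probs[1]

def is_suspicious(logits, threshold=0.5):
    """Flag inputs whose top-two class probabilities are close together."""
    return softmax_margin(logits) < threshold

print(is_suspicious([8.0, 1.0, 0.5]))   # confident prediction -> False
print(is_suspicious([2.0, 1.9, 0.5]))   # near-tie between classes -> True
```

The paper's orchestration step then chooses an image preprocessing operation intended to widen this margin gap between legitimate and adversarial inputs before thresholding.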
iclr_2022_kUGYDTJUcuc
Unifying Top-down and Bottom-up for Recurrent Visual Attention
The idea of using recurrent neural networks for visual attention has gained popularity in the computer vision community. Although the recurrent visual attention model (RAM) leverages glimpses with larger patch sizes to increase its scope, this may result in high variance and instability. For example, we need a Gaussian policy with high variance to explore objects of interest in a large image, which may cause randomized search and unstable learning. In this paper, we propose to unify top-down and bottom-up attention for recurrent visual attention. Our model exploits image pyramids and Q-learning to select regions of interest in the top-down attention mechanism, which in turn guides the policy search in the bottom-up approach. In addition, we add another two constraints to the bottom-up recurrent neural networks for better exploration. We train our model in an end-to-end reinforcement learning framework, and evaluate our method on visual classification tasks. The experimental results outperform a convolutional neural network (CNN) baseline and the bottom-up recurrent models with visual attention.
Reject
This submission receives mixed reviews. One reviewer leans positive while two reviewers are negative. They raise several issues regarding improper evaluation, insufficient experimental analysis, missing baseline and SOTA network comparisons, unclear presentation, and weak technical motivation. In the rebuttal and discussion phases, the authors do not make any response to these reviews. After checking the whole submission, the AC agrees with these two reviewers that there are several drawbacks in the aspects of technical presentation and experimental configuration. The authors should take these suggestions into consideration and make further improvements upon the current submission.
train
[ "c12ZLjsRz5z", "ju-3L2l7iXO", "HalC1D_Tp13" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In this paper, the authors propose a novel method to unify the top-down and bottom-up attention together for recurrent visual attention. They also propose two constraints in the bottom-up recurrent neural networks for better balancing the trade-off between exploration and exploitation when searching local regions....
[ 6, 3, 3 ]
[ 2, 4, 4 ]
[ "iclr_2022_kUGYDTJUcuc", "iclr_2022_kUGYDTJUcuc", "iclr_2022_kUGYDTJUcuc" ]
iclr_2022_kO-wQWwqnO
L2BGAN: An image enhancement model for image quality improvement and image analysis tasks without paired supervision
The paper presents an image enhancement model, L2BGAN, to translate low light images to bright images without paired supervision. We introduce the use of geometric and lighting consistency along with a contextual loss criterion. These, when combined with multiscale color, texture and edge discriminators, prove to provide competitive results. We perform extensive experiments on benchmark datasets to compare our results visually as well as objectively. We observe the performance of L2BGAN on real-time driving datasets which are subject to motion blur, noise and other artifacts. We further demonstrate the application of image understanding tasks on our enhanced images using the DarkFace and ExDark datasets.
Reject
None of the reviewers championed the paper. Many weaknesses were shared across the reviewers: none of the contributions is individually novel, the paper is not well written, and the results do not show significant improvement over the prior state of the art. No rebuttal was provided. The AC agrees with the reviewers that the paper is not ready for publication at ICLR.
train
[ "L_5gixXJy1g", "GGPgYaf9GjT", "8l2x8jrG05I", "McIJigBFPK6" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors propose a new image enhancement specific for low light images. They exploit the concepts of geometric and lighting consistency together with a contextual loss criterion. \nExtensive experiments are performed on benchmark datasets to compare their results both visually and objectively. The main strengt...
[ 5, 3, 5, 3 ]
[ 3, 5, 3, 4 ]
[ "iclr_2022_kO-wQWwqnO", "iclr_2022_kO-wQWwqnO", "iclr_2022_kO-wQWwqnO", "iclr_2022_kO-wQWwqnO" ]
iclr_2022_OqHtVOo-zy
Estimating Instance-dependent Label-noise Transition Matrix using DNNs
In label-noise learning, estimating the transition matrix is a hot topic as the matrix plays an important role in building statistically consistent classifiers. Traditionally, the transition from clean labels to noisy labels (i.e., clean label transition matrix) has been widely exploited to learn a clean label classifier by employing the noisy data. Motivated by that classifiers mostly output Bayes optimal labels for prediction, in this paper, we study to directly model the transition from Bayes optimal labels to noisy labels (i.e., Bayes label transition matrix) and learn a classifier to predict Bayes optimal labels. Note that given only noisy data, it is ill-posed to estimate either the clean label transition matrix or the Bayes label transition matrix. But favorably, Bayes optimal labels have less uncertainty compared with the clean labels, i.e., the class posteriors of Bayes optimal labels are one-hot vectors while those of clean labels are not. This enables two advantages to estimate the Bayes label transition matrix, i.e., (a) we could theoretically recover a set of noisy data with Bayes optimal labels under mild conditions; (b) the feasible solution space is much smaller. By exploiting the advantages, we estimate the Bayes label transition matrix by employing a deep neural network in a parameterized way, leading to better generalization and superior classification performance.
Reject
This paper received a majority vote for rejection. In the internal discussion, one reviewer updated his/her score from 1 to 3 according to the author response. I have read all the materials of this paper, including the manuscript, appendix, comments and responses. Based on the information collected from all reviewers and my personal judgement, I can make the recommendation on this paper: *rejection*. Here are the comments that I summarized, which include my opinion and evidence. **Interesting Idea** Every reviewer, including me, agrees that the idea of modelling the Bayes label transition is novel and interesting. **The motivation lacks supportive evidence** The second motivation, that "the feasible solution space of the Bayes label transition matrix is much smaller than that of the clean label transition matrix", is not well supported. The authors should theoretically or empirically demonstrate this point. The current description of uncertainty is not strong enough. Moreover, even if it holds, the benefits are not illustrated. The feasible solution space, even with a small coverage area, is continuous with infinitely many solutions. **A new concept** The authors tried to sell the concept of a new transition matrix, but failed. I believe this might result from the organization and presentation. The authors spent too many pages introducing others' work. At least, a formal definition of the new concept should be given. In the current version, Definition 1 is from Cheng et al., 2020 on distilled examples. **Title** From the title alone, I would guess DNNs are a key component or a selling point of this paper. Actually, no. We expect the authors to provide insights into what benefits DNNs offer over other techniques and how to apply DNNs to estimate the transition matrix. If this is not a selling point, this word might be removed from the title. **Algorithm 1** I am a little surprised that the only algorithm listed in this paper is label noise generation. 
Instead, the paper's own proposed algorithm is what one would expect to see. **Experimental Evaluation** The experimental results look much better than the other baselines. It is a little confusing that some best results are bolded and some are not. **Presentation** Although I did not notice obvious grammar errors, some sentences are very long (3 lines). They make the idea difficult to follow; I had to read these sentences several times. In my eyes, this is the biggest issue! Presentation means how to sell the idea to the audience (not only reviewers, but also future readers) in an easily understood way. The current version spends much space introducing others' work; in contrast, the original or key part is not well illustrated. Although this paper has a novel idea and good experimental support, the other issues listed above demonstrate that the current version is not ready for a top-tier conference. No objection from reviewers was raised against this recommendation.
val
[ "ffEZRILxE5a", "GAv8MlYT9Z", "6VQ6LZvafHu", "md1XodR_x_g", "NkMa67peDcF", "qo6rYoPyNNt", "ddcv95fu1a7", "-zr1sBWmAVQ", "JBGrWBkfREO", "atUHboC_dit", "NmSc2oWvAEB" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to thank the reviewer. We value the suggestions and will take them in the next version.", " Dear authors,\n\nThanks for your response. My apologies if my review was too harsh and made you unhappy. I agree that the paper has a good idea, but unfortunately it suffered from many issues (the overall f...
[ -1, -1, -1, 3, -1, -1, -1, -1, 3, 8, 5 ]
[ -1, -1, -1, 5, -1, -1, -1, -1, 4, 4, 4 ]
[ "6VQ6LZvafHu", "ddcv95fu1a7", "-zr1sBWmAVQ", "iclr_2022_OqHtVOo-zy", "NmSc2oWvAEB", "atUHboC_dit", "md1XodR_x_g", "JBGrWBkfREO", "iclr_2022_OqHtVOo-zy", "iclr_2022_OqHtVOo-zy", "iclr_2022_OqHtVOo-zy" ]
iclr_2022_jJWK09skiNl
Zero-shot detection of daily objects in YCB video dataset
For robots to be able to manipulate objects, they have to sense the objects' locations. With the development of visual data collection and processing technology, robots are gradually evolving to localize objects in a greater field of view rather than being limited to a small space where the object could appear. To train such a robot vision system, pictures of all the objects need to be taken under various orientations and illuminations. In the traditional manufacturing environment, this is applicable since the objects involved in the production process do not change frequently. However, in the vision of smart manufacturing and high-mix-low-volume production, the parts and products for robots to handle may change frequently. Thus, it is unrealistic to re-train the vision system for new products and tasks. Under this situation, we see the necessity of introducing zero-shot object detection. Zero-shot object detection is a subset of unsupervised learning, and it aims to detect novel objects in an image with the knowledge learned from, and only from, seen objects. With a zero-shot object detection algorithm, considerable time can be saved on collecting training data and training the vision system. Previous works focus on detecting objects in outdoor scenes, such as bikes, cars, people, and dogs. The detection of daily objects is actually more challenging since the knowledge that can be learned from each object is very limited. In this work, we explore the zero-shot detection of daily objects in indoor scenes since the objects' size and environment are closely related to the manufacturing setup. The YCB Video Dataset is used in this work, which contains 21 objects in various categories. To the best of our knowledge, no previous work has explored zero-shot detection at this object size level or on this dataset.
Reject
This paper describes an approach for zero-shot detection of seen and unseen objects. All five reviewers unanimously agree that the paper needs to be rejected. One of the main concerns is the lack of technical novelty/originality. The reviewers also point out missing citations and comparisons to prior work, and underwhelming experiments. The authors have not provided any rebuttal. We recommend rejecting the paper.
val
[ "lXEzQyenui", "NLzCZveEgr9", "09kpT_ytor8", "vYQ-VNHuQF_", "DsIVM7E3I8g" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper performs zero shot detection of seen and unseen objects in scenarios with more fine-grained division of objects. For example, in practical computer vision applications such as industrial and indoor environments, it might be necessary to differentiate between the same object in different colors, sizes and...
[ 3, 3, 1, 1, 1 ]
[ 4, 4, 4, 5, 5 ]
[ "iclr_2022_jJWK09skiNl", "iclr_2022_jJWK09skiNl", "iclr_2022_jJWK09skiNl", "iclr_2022_jJWK09skiNl", "iclr_2022_jJWK09skiNl" ]
iclr_2022_3iH9ewU_KJT
MT-GBM: A Multi-Task Gradient Boosting Machine with Shared Decision Trees
Despite the success of deep learning in computer vision and natural language processing, the Gradient Boosted Decision Tree (GBDT) remains one of the most powerful tools for applications with tabular data such as e-commerce and FinTech. However, applying GBDT to multi-task learning is still a challenge. Unlike deep models that can jointly learn a shared latent representation across multiple tasks, GBDT can hardly learn a shared tree structure. In this paper, we propose the Multi-Task Gradient Boosting Machine (MT-GBM), a GBDT-based method for multi-task learning. MT-GBM can find shared tree structures and split branches according to multi-task losses. First, it assigns multiple outputs to each leaf node. Next, it computes the gradient corresponding to each output (task). Then, we propose an algorithm to combine the gradients of all tasks and update the tree. Finally, we apply MT-GBM to LightGBM. Experiments show that MT-GBM improves the performance of the main task significantly, which means the proposed MT-GBM is efficient and effective.
Reject
This paper proposes a multi-task version of Gradient Boosted Machines (GBMs). The paper proposes a learning algorithm that adaptively adjusts the learning rate per task. Empirical evaluation is carried out on two datasets with the method implemented in the LightGBM framework. The reviewers thought that the paper is not very clear. They were not ready to accept the paper's claims based on the current version. In particular, the algorithms are hard to follow, the empirical evaluation is not easy to follow, and comparisons to related work are missing. The authors did not offer a response to the reviews.
test
[ "jTo4cy3_Lpd", "T2byMCYXyax", "bnlgmyScrb-", "V7ier4LGAgW" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Authors propose a way to perform multi-task learning with gradient boosted decision trees. Multitask learning has shown promise in deep learning and other machine learning systems. It is hard to perform multitask learning in decision trees. The main contributions from the paper are as follows\nAuthors propose a mu...
[ 3, 3, 3, 3 ]
[ 3, 3, 4, 4 ]
[ "iclr_2022_3iH9ewU_KJT", "iclr_2022_3iH9ewU_KJT", "iclr_2022_3iH9ewU_KJT", "iclr_2022_3iH9ewU_KJT" ]
iclr_2022_bVT5w39X0a
Bayesian Relational Generative Model for Scalable Multi-modal Learning
The study of complex systems requires the integration of multiple heterogeneous and high-dimensional data types (e.g. multi-omics). However, previous generative approaches for multi-modal inputs suffer from two shortcomings. First, they are not stochastic processes, leading to poor uncertainty estimations over their predictions. This is mostly due to the computationally intensive nature of traditional stochastic processes, such as Gaussian Processes (GPs), which makes their applicability limited in multi-modal learning frameworks. Second, they are not able to effectively approximate the joint posterior distribution of multi-modal data types with various missing patterns. More precisely, their model assumptions result in miscalibrated precisions and/or a high computational cost of the sub-sampling procedure. In this paper, we propose a class of stochastic processes that learns a graph of dependencies between samples across multi-modal data types by adopting priors over the relational structure of the given data modalities. The dependency graph in our method, multi-modal Relational Neural Process (mRNP), not only posits distributions over the functions and naturally enables rapid adaptation to new observations via its predictive distribution, but also makes mRNP scalable to large datasets through mini-batch optimization. We also introduce a mixture-of-graphs (MoG) in our model construction and show that it can address the aforementioned limitations in joint posterior approximation. Experiments on both toy regression and classification tasks using real-world datasets demonstrate the potential of mRNP for offering higher prediction accuracies as well as more robust uncertainty estimates compared to existing baselines and state-of-the-art methods.
Reject
The paper extends the FNP model to multimodal settings using a mixture of graphs. However, there are legitimate concerns about the quality of the experiments, such as the baselines, as the reviewers mention. For example, mRNP is supervised, and the comparison to DeepIMV is not fair. I encourage the authors to address these appropriately in the next version of the paper. The authors can significantly improve the presentation of ideas. Please avoid hyperbole and excessively bold statements, as the reviewers have pointed out. This way, there will be room for a better demonstration of the novel parts of the paper. For example, the authors misuse the term "generative" for the proposed mRNP. There are multiple hand-waving statements about the role of uncertainty that are not well supported in the current draft. I believe this can become a good paper by addressing the reviewers' comments.
train
[ "IuBXBjqDZb", "2zGqsF8EqvJ", "H4OvalAxeNh", "KnRklbQWCeC", "Zyww0ct_hj", "3KUdLeWjtm1", "yki0ZSia1J", "jPp7sWBUHN", "8tNVoc0VIA0", "JPOa379RtJC", "MA7-IPShuNd" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " After carefully reading your response and the comments of other reviewers, I decided to keep my opinion unchanged. The methods and experiments in this paper still need to be improved. For example, almost all reviewers mentioned that the experiment was inadequate. At the same time, both the reviewer bvzL and I we...
[ -1, -1, -1, -1, -1, -1, -1, 3, 5, 3, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 5, 3, 4 ]
[ "2zGqsF8EqvJ", "JPOa379RtJC", "jPp7sWBUHN", "H4OvalAxeNh", "8tNVoc0VIA0", "yki0ZSia1J", "MA7-IPShuNd", "iclr_2022_bVT5w39X0a", "iclr_2022_bVT5w39X0a", "iclr_2022_bVT5w39X0a", "iclr_2022_bVT5w39X0a" ]
iclr_2022_XJFGyJEBLuz
Born Again Neural Rankers
We introduce Born Again neural Rankers (BAR) in the Learning to Rank (LTR) setting, where student rankers, trained in the Knowledge Distillation (KD) framework, are parameterized identically to their teachers. Unlike existing ranking distillation work, which pursues a good trade-off between performance and efficiency, BAR adapts the idea of Born Again Networks (BAN) to ranking problems and significantly improves the ranking performance of students over the teacher rankers without increasing model capacity. By examining the key differences between ranking distillation and common distillation for classification problems, we find that the key success factors of BAR lie in (1) an appropriate teacher score transformation function, and (2) a novel listwise distillation framework, both of which are specifically designed for ranking problems and are rarely studied in the knowledge distillation literature. Using state-of-the-art neural ranking structures, BAR is able to push the limits of neural rankers beyond a recent rigorous benchmark study and significantly outperforms strong gradient boosted decision tree based models on 7 out of 9 key metrics, for the first time in the literature. In addition to the strong empirical results, we give theoretical explanations of why listwise distillation is effective for neural rankers.
Reject
The authors are strongly encouraged to elaborate further on the novelty of their method, and to give detailed (either theoretical or experimental) justifications for the design choices they make in the paper. Finally, the paper could benefit from additional experiments, as outlined in the reviews.
train
[ "RMKU67E_AEM", "MTAqUB8-xVQ", "ngDcJatZeqJ", "re5Ho0pSOX2", "A_2pZvJuzsH", "XDHEBGv_GER" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for your review. \n\nEnsembling lambdaMART models were done in (https://openreview.net/forum?id=Ut1vF_q_vC) Appendix B.6, where there was limited benefit. The hypothesis was “model ensembles tend to be more effective for neural rankers with stronger stochastic nature”, which we concur. \n\nIn ...
[ -1, -1, -1, 3, 3, 8 ]
[ -1, -1, -1, 4, 4, 4 ]
[ "XDHEBGv_GER", "A_2pZvJuzsH", "re5Ho0pSOX2", "iclr_2022_XJFGyJEBLuz", "iclr_2022_XJFGyJEBLuz", "iclr_2022_XJFGyJEBLuz" ]
iclr_2022_WQIdU90Gsu
Compound Multi-branch Feature Fusion for Real Image Restoration
Image restoration is a challenging and ill-posed problem that has long been a standing issue. However, most learning-based restoration methods target only one degradation type, which means they lack generalization. In this paper, we propose a multi-branch restoration model inspired by the Human Visual System (i.e., Retinal Ganglion Cells) which can achieve multiple restoration tasks in a general framework. The experiments show that the proposed multi-branch architecture, called CMFNet, has competitive performance on four datasets, covering image dehazing, deraindrop, and deblurring, which are very common applications for autonomous cars. The source code and pretrained models for the three restoration tasks are available at https://github.com/publish_after_accepting/CMFNet.
Reject
All reviewers have substantial concerns regarding this work including novelty and experimental validation. The authors do not provide a rebuttal for the raised concerns. As such, the area chair agrees with the reviewers and does not recommend it be accepted at this conference.
train
[ "XZ2FJHtni9K", "LGbOoknPytP", "YeRwvZI3oKJ", "YvLxqeDuv55", "LUoJMxwyFs9" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a new model CMFNet to handle multiple image restoration tasks. More specifically, they introduce a new skip connection scheme, a new loss function and demonstrate comperitive results on 3 tasks: deblurring, deraining and dehazing. Strengths\n1. The idea to handle multiple restoration tasks in o...
[ 3, 1, 6, 3, 3 ]
[ 5, 5, 4, 4, 4 ]
[ "iclr_2022_WQIdU90Gsu", "iclr_2022_WQIdU90Gsu", "iclr_2022_WQIdU90Gsu", "iclr_2022_WQIdU90Gsu", "iclr_2022_WQIdU90Gsu" ]
iclr_2022_oLYTo-pL0Be
Towards Scheduling Federated Deep Learning using Meta-Gradients for Inter-Hospital Learning
Given the abundance and ease of access of personal data today, individual privacy has become of paramount importance, particularly in the healthcare domain. In this work, we aim to utilise patient data extracted from multiple hospital data centres to train a machine learning model without sacrificing patient privacy. We develop a scheduling algorithm in conjunction with a student-teacher algorithm that is deployed in a federated manner. This allows a central model to learn from batches of data at each federated node. The teacher acts between data centres to update the main task (student) algorithm using the data stored in the various data centres. We show that the scheduler, trained using meta-gradients, can effectively organise training and, as a result, train a machine learning model on a diverse dataset without needing explicit access to the patient data. We achieve state-of-the-art performance and show how our method overcomes some of the problems faced in federated learning, such as node poisoning. We further show how the scheduler can be used as a mechanism for transfer learning, allowing different teachers to work together in training a student for state-of-the-art performance.
Reject
This work describes an interesting approach using a reinforcement learning algorithm for federated learning. The paper is well organized, and the use case of performing federated learning while preserving patient privacy is also important. However, the paper has room for improvement. Important baselines for client selection are missing, so the deep reinforcement learning approach is not well motivated. Many important technical details are missing, such as hyperparameters and the data distributions for MNIST and CIFAR. The approach also lacks novelty: DRL has been used for neural scheduling before, and the authors do not suggest improvements over that. Finally, the experiments showing robustness to backdoor attacks are unconvincing and would benefit from more analysis.
test
[ "yMQlc_dv1XN", "4V5JeCrDYG7", "j1UpldaPrjC", "jFhHcljwoOC", "VTBuR6FdlUF", "4KC7uFjqclC", "Li5_FdpVi0K", "8HESsJY89t-" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " After reading the author's rebuttal, there still exist several concerns. \n1. From my perspective, applying the existing deep reinforcement learning for scheduling into federated learning is actually a contribution. However, the authors should clarify that why they adopt this type of scheduling technique for FL...
[ -1, -1, -1, -1, -1, 5, 5, 3 ]
[ -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "jFhHcljwoOC", "VTBuR6FdlUF", "4KC7uFjqclC", "Li5_FdpVi0K", "8HESsJY89t-", "iclr_2022_oLYTo-pL0Be", "iclr_2022_oLYTo-pL0Be", "iclr_2022_oLYTo-pL0Be" ]
iclr_2022_AAeMQz0x4nA
Learning Explicit Credit Assignment for Multi-agent Joint Q-learning
Multi-agent joint Q-learning based on Centralized Training with Decentralized Execution (CTDE) has become an effective technique for multi-agent cooperation. During centralized training, these methods are essentially addressing the multi-agent credit assignment problem. However, most of the existing methods \emph{implicitly} learn the credit assignment just by ensuring that the joint Q-value satisfies the Bellman optimality equation. In contrast, we formulate an \emph{explicit} credit assignment problem where each agent gives its suggestion about how to weight individual Q-values to explicitly maximize the joint Q-value, besides guaranteeing the Bellman optimality of the joint Q-value. In this way, we can conduct credit assignment among multiple agents and along the time horizon. Theoretically, we give a gradient ascent solution for this problem. Empirically, we instantiate the core idea with deep neural networks and propose Explicit Credit Assignment joint Q-learning (ECAQ) to facilitate multi-agent cooperation in complex problems. Extensive experiments justify that ECAQ achieves interpretable credit assignment and superior performance compared to several advanced baselines.
Reject
This paper provides an interesting method to address the CTDE problem in MARL. While the experiments are promising, the theory is either insufficient or not rigorous. One of the reviewers believes that there is a flaw in the paper. There was an extensive discussion between the authors and the reviewer. The authors could not convince the reviewer regarding the apparent flaw.
train
[ "WKG45jHynF4", "-psKDTI0_r", "PuZeidr7Gc7", "CIXnxSmFRPr", "67Kd-_98wls", "B8sfDIvRVVT", "Faeczpp_DvA", "cepv0iKvwXo", "7v0_eACGKU8", "jFv_RbjqSbe", "MUdoWDZ94Dy", "OTEfg_1vnOf", "RLxa7pQeZ6A", "Sl7pBOx-0k", "lv7w4nFxhpL", "HDrk4i9nsuU", "gB4oNyS1kE", "OrjyjKmSsY", "DhsKaDsDmDK",...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "...
[ " **Q2-1: On the credit assignment. However, we can simply modify the values of $Q_i$ to adapt to the changes in $\\alpha_i$ while keeping the global value $Q_{total}$ does not change.** \nI do not think this is a problem. Think that QMIX and many previous methods also have this property: we can modify QMIX's $Q_i...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 3, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 4 ]
[ "-psKDTI0_r", "PuZeidr7Gc7", "CIXnxSmFRPr", "MUdoWDZ94Dy", "jFv_RbjqSbe", "7v0_eACGKU8", "cepv0iKvwXo", "RLxa7pQeZ6A", "TKq_uDYffj", "iLTHG9TqvX6", "OTEfg_1vnOf", "Sl7pBOx-0k", "lv7w4nFxhpL", "HDrk4i9nsuU", "DhsKaDsDmDK", "gB4oNyS1kE", "OrjyjKmSsY", "Bcar01ZXTmd", "w-PCHe8sFXQ", ...
iclr_2022_cpstx0xuvRY
Information-Theoretic Generalization Bounds for Iterative Semi-Supervised Learning
We consider iterative semi-supervised learning (SSL) algorithms that iteratively generate pseudo-labels for a large amount of unlabelled data to progressively refine the model parameters. In particular, we seek to understand the behaviour of the {\em generalization error} of iterative SSL algorithms using information-theoretic principles. To obtain bounds that are amenable to numerical evaluation, we first work with a simple model---namely, the binary Gaussian mixture model. Our theoretical results suggest that when the class conditional variances are not too large, the upper bound on the generalization error decreases monotonically with the number of iterations, but quickly saturates. The theoretical results on the simple model are corroborated by extensive experiments on several benchmark datasets, such as the MNIST and CIFAR datasets, in which we notice that the generalization error improves after several pseudo-labelling iterations but saturates afterwards.
Reject
This paper studies the generalization error of semi-supervised learning, where the algorithm gradually pseudo-labels the data throughout the learning process. Theoretically, an upper bound on the generalization error is shown to decompose into a term that vanishes with successive labeling and another that does not, leading to a plateau in performance. This is studied analytically for a mixture of two Gaussians. Experimentally, similar behavior is also observed to occur in more realistic scenarios. What reviewers struggled with is understanding which parts of the results are, to some extent, obvious, and which offer deeper insight. What is obvious: even if a Bayes classifier were available for pseudo-labeling, feature overlap means that there is a plateau of noise beyond which labeling cannot improve. What is not obvious: is it even worth pseudo-labeling, or could we make things worse? The merit of the paper is in elucidating the latter. There are several concerns that remain, however, even after discussions. First, there is the question of whether the insight is substantial or not. Here, some comparison and contrast with existing literature suggests otherwise. Second, there is the question of whether the experimentally observed behavior is an instance of the phenomenon described by the theory. Here, better structured experiments are needed to tie in with the theory. Overall, although the paper presents compelling insight, it is not yet ready for dissemination. It needs a stronger argument for its added theoretical contribution and clearer experiments to support that the presented theory is indeed behind the empirical behavior of these iterative algorithms.
train
[ "SqttSSEmg35", "eI0stOvus0u", "YohCARyZ-bW", "2Sq85qDhSW", "twoo6xRxR_F", "eqgNQJ28iA0", "OBFTuxZ6gUx", "CZdpI9u7S-", "eWi_IfP89jT", "bXPNE_R0a1e", "OsRk6tYUVs4", "JqI00sWGK0k" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the prompt response to ours. \n\nWe submit that we *analyze* a toy example of a binary GMM. The reason for this is that any other classification model would result in analysis that, with high likelihood, yields a generalization error expression that is not amenable to interpretation and ...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 5, 4 ]
[ "eI0stOvus0u", "twoo6xRxR_F", "2Sq85qDhSW", "JqI00sWGK0k", "OsRk6tYUVs4", "bXPNE_R0a1e", "CZdpI9u7S-", "eWi_IfP89jT", "iclr_2022_cpstx0xuvRY", "iclr_2022_cpstx0xuvRY", "iclr_2022_cpstx0xuvRY", "iclr_2022_cpstx0xuvRY" ]
iclr_2022_fgcIb5gd99r
Multi-scale fusion self attention mechanism
Self attention is widely used in various tasks because it can directly calculate the dependency between words, regardless of distance. However, existing self attention lacks the ability to extract phrase-level information. This is because self attention only considers the one-to-one relationship between words and ignores the one-to-many relationship between words and phrases. Consequently, we design a multi-scale fusion self attention model for phrase information to resolve the above issues. Based on the traditional attention mechanism, multi-scale fusion self attention extracts phrase information at different scales by setting convolution kernels at different levels and calculates the corresponding attention matrix at each scale, so that the model can better extract phrase-level information. Compared with the traditional self attention model, we also design a unique attention matrix sparsity strategy to better select the information that the model needs to pay attention to, making our model more effective. Experimental results show that our model is superior to the existing baseline models on the relation extraction and GLUE tasks.
Reject
This paper proposes a multi-scale fusion self attention model for phrase information, which incorporates convolutional models into self-attention to explicitly handle word-to-phrase correlation. This is paired with a sparse masking strategy to balance between word-to-word attention and word-to-phrase attention. The model achieves good performance on downstream tasks. While the proposed method is simple and looks effective, reviewers have expressed concerns with lack of novelty (see the suggested missing references), lack of clarity in the experimental details, and unclear writing. Unfortunately, there was no response from the authors, which makes me recommend rejection. We urge the authors to follow the reviewers' suggestions in a future iteration of their work.
train
[ "tlvRtCzAuB", "8oSJcRzrfNe", "ZEALDFyaNHK", "ZAeiAAQvehM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper incorporates convolutional models into self-attention to explicitly handle word-to-phrase correlation, paired with a sparse masking strategy to balance between word-to-word attention and word-to-phrase attention. The model achieves good performance on GLUE and RE tasks. Strengths:\n\nA simple method tha...
[ 3, 3, 3, 3 ]
[ 5, 4, 3, 3 ]
[ "iclr_2022_fgcIb5gd99r", "iclr_2022_fgcIb5gd99r", "iclr_2022_fgcIb5gd99r", "iclr_2022_fgcIb5gd99r" ]
iclr_2022_Vx8l4vwv94
JOINTLY LEARNING TOPIC SPECIFIC WORD AND DOCUMENT EMBEDDING
Document embedding generally ignores underlying topics and thus fails to capture polysemous terms, which can lead to improper thematic representation. Moreover, embedding a new document at test time requires a complex and expensive inference method. Some models first learn word embeddings and later learn the underlying topics using a clustering algorithm for document representation; those methods miss the mutual interaction between the two paradigms. To this end, we propose a novel document-embedding method based on weighted averaging of jointly learned topic-specific word embeddings, called TDE: Topical Document Embedding, which efficiently captures syntactic and semantic properties by utilizing three levels of knowledge, i.e., word, topic, and document. TDE obtains document vectors on the fly during the joint learning process of the topical word embeddings. Experiments demonstrate that the proposed method yields better topical word embeddings when using the document vector as a global context, and better document classification results on the obtained document embeddings than recent related models.
Reject
Strengths * The paper is relatively clearly written. * A new method is proposed. Weaknesses * The evaluation is weak. The experimental setup is not clear enough. More quantitative evaluation is necessary. There are strong, newer baselines that need to be compared against. * The relation to existing work needs to be described more clearly. * The novelty of the work is limited; it is a combination of existing methods. * Justification for the proposed method needs to be provided. * The writing of the paper can be improved.
train
[ "_ggUiv4vhXA", "Rj1rLCTQi7I", "kRJdtJqLdb", "O27v9ETPQXs" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper focuses on learning document embeddings, presenting a topic-document embedding (TDE) method employing syntactic and semantic properties by jointly learning topic and word embedding in a single framework. The proposed TDE approach follows corruption mechanism to create the global context and randomly sel...
[ 5, 3, 3, 3 ]
[ 4, 4, 5, 4 ]
[ "iclr_2022_Vx8l4vwv94", "iclr_2022_Vx8l4vwv94", "iclr_2022_Vx8l4vwv94", "iclr_2022_Vx8l4vwv94" ]
iclr_2022_87Ks7PvYVJi
Offline Decentralized Multi-Agent Reinforcement Learning
In many real-world multi-agent cooperative tasks, due to high cost and risk, agents cannot continuously interact with the environment and collect experiences during learning, but have to learn from offline datasets. However, the transition probabilities calculated from the dataset can differ greatly from the transition probabilities induced by the learned policies of other agents, creating large errors in value estimates. Moreover, the experience distributions of agents' datasets may vary widely due to diverse behavior policies, causing large differences in value estimates between agents. Consequently, agents will learn uncoordinated suboptimal policies. In this paper, we propose MABCQ, which exploits value deviation and transition normalization to modify the transition probabilities. Value deviation optimistically increases the transition probabilities of high-value next states, and transition normalization normalizes the biased transition probabilities of next states. Together they encourage agents to discover potentially optimal and coordinated policies. Mathematically, we prove the convergence of Q-learning under the non-stationary transition probabilities after modification. Empirically, we show that MABCQ greatly outperforms baselines and reduces the difference in value estimates between agents.
Reject
This paper studies the offline multi-agent RL problem. The finding is that the dataset collected by one agent could be very different for other agents. The authors provide two solutions to this problem. Although the work is interesting, the reviewers found that there are many imprecise mathematical statements, and some of the methods are not well motivated. Hence, the overall recommendation is to reject.
train
[ "tWHnTngxWH1", "AVP4bFMZW22", "Cpks235wJn", "S3fUL-q26s", "qYntxq7bNY", "svGa2EWpdn-", "apHmidbdfjJ", "Q-zAbFMoHNA" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " > Value deviation aims to increase the transition probabilities of the high-value next states, but it will increase the value estimate of the current state and action. Value deviation is not well motivated.\n\nThe \"overestimate\" caused by value deviation is exactly what we need. For example, in offline datasets...
[ -1, -1, -1, -1, 5, 3, 3, 3 ]
[ -1, -1, -1, -1, 4, 2, 3, 5 ]
[ "Q-zAbFMoHNA", "apHmidbdfjJ", "svGa2EWpdn-", "qYntxq7bNY", "iclr_2022_87Ks7PvYVJi", "iclr_2022_87Ks7PvYVJi", "iclr_2022_87Ks7PvYVJi", "iclr_2022_87Ks7PvYVJi" ]
iclr_2022_l9tb1bKyfMn
LMSA: Low-relation Mutil-head Self-Attention Mechanism in Visual Transformer
The Transformer backbone network with the self-attention mechanism as its core has achieved great success in the fields of natural language processing and computer vision. However, although the self-attention mechanism brings high performance, it also incurs higher computational complexity compared to classic visual feature extraction methods. To further reduce the complexity of the self-attention mechanism and explore a lighter version of it for computer vision, in this paper we design a novel lightweight self-attention mechanism: Low-relation Mutil-head Self-Attention (LMSA), which is superior to recent self-attention mechanisms. Specifically, the proposed self-attention mechanism breaks the barrier of dimensional consistency in the traditional self-attention mechanism, resulting in lower computational complexity and a smaller storage footprint. In addition, employing the new mechanism releases part of the computing consumption of the Transformer network and makes better use of it. Experimental results show that the dimensional consistency inside the traditional self-attention mechanism is unnecessary. In particular, using Swin as the backbone model for training, the accuracy on the CIFAR-10 image classification task is improved by 0.43$\%$, while the resource consumption of a single self-attention module is reduced by 64.58$\%$, and the number of model parameters and the model size are reduced by more than 15$\%$. By appropriately compressing the dimensions of the self-attention relationship variables, the Transformer network can be more efficient and even perform better. These results prompt us to rethink the reason why the self-attention mechanism works.
Reject
This paper proposes a simple change to the Transformer architecture to improve efficiency. While the reviewers appreciate the writing, all the reviewers agree that the novelty and contributions of the paper are limited, both in the problem being solved and in the level of experiments. The authors did not respond to the reviewers' comments. Hence I recommend rejection.
train
[ "Rd0n9nSicMh", "9vNwH-Ijrzc", "VpMjMroGFK" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper presents a mechanism to reduce the computation costs of a standard self-attention module, named LMSA. \nThe basic idea of LMSA is to reduce the dimension of key&query of self-attention(SA) while keeping the dimension of value unchanged. Therefore, the computational complexity of SA will be reduced from ...
[ 3, 1, 3 ]
[ 5, 5, 5 ]
[ "iclr_2022_l9tb1bKyfMn", "iclr_2022_l9tb1bKyfMn", "iclr_2022_l9tb1bKyfMn" ]
iclr_2022_lKrchawH4sB
Heterologous Normalization
Batch Normalization has become a standard technique for training modern deep networks. However, its effectiveness diminishes when the batch size becomes smaller since the batch statistics estimation becomes inaccurate. This paper proposes Heterologous Normalization, which computes normalization's mean and standard deviation from different pixel sets to take advantage of different normalization methods. Specifically, it calculates the mean like Batch Normalization to maintain the advantage of Batch Normalization. Meanwhile, it enlarges the number of pixels from which the standard deviation is calculated, thus alleviating the problem caused by the small batch size. Experiments show that Heterologous Normalization surpasses or achieves comparable performance to existing homologous methods, with large or small batch sizes on various datasets.
Reject
All reviewers recommended rejection. There were no responses from the authors.
val
[ "BNYKVju04Yk", "2GShibkwZ9g", "2qFGttHwa6", "Uh3fP21yhSI" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper introduces Heterologous Normalization (HN), an alternative to normalization techniques in neural networks such as BN, LN, etc. The key insight is, the optimal statistics of mean and standard deviation may be derived from different pixel sets respectively. Based on the observation, the proposed HN calcula...
[ 5, 5, 5, 5 ]
[ 4, 4, 4, 4 ]
[ "iclr_2022_lKrchawH4sB", "iclr_2022_lKrchawH4sB", "iclr_2022_lKrchawH4sB", "iclr_2022_lKrchawH4sB" ]
iclr_2022_KVYq2Ea90PC
A Study of Face Obfuscation in ImageNet
Face obfuscation (blurring, mosaicing, etc.) has been shown to be effective for privacy protection; nevertheless, object recognition research typically assumes access to complete, unobfuscated images. In this paper, we explore the effects of face obfuscation on the popular ImageNet challenge visual recognition benchmark. Most categories in the ImageNet challenge are not people categories; however, many incidental people appear in the images, and their privacy is a concern. We first annotate faces in the dataset. Then we demonstrate that face blurring and overlaying---two typical obfuscation techniques---have minimal impact on the accuracy of recognition models. Concretely, we benchmark multiple deep neural networks on face-obfuscated images and observe that the overall recognition accuracy drops only slightly (<= 1.0%). Further, we experiment with transfer learning to 4 downstream tasks (object recognition, scene recognition, face attribute classification, and object detection) and show that features learned on face-obfuscated images are equally transferable. Our work demonstrates the feasibility of privacy-aware visual recognition, improves the highly-used ImageNet challenge benchmark, and suggests an important path for future visual datasets.
Reject
This paper received 3 quality reviews, with 2 rated 5 and 1 rated 6. While the reviewers recognize the various contributions and insights made by this work, it was also pointed out that this work lacks technical novelty. The authors agreed with these concerns and argued that this work provides a service to the community, citing the ImageNet and COCO papers. The AC agrees with the contributions and the major concerns. Furthermore, the AC would like to point out that, in terms of the level of effort, this work might not be on par with ImageNet and COCO. All things considered, the AC believes that this work is not ready for publication in its current form, and hence recommends rejection.
train
[ "YiErhboQMk", "hkhY-LAudvC", "wh4857w61de", "2zWZaOHfB5h", "wmytw7qfx33", "TBl_jsly6qe", "BPCd70GZ4f", "O_pQAS3OQuo", "QMmULX5fYXv", "muaJz2tXtFF", "Y-UbKxlTgxo", "6m-g0uRpct-" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for engaging with our work and providing valuable feedback! We further address the questions about the face annotation process and the contributions of the work \n\n# Face Annotation Process\n\nAppendix A includes the full details of our face annotation method. Here we selectively cover a few points relate...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 5 ]
[ "hkhY-LAudvC", "wh4857w61de", "2zWZaOHfB5h", "TBl_jsly6qe", "BPCd70GZ4f", "6m-g0uRpct-", "Y-UbKxlTgxo", "muaJz2tXtFF", "iclr_2022_KVYq2Ea90PC", "iclr_2022_KVYq2Ea90PC", "iclr_2022_KVYq2Ea90PC", "iclr_2022_KVYq2Ea90PC" ]
iclr_2022_zou-Ry64vqx
FedMorph: Communication Efficient Federated Learning via Morphing Neural Network
The two fundamental bottlenecks in Federated Learning (FL) are communication and computation on heterogeneous edge networks, restricting both model capacity and user participation. To address these issues, we present FedMorph, an approach to automatically morph the global neural network into a sub-network to reduce both the communication and local computation overheads. FedMorph distills a fresh sub-network from the original one at the beginning of each communication round while keeping its `knowledge' as similar as possible to that of the model aggregated from local clients in a federated averaging (FedAvg)-like way. The network morphing process incorporates constraints, e.g., model size or computation FLOPs, as an extra regularizer in the objective function. To make the objective function solvable, we relax the model with the concept of a soft mask. We empirically show that FedMorph, without any other tricks, reduces communication and computation overheads and increases generalization accuracy. E.g., it provides an $85\times$ reduction in server-to-client communication and an $18\times$ reduction in local device computation on the MNIST dataset with ResNet8 as the training network. Combined with benchmark compression approaches, e.g., TopK sparsification, FedMorph collectively provides an $847\times$ reduction in upload communication.
Reject
The paper proposes FedMorph to address the communication and computation heterogeneity problem in federated learning. The proposed FedMorph extracts sub-models from the global model and dispatches them to the clients to perform local training. Then, the morphed sub-networks are aggregated into the global model via distillation. The paper reports two to three orders of magnitude of savings in communication bandwidth using the proposed method. However, as agreed by all reviewers, the paper has some critical problems, listed below, that prevent it from being accepted at this point. 1. The idea of training smaller networks to work around heterogeneity is not novel, though the authors proposed a formulation that optimizes the subnetwork together with a distillation loss when updating server model parameters. The authors should include other distillation-related FL work in the Related Work and compare against it in terms of: (1) communication cost savings, (2) easing the overfitting problem, (3) reducing the compute and memory footprints of local training. 2. Optimizing the distillation loss relies on using a validation dataset on the server, and the quality of distillation relies heavily on whether the distribution of the validation dataset is close to that of the decentralized training set. This seems to be a rather strong requirement in federated learning, where the data is hard to obtain and the distribution may evolve over time. It therefore makes me question whether distillation is a realistic proposal in practice. 3. The test dataset is used as the distillation dataset, which is a major experimental flaw: pixels from the test set are leaked into the training algorithm. 4. It may be unrealistic to assume that there exists a representative validation dataset for the global model in FL. The proposed method's error (in Theorem 2) depends on the distance between the distillation and the local training datasets, which can be arbitrarily large in practice.
val
[ "hkPXCHOzHtQ", "1IC3-oeqyzH", "upNvidE5UkZ", "uiqWXkMfrm3", "OsAWxCANwD8", "dpkb_KCG46" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We sincerely thank the objective suggestions from the reviewer. Here we need to make some clarifications for the misunderstandings of the reviewer.\nAfter seriously double-checking our codes, we believe the results of ResNet18 on Cifar10 on NonIID partition differ from the reviewers' might due to the different hy...
[ -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, 3, 3, 4 ]
[ "dpkb_KCG46", "OsAWxCANwD8", "uiqWXkMfrm3", "iclr_2022_zou-Ry64vqx", "iclr_2022_zou-Ry64vqx", "iclr_2022_zou-Ry64vqx" ]
iclr_2022_FCxWzalZp9N
AF$_2$: Adaptive Focus Framework for Aerial Imagery Segmentation
As a specific semantic segmentation task, aerial imagery segmentation has been widely employed in understanding high spatial resolution (HSR) remote sensing images. Besides common issues (e.g., large scale variation) faced by general semantic segmentation tasks, aerial imagery segmentation has some unique challenges, the most critical of which is foreground-background imbalance. There have been some recent efforts that attempt to address this issue by proposing sophisticated neural network architectures, since they can be used to extract informative multi-scale feature representations and increase the discrimination of object boundaries. Nevertheless, many of them merely utilize those multi-scale representations in an ad-hoc manner and disregard the fact that the semantic meaning of objects of various sizes could be better identified via receptive fields of diverse ranges. In this paper, we propose the Adaptive Focus Framework (AF$_2$), which adopts a hierarchical segmentation procedure and focuses on adaptively utilizing multi-scale representations generated by widely adopted neural network architectures. In particular, a learnable module, called the Adaptive Confidence Mechanism (ACM), is proposed to determine which scale of representation should be used for the segmentation of different objects. Comprehensive experiments show that AF$_2$ significantly improves accuracy on three widely used aerial benchmarks while being as fast as mainstream methods.
Reject
Four experts reviewed this paper and all recommended rejection. There was no rebuttal. The reviewers raised many concerns regarding the paper, such as missed citations, lack of comparison with related methods, and some presentation issues. Considering the reviewers' concerns, we regret that the paper cannot be recommended for acceptance at this time. The authors are encouraged to consider the reviewers' comments when revising the paper for submission elsewhere.
train
[ "nE5pgBMDlOY", "3LiMiZWMNu", "EMqCsONEbnA", "SHri1GvrZ3" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In this paper, the Adaptive Focus Framework (AF2), which adopts a hierarchical segmentation procedure and focuses on adaptively utilizing multiscale representations generated by widely adopted neural network architectures, is proposed. Particularly, a learnable module, called Adaptive Confidence Mechanism (ACM), i...
[ 3, 3, 5, 5 ]
[ 4, 4, 5, 3 ]
[ "iclr_2022_FCxWzalZp9N", "iclr_2022_FCxWzalZp9N", "iclr_2022_FCxWzalZp9N", "iclr_2022_FCxWzalZp9N" ]
iclr_2022_qPQRIj_Y_EW
Learning to Solve an Order Fulfillment Problem in Milliseconds with Edge-Feature-Embedded Graph Attention
The order fulfillment problem is one of the fundamental combinatorial optimization problems in supply chain management, and it is required to be solved in real time for modern online retailing. Such a problem is computationally hard to address by exact mathematical programming methods. In this paper, we propose a machine learning method to solve it in milliseconds by formulating a tripartite graph and learning the best assignment policy through the proposed edge-feature-embedded graph attention mechanism. The edge-feature-embedded graph attention considers high-dimensional edge features and accounts for heterogeneous information, which are important characteristics of the studied optimization problem. The model is also size-invariant across problem instances of any scale, and it can address cases that are completely unseen during training. Experiments show that our model substantially outperforms the baseline heuristic method in optimality. The online inference time is milliseconds, which is thousands of times faster than exact mathematical programming methods.
Reject
A GNN model is developed for the supervised, real-time learning of optimal solutions for an order-fulfillment problem. GNNs with fast forward computations are naturally one good choice given the real-time nature of the problem. While the complexity of the problem and formulation were generally appreciated by the referees, there were major concerns about the experimental setup, datasets, technical claims, sample complexity, and suitability for ICLR. Overall, the paper does not seem ready for publication in ICLR, and the authors are encouraged to consider and work on the reviews carefully.
train
[ "sfT2qjVnVfZ", "_zQhCgB0-wv", "5bUcD7bvPM9", "n3jS7I3z2U" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Authors propose to solve an order fulfillment problem with imitation learning backed by GNNs. The specific version of the fulfillment problem is nontrivial: discussed has a hierarchy in order (an order can be decomposed into sub-orders), and items become forbidden over time. SCIP package is used as the expert poli...
[ 3, 5, 3, 6 ]
[ 4, 4, 3, 3 ]
[ "iclr_2022_qPQRIj_Y_EW", "iclr_2022_qPQRIj_Y_EW", "iclr_2022_qPQRIj_Y_EW", "iclr_2022_qPQRIj_Y_EW" ]
iclr_2022_gX9Ub6AwAd
ANOMALY DETECTION WITH FRAME-GROUP ATTENTION IN SURVEILLANCE VIDEOS
The paper proposes an end-to-end abnormal behavior detection network to detect strenuous movements in slow-moving crowds, such as running, bicycling, and throwing from a height. The algorithm groups continuous video frames into a frame group and uses a frame-group feature extractor to obtain spatio-temporal information. An implicit-vector-based attention mechanism operates on the extracted frame-group features to highlight the important features. We use fully connected layers to transform the feature space and reduce computation. Finally, group pooling maps the processed frame-group features to abnormality scores. The network input is flexible enough to cope with video streams, and the network output is an abnormality score. A compound loss function is designed to help the model improve classification performance. We integrate several commonly used anomaly detection datasets and test the algorithms on the combined dataset. The experimental results show that the proposed algorithm has significant advantages in many objective metrics compared with other anomaly detection algorithms.
Reject
This paper handles anomaly detection in surveillance videos. The authors propose to use a frame-group attention method for handling this task. However, all the reviewers have concerns about the novelty, clarity, and experimental evaluation of this work. Moreover, no rebuttal was provided by the authors.
test
[ "vkfMRYiAIrH", "4vhuYnL1Tn", "VxMdtdyxmFS", "bNGGxmRO2U6" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a novel neural network that maps short chunks of eight video frames to a final probability score (0-1) of abnormality. The network is trained and evaluated in a supervised fashion, using the Avenue, UMN and UCSD data (all three datasets capture semantic anomalies e.g. walking in opposite direct...
[ 3, 3, 1, 1 ]
[ 3, 4, 4, 5 ]
[ "iclr_2022_gX9Ub6AwAd", "iclr_2022_gX9Ub6AwAd", "iclr_2022_gX9Ub6AwAd", "iclr_2022_gX9Ub6AwAd" ]
iclr_2022_q2ZaVU6bEsT
CONTEXT AUGMENTATION AND FEATURE REFINEMENT NETWORK FOR TINY OBJECT DETECTION
Tiny objects are hard to detect due to their low resolution and small size. The poor detection performance on tiny objects is mainly caused by limitations of the network and the imbalance of the training dataset. A new feature pyramid network is proposed to combine context augmentation and feature refinement. Features from multi-scale dilated convolutions are fused and injected into the feature pyramid network from top to bottom to supplement context information. A channel and spatial feature refinement mechanism is introduced to suppress conflicting information in multi-scale feature fusion and prevent tiny objects from being submerged in conflicting information. In addition, a data augmentation method called copy-reduce-paste is proposed, which can increase the contribution of tiny objects to the loss during training, ensuring more balanced training. Experimental results show that the mean average precision of the proposed network for tiny targets on the VOC dataset reaches 16.9% (IOU=0.5:0.95), which is 3.9% higher than YOLOV4, 7.7% higher than CenterNet, and 5.3% higher than RefineDet.
Reject
This paper proposes to address tiny object detection with the help of a context augmentation module (CAM) and a feature refinement module (FRM). To obtain rich context information for feature augmentation, CAM merges multi-scale dilated convolution features. The proposed method has been verified on the PASCAL VOC dataset with considerable improvements over the latest baseline methods. The major concern with this paper is novelty: similar ideas such as context augmentation and multi-scale fusion have been commonly applied in previous works. The authors also failed to provide results on the COCO benchmark, which is more important than PASCAL VOC. Moreover, the authors have not offered any rebuttal to address the reviewers' concerns.
train
[ "SsgfGIXixZR", "3grkXvGoyV", "9bz9Qam5VHK" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper aims at the tiny object detection and point out the issues are small context feature, semantic feature conflicts, and less tiny objects in training data. To solve the aforementioned problems, authors introduce context augmentation module (CAM), design a feature refinement module, and adopt data-augmenta...
[ 3, 3, 3 ]
[ 5, 4, 4 ]
[ "iclr_2022_q2ZaVU6bEsT", "iclr_2022_q2ZaVU6bEsT", "iclr_2022_q2ZaVU6bEsT" ]
iclr_2022_t7y6MKiyiWx
Classical and Quantum Algorithms for Orthogonal Neural Networks
Orthogonal neural networks have recently been introduced as a new type of neural network imposing orthogonality on the weight matrices. They can achieve higher accuracy and avoid vanishing or exploding gradients in deep architectures. Several classical gradient descent methods have been proposed to preserve orthogonality while updating the weight matrices, but these techniques suffer from long running times and/or provide only approximate orthogonality. In this paper, we introduce a new type of neural network layer called the Pyramidal Circuit, which implements an orthogonal matrix multiplication. It allows for gradient descent with perfect orthogonality at the same asymptotic running time as a standard fully connected layer. This algorithm is inspired by quantum computing and can therefore be applied on a classical computer as well as on a near-term quantum computer. It could become the building block for quantum neural networks and faster orthogonal neural networks.
Reject
This paper introduces a quantum pyramidal circuit for the computation of orthogonal layers in neural networks and implements the algorithm on simulators and on a quantum computer to illustrate its effectiveness. It also obtains an O(n^2) classical algorithm for forward and backpropagation. The reviewers generally found strength in the derivations and implementation on real quantum machines. Some reviewers regarded the contributions as strong and novel, while others expressed skepticism about the novelty and the robustness of the algorithm. Having read the paper in detail, I concur with the several reviewers who found the literature review of classical orthogonal NNs to be lacking. In particular, one reviewer highlights similarities with Householder reflections and Givens rotations, for which substantial literature already exists. Without a proper comparison to this existing work, it is not possible to properly assess the novelty or relative contributions of the current paper. Beyond an extended discussion of related work, the paper would also benefit from improved experimental analysis. While the paper is framed around the quantum algorithm, the main contributions are described as a novel and efficient classical algorithm. This would indeed be a contribution of interest to the broader (non-quantum) ICLR community, but there is no experimental evidence supporting the utility of the proposed methods. An analysis that compares the classical algorithm to the numerous prior works that parameterize orthogonal layers would be an essential addition. As it stands, I cannot recommend the paper for publication.
train
[ "aMUAoQ0TbCZ", "iH7r4-49c3l", "0ci22s11lQC", "gTe0kQM5SM1" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors introduce a quantum pyramidal circuit to achieve an orthogonal layer of a neural network, which is fast and can maintain orthogonality. The angle gradients are derived. The authors implement the orthogonal NN on simulators and quantum machines to demonstrate the effectiveness. Strength:\n1. The paper g...
[ 1, 6, 5, 6 ]
[ 5, 4, 2, 2 ]
[ "iclr_2022_t7y6MKiyiWx", "iclr_2022_t7y6MKiyiWx", "iclr_2022_t7y6MKiyiWx", "iclr_2022_t7y6MKiyiWx" ]
iclr_2022_tiQ5Zh2S3zV
A multi-domain splitting framework for time-varying graph structure
Graph Signal Processing (GSP) methods are widely used to solve structured data analysis problems under the assumption that the data structure is fixed. In the recent GSP community, anomaly detection on datasets with time-varying structure is an open challenge. To address the anomaly detection problem for datasets with spatial-temporal structure, in this work we propose a novel graph multi-domain splitting framework, called GMDS, which integrates time, vertex, and frequency features to locate anomalies. First, by introducing the discrete wavelet transform on the vertex function, we design a splitting approach that adaptively separates the graph sequences into several sub-sequences. Then, we specifically design an adjacency function in the vertex domain to generate the adjacency matrix adaptively. Finally, by applying the learned graphs to the spectral graph wavelet transform, we design a module to extract vertex features in the frequency domain. To validate the effectiveness of our framework, we apply GMDS to anomaly detection on real traffic-flow and urban datasets and compare its performance with acknowledged baselines. The experimental results show that our proposed framework outperforms all the baselines, which clearly demonstrates the validity of GMDS.
Reject
The authors propose a graph multi-domain splitting framework, called GMDS, to detect anomalies in datasets with temporal information. The reviewers agree that the paper studies an important and interesting problem, but they think that the paper should be improved significantly before being accepted. In particular, the reviewers feel that the authors should provide more technical details and insights on the design of the proposed solution, and that the proposed method should be compared with other (even simple) baselines for the same problem.
train
[ "d34vvc10CVY", "Kt5o9fqetoz", "EON6LrqBJdy", "nge_mOIkXUL", "vxtMWQ1NhOo" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a multi-domain splitting framework for the time varying graph structures, in applications such as traffics in urban areas. The problem is interesting but the paper lacks of some important discussions, and has several limitations. The strengths of the paper lie in the 1) formulating an importan...
[ 3, 1, 1, 5, 3 ]
[ 2, 5, 5, 2, 4 ]
[ "iclr_2022_tiQ5Zh2S3zV", "iclr_2022_tiQ5Zh2S3zV", "iclr_2022_tiQ5Zh2S3zV", "iclr_2022_tiQ5Zh2S3zV", "iclr_2022_tiQ5Zh2S3zV" ]