paper_id            stringlengths   (19 to 21)
paper_title         stringlengths   (8 to 170)
paper_abstract      stringlengths   (8 to 5.01k)
paper_acceptance    stringclasses   (18 values)
meta_review         stringlengths   (29 to 10k)
label               stringclasses   (3 values)
review_ids          list
review_writers      list
review_contents     list
review_ratings      list
review_confidences  list
review_reply_tos    list
iclr_2022_k32ZY1CmE0
How to train RNNs on chaotic data?
Recurrent neural networks (RNNs) are widespread machine learning tools for modeling sequential and time series data. They are notoriously hard to train because their loss gradients backpropagated in time tend to saturate or diverge during training. This is known as the exploding and vanishing gradient problem. Previous solutions to this issue either built on rather complicated, purpose-engineered architectures with gated memory buffers, or - more recently - imposed constraints that ensure convergence to a fixed point or restrict (the eigenspectrum of) the recurrence matrix. Such constraints, however, place severe limitations on the expressivity of the RNN. Essential intrinsic dynamics such as multistability or chaos are disabled. This is inherently at odds with the chaotic nature of many, if not most, time series encountered in nature and society. Here we offer a comprehensive theoretical treatment of this problem by relating the loss gradients during RNN training to the Lyapunov spectrum of RNN-generated orbits. We mathematically prove that RNNs producing stable equilibrium or cyclic behavior have bounded gradients, whereas the gradients of RNNs with chaotic dynamics always diverge. Based on these analyses and insights, we offer an effective yet simple training technique for chaotic data and guidance on how to choose relevant hyperparameters according to the Lyapunov spectrum.
Reject
This paper analyzes the gradient behavior of RNNs in terms of the Lyapunov exponents of their trajectories/orbits, showing that RNNs with cyclic or stable equilibrium dynamics have bounded gradients, but if the dynamics are chaotic the gradients will explode. From these insights, the authors propose an algorithmic remedy for this pathology, which is essentially a teacher-forcing method that periodically projects the observation onto the hidden state during training. A thorough empirical investigation is performed showing the utility of the proposed approach for modeling chaotic data. The reviewers had split opinions on this paper. Some reviewers found value in the theoretical contributions and the connection between Lyapunov exponents and behaviors of the dynamics of recurrent neural networks, while others thought the theoretical framework may have limited practical utility. Several reviewers found the initial experiments to be lacking, though many of their concerns were alleviated after the substantial additions the authors provided during the discussion phase. I believe the observation that exploding gradients are unavoidable when modeling chaotic data is important and would be of significant interest to the broader ICLR community. However, the practical implications of this observation have not been thoroughly described or investigated, and without this perspective, the theoretical results by themselves are much less impactful. In practice, it is usually the case that the ground-truth function is not learned exactly, the time horizon is finite, the gradients are noisy, the data-generating process is opaque, etc. Do these caveats have any bearing on the conclusions? The experiments address some of these questions, but only indirectly, and a more explicit discussion of the practical implications would broaden the impact of the paper.
Along the same lines, the practical utility of the theoretical framework could be further supported if there were some analysis of more varied or additional RNN use-cases. As one reviewer mentioned, I think the ICLR community in particular would appreciate any theoretical or algorithmic insights that might yield improvements on a standard baseline task like seqMNIST, which has served as a point-of-comparison for many alternative methods and which would facilitate comparisons to prior work. Overall, this paper does make some nice and potentially important theoretical insights about training RNNs on chaotic data, and it does include an extensive battery of empirical evaluations; however, the practical implications remain largely unconvincing, and I believe the paper falls just short of the bar for acceptance.
test
[ "H0jqoDVjL28", "8EL1xHLptC-", "nk-zC-WX5E8", "yHT-8AiLmzt", "2htgiTI__uG", "78fdwA4Tw5v", "Y0NQIYjUG7H", "1Huks8RWVdQ", "pSTcTDr5tzY", "quRxtwPDWA5", "b4eDZmVAz8R", "kdGPL05kcMe", "oSNlVrsiJCk", "0PmSmrB0uhR", "CJ4iscKTPYX", "IPQ8P0EJ4gt", "ClOYQTyShqx", "QyUf4hnkErk", "QBsmtcTk4...
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer",...
[ " Dear Referee,\n\nWe would like to make another attempt to come back to your major point. It appears we still may have missed something.\n\nBefore getting there, we once again apologize that one of our remarks was apparently upsetting. It certainly wasn’t meant confrontational and, in our minds, we didn’t even spe...
[ -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 5 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "nk-zC-WX5E8", "iclr_2022_k32ZY1CmE0", "yHT-8AiLmzt", "2htgiTI__uG", "78fdwA4Tw5v", "Y0NQIYjUG7H", "IPQ8P0EJ4gt", "iclr_2022_k32ZY1CmE0", "iclr_2022_k32ZY1CmE0", "b4eDZmVAz8R", "V4IZNt0UnxK", "O3GMEERCcd2", "0PmSmrB0uhR", "CJ4iscKTPYX", "5VQmEEUCQ1p", "ClOYQTyShqx", "QyUf4hnkErk", ...
iclr_2022_Sqv6rs_TRV
WHAT TO DO IF SPARSE REPRESENTATION LEARNING FAILS UNEXPECTEDLY?
Learning physical equations from data is essential for scientific discovery and engineering modeling. However, most existing methods rely on two rules: (1) learn a sparse representation to fit the data and (2) check whether the loss objective function satisfies error thresholds. This paper illustrates that such conditions are far from sufficient. Specifically, we show that sparse non-physical approximations exist with excellent fitting accuracy but fail to adequately model the situation. To fundamentally resolve the data-fitting problem, we propose a physical neural network (PNN) utilizing “Range, Inertia, Symmetry, and Extrapolation” (RISE) constraints. RISE is based on a complete analysis of the generalizability of data properties for physical systems. The first three techniques focus on the definition of physics in space and time. The last technique, extrapolation, is novel and based on active learning without an inquiry, using cross-model validation. We validate the proposed PNN-RISE method on a synthetic dataset, a power system dataset, and a mass-damper system dataset. Numerical results show the universal capability of the PNN-RISE approach to quickly identify the hidden physical models without local optima, opening the door to the fast and highly accurate discovery of physical laws or systems with external loads.
Reject
The topic and ambition of this paper have been judged as important by all reviewers. Yet there is a consensus that the theoretical and experimental contribution is not strong enough to effectively argue for an important novel lead that would justify publication at ICLR. For these reasons, this paper cannot be endorsed for publication at ICLR 2022.
train
[ "w6gs0HNmrWO", "FLVplgjTQq1H", "F-z-uMB5yv1", "JA0EXL03qs2", "7H6X5QTxBJ8" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "A framework for learning succinct and humanly interpretable descriptions of physical systems from data is outlined. The main focus of the approach is eliminating/avoiding local optima, which is achieved through both architectural design of the model and modification of learning algorithm. Empirical evaluations con...
[ 5, 3, 3, 6, 3 ]
[ 3, 3, 3, 4, 4 ]
[ "iclr_2022_Sqv6rs_TRV", "iclr_2022_Sqv6rs_TRV", "iclr_2022_Sqv6rs_TRV", "iclr_2022_Sqv6rs_TRV", "iclr_2022_Sqv6rs_TRV" ]
iclr_2022_GesLOTU_r23
Gradient Explosion and Representation Shrinkage in Infinite Networks
We study deep fully-connected neural networks using the mean field formalism, and carry out a non-perturbative analysis of signal propagation. As a result, we demonstrate that increasing the depth leads to gradient explosion or to another undesirable phenomenon we call representation shrinkage. The appearance of at least one of these problems is not restricted to a specific initialization scheme or a choice of activation function, but rather is an inherent property of the fully-connected architecture itself. Additionally, we show that many popular normalization techniques fail to mitigate these problems. Our method can also be applied to residual networks to guide the choice of initialization variances.
Reject
This work performs a mean field analysis of a certain class of fully connected networks with and without layer normalization. Theory is provided which successfully predicts when some networks will exhibit either exploding gradients, or "representation shrinkage", which is similar to the extreme ordered phase discussed in prior works on signal propagation. The primary concerns raised by reviewers included large overlap with prior works on signal propagation, a bug in the proof of the main theorem, lack of clarity, and many assumptions made in the theory which significantly limit the space of architectures to which the theory can be applied. Some of these concerns were addressed in the rebuttal period; notably, a major flaw in the main theorem was resolved and some concerns on clarity were addressed. However, with the remaining issues (notably overlap with prior work and overly restrictive assumptions), a majority of reviewers did not recommend acceptance in the end. The AC agrees with this final decision and recommends the authors look to further expand upon the contributions relative to prior work.
train
[ "RYe6UeNEZCb", "xgWPBre49uo", "09Ya8u6nYb", "LgjI0QOHd-G", "O8TebFT9umW", "V2yhrwVEbCp", "UzANH_pexf8", "NIo9WODTU5", "s0U_F1PRa9O", "x1CuUm9sJQX", "QtYnP8CzcK", "d8z2wFsVvZk", "LCAZ0ftkW2Y", "4IL0Hh1yOz", "M6u6jNUsDVh", "MCg4jcrXPne", "MgcBBmdTgL" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for that clear derivation of the asymptotic exponents using Taylor expansions around the fixed-point behavior, that is very helpful!", " We would like to thank the reviewer for their engagement, which notably improved the quality of our paper.\n\nBy \"handling data points with non-infinitesimal differ...
[ -1, -1, -1, 5, 5, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 5, 8 ]
[ -1, -1, -1, 3, 4, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "M6u6jNUsDVh", "V2yhrwVEbCp", "s0U_F1PRa9O", "iclr_2022_GesLOTU_r23", "iclr_2022_GesLOTU_r23", "O8TebFT9umW", "iclr_2022_GesLOTU_r23", "LgjI0QOHd-G", "QtYnP8CzcK", "MCg4jcrXPne", "LCAZ0ftkW2Y", "UzANH_pexf8", "4IL0Hh1yOz", "O8TebFT9umW", "MgcBBmdTgL", "iclr_2022_GesLOTU_r23", "iclr_2...
iclr_2022_R0xRE2MU2uA
Graph Piece: Efficiently Generating High-Quality Molecular Graphs with Substructures
Molecular graph generation is a fundamental but challenging task in various applications such as drug discovery and material science, which requires generating valid molecules with desired properties. Auto-regressive models, which usually construct graphs following sequential actions of adding nodes and edges at the atom-level, have made rapid progress in recent years. However, these atom-level models ignore high-frequency subgraphs that not only capture the regularities of atomic combination in molecules but also are often related to desired chemical properties. In this paper, we propose a method to automatically discover such common substructures, which we call graph pieces, from given molecular graphs. Based on graph pieces, we leverage a variational autoencoder to generate molecules in two phases: piece-level graph generation followed by bond completion. Experiments show that our graph piece variational autoencoder achieves better performance over state-of-the-art baselines on property optimization and constrained property optimization tasks with higher computational efficiency.
Reject
While the reviewers appreciated the new methodology and presentation of the paper, they were concerned about the experimental section. Specifically, they wanted to see optimization beyond penalized logP and QED, which are now viewed by the community as toy molecule optimization tasks (e.g., penalized logP can always be improved by just adding a longer chain of carbon atoms). The authors responded that running the GuacaMol tasks in the rebuttal phase would have taken too long because all methods would need to be rerun for all tasks, but this is not true: many methods, e.g., Ahn et al., 2020, have already reported these results and could be directly compared against (as this paper is near state of the art, this would have been a convincing comparison). Another odd aspect of the experimental setup is that the authors compare with Ahn et al., 2020 only for constrained property optimization. However, Ahn et al., 2020 achieves a penalized logP of 31.40, whereas the proposed method only achieves 13.95. It is suspicious that this result is missing from Table 2 of the current paper. If the authors are able to improve their work beyond Ahn et al., 2020 and related recent work on GuacaMol and other real-world tasks, the paper will make a much stronger submission.
train
[ "uDZEXNNOadc", "2zidyWfNb9l", "JgIiE32C6Db", "8oWiQgvThpI", "D8uyHx_Yby", "3mNOgen0xFD", "0O-gm_X3IAa", "bgO7-e3Xq1p", "DJDAR7MKpiG", "VfqfKZM0SgX", "z9H-73ABvjn", "n-6bRzoydX", "J1knh56FC9", "yhGe-UEumQi" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for your reply! We provide further responses to your concerns as follows:\n\n1. About other tasks in the GuacaMol benchmark:\n\nWe choose the distribution-learning part of the GuacaMol benchmark is because the goal-directed part is too time-consuming as none of the baselines have provided their performa...
[ -1, -1, -1, 5, -1, -1, 6, -1, -1, -1, -1, -1, 3, 3 ]
[ -1, -1, -1, 4, -1, -1, 3, -1, -1, -1, -1, -1, 4, 4 ]
[ "D8uyHx_Yby", "JgIiE32C6Db", "bgO7-e3Xq1p", "iclr_2022_R0xRE2MU2uA", "z9H-73ABvjn", "VfqfKZM0SgX", "iclr_2022_R0xRE2MU2uA", "J1knh56FC9", "iclr_2022_R0xRE2MU2uA", "0O-gm_X3IAa", "8oWiQgvThpI", "yhGe-UEumQi", "iclr_2022_R0xRE2MU2uA", "iclr_2022_R0xRE2MU2uA" ]
iclr_2022_oC12z8lkbrU
Generate, Annotate, and Learn: Generative Models Advance Self-Training and Knowledge Distillation
Semi-Supervised Learning (SSL) has seen success in many application domains, but this success often relies on the availability of task-specific unlabeled data. Knowledge distillation (KD) has enabled compressing deep networks, achieving the best results when distilling knowledge on fresh task-specific unlabeled examples. However, task-specific unlabeled data can be challenging to find, especially for NLP problems. We present a simple framework called "generate, annotate, and learn (GAL)" that uses unconditional language models to synthesize in-domain unlabeled data, helping advance SSL and KD on NLP and tabular tasks. To obtain strong task-specific generative models, we either fine-tune a large language model (LLM) on inputs from specific tasks, or prompt an LLM with a few input examples to generate more unlabeled examples. Then, we use existing classifiers to annotate generated unlabeled examples with pseudo labels, which are used as additional training data or as additional prompts. GAL improves prompt-based few-shot learning on several NLP tasks. It also yields a new state-of-the-art for 6-layer transformers on the GLUE leaderboard. Finally, self-training with GAL offers large gains on four tabular tasks from the UCI repository.
Reject
To overcome the challenge of lacking task-specific unlabeled data in semi-supervised learning (SSL) or knowledge-distillation (KD) tasks, this paper presents a new framework called "generate, annotate, and learn (GAL)" that uses unconditional language models to synthesize in-domain unlabeled data to advance SSL and KD. Extensive experiments on both NLP and tabular tasks demonstrate positive results of the proposed method. Reviewers generally agree on several key strengths of the paper, e.g., the paper is well-written, the literature review is comprehensive, and the experimental results are generally positive (the improvements over the standard baselines on the GLUE benchmark look solid despite not being very significant). On the negative side, some reviewers did raise major concerns about the novelty of the proposed framework and the lack of strong baselines for comparison. For example, the proposed GAL framework doesn't seem particularly novel, as neither of the proposed components is new, and the key value of the work seems to be evaluating the large LM's ability to generate good in-domain unlabeled data (as agreed by the authors). Therefore, it is very important to compare with other existing data augmentation baselines in the empirical studies. While the authors did try to add one round-trip-translation (RT) data augmentation baseline for comparison during the rebuttal, stronger SOTA data augmentation baselines should be compared. Overall, this is a good paper which is worthy of publication in the near future, but it still needs more work on a more extensive comparison with additional baselines and improvements to the writing of the novelty and contribution claims.
train
[ "HAchQiR0f6X", "V31zmRnT4Lb", "X9ChBTfYgCe", "XBWs1YLqlhU", "h_0iLcWUXN", "srlHHg-olpR", "i07bsyCInp", "LGcbOk7PLz2", "3-B2XZbgRON", "VI_4MSJamjt", "gbtcKIbVwVj", "qMIp_OuxaD", "Qc71bvKwnoR" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors propose a new framework of data augmentation based on generative models. The idea is to generate some new samples and then to classify them in an unsupervised manner to improve a student model. The authors provide an extreme experiment where the sentences are composed of numerical data raw extracted fr...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 8 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "iclr_2022_oC12z8lkbrU", "h_0iLcWUXN", "XBWs1YLqlhU", "i07bsyCInp", "3-B2XZbgRON", "Qc71bvKwnoR", "qMIp_OuxaD", "HAchQiR0f6X", "gbtcKIbVwVj", "iclr_2022_oC12z8lkbrU", "iclr_2022_oC12z8lkbrU", "iclr_2022_oC12z8lkbrU", "iclr_2022_oC12z8lkbrU" ]
iclr_2022_kUtux8k0G6y
Avoiding Robust Misclassifications for Improved Robustness without Accuracy Loss
While current methods for training robust deep learning models optimize robust accuracy, in practice, the resulting models are often both robust and inaccurate on numerous samples, providing a false sense of safety for those. Further, they significantly reduce natural accuracy, which hinders the adoption in practice. In this work, we address both of these challenges by extending prior works in three main directions. First, we propose a new training method that jointly maximizes robust accuracy and minimizes robust inaccuracy. Second, since the resulting models are trained to be robust only if they are accurate, we leverage robustness as a principled abstain mechanism. Finally, this abstain mechanism allows us to combine models in a compositional architecture that significantly boosts overall robustness without sacrificing accuracy. We demonstrate the effectiveness of our approach to both empirical and certified robustness on six recent state-of-the-art models and using several datasets. Our results show that our method effectively reduces robust and inaccurate samples by up to 97.28%. Further, it successfully enhanced the $\epsilon_\infty = 1/255$ robustness of a state-of-the-art model from 26% to 86% while only marginally reducing its natural accuracy from 97.8% to 97.6%.
Reject
This paper presents the problem of robust inaccuracy (model predictions being robust to perturbations but inaccurate on datapoints), and presents methods to maximize robustness while avoiding robust inaccuracy. Furthermore, the authors develop an abstention mechanism based on robustness to prevent prediction on points where the model is not robust. Results show improvement in adversarial robustness to standard attacks with only a small reduction in natural accuracy. Reviewers were mixed on the clarity and importance of this submission. A major concern raised was on the importance of robust inaccuracy, the motivation for avoiding it, and the novelty of the proposed method. Other abstention mechanisms are available, and one does not solely need to rely on robustness. Additionally, results are often presented on a Pareto front, and the method does not strictly dominate prior approaches. The authors addressed many of the clarity concerns in their updated revision, and reviewers commented on the high quality of the analysis performed in the experiments. But several reviewers still found the draft and description of the robust inaccuracy problem insufficiently motivated and the methodology not well explained. Given lingering concerns over clarity and motivation (in spite of a revision that exceeds the page limit), I cannot recommend this paper for acceptance.
train
[ "zr4fdhhoU1M", "4U-V8vmyGZb", "cibcfXhoUZG", "6UDBkL97xj", "juLOVM2nROK", "f5AYH3wk5s", "FnTjTe_BaDh", "RCpZ9VJGWjD", "WIaImGDer8f", "ginqjzskreZ", "X3Xz9hKs60", "IipQiZS6tdi" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I'd like to thank the authors for the response and the updated version of the paper. I would like to acknowledge that the structure/presentation of the paper structure have been improved.", " Dear reviewers, we would like to thank you for all your comments and suggestions.\nWe have uploaded a revised version of...
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 5, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 3 ]
[ "f5AYH3wk5s", "iclr_2022_kUtux8k0G6y", "6UDBkL97xj", "RCpZ9VJGWjD", "WIaImGDer8f", "ginqjzskreZ", "X3Xz9hKs60", "IipQiZS6tdi", "iclr_2022_kUtux8k0G6y", "iclr_2022_kUtux8k0G6y", "iclr_2022_kUtux8k0G6y", "iclr_2022_kUtux8k0G6y" ]
iclr_2022_ZnUwk6i_iTR
Symmetric Machine Theory of Mind
Theory of mind (ToM), the ability to understand others' thoughts and desires, is a cornerstone of human intelligence. Because of this, a number of previous works have attempted to measure the ability of machines to develop a theory of mind, with one agent attempting to understand another's internal "mental state". However, ToM agents are often tested as passive observers or in tasks with specific predefined roles, such as speaker-listener scenarios. In this work, we propose to model machine theory of mind in a more flexible and symmetric scenario: a multi-agent environment, SymmToM, where all agents can speak, listen, see other agents, and move freely through a grid world. An effective strategy to solve SymmToM requires developing theory of mind to maximize each agent's rewards. We show that multi-agent deep-reinforcement learning models that model the mental states of other agents achieve significant performance improvements over agents with no such ToM model. At the same time, our best agents fail to achieve performance comparable to agents with access to the gold-standard mental state of other agents, demonstrating that the modeling of theory of mind in multi-agent scenarios is very much an open challenge.
Reject
The paper introduces a theory of mind benchmark. This paper certainly improved during the discussion period. However, the paper is still incomplete. The authors are still working on the related work (the paper was not updated in this regard). The experiments still need significant work. The original submission used only 3 runs (very, very low). Although the authors bumped up the number of runs, the learning curves in the appendix feature very large and overlapping error bars, and the main table of results presented in the paper contains no measures of certainty; those are reported in a separate table in the appendix, making comparison tedious. The paper has a fairly informal approach to dealing with hyperparameters that should be discussed and improved. The reviewers pointed out (in their reviews and dialog with the authors) several ways the experiments should be extended. The contribution of the benchmark is evaluated primarily via experiments; much work needs to be done before acceptance.
train
[ "OjlZn6IpPmD", "LK45CkuBlqn", "fYMl0uqIk1", "J8kBtKO5ICr", "fZ6_Nz-dYO", "EmRW_AR0T9", "WT-V_bYfp3e", "EkWbPXdVIlG", "QnFPkHFoM-", "IqwbaRpFdw", "1moOMYptWhK", "ZHRoz1SXOWd", "VeyDIduxKCC", "P9tFgi63Qfx", "quwmUYuhZ8g" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper presents a multi-agent environment and task, termed SymmToM, for analyzing machine theory of mind emerged from multi-agent RL training. In SymmToM, agents can see and act in a 2D grid world as well as share information about the world state through communication. Crucially, all agents have the same phys...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 3, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 4, 4 ]
[ "iclr_2022_ZnUwk6i_iTR", "QnFPkHFoM-", "EkWbPXdVIlG", "WT-V_bYfp3e", "IqwbaRpFdw", "iclr_2022_ZnUwk6i_iTR", "VeyDIduxKCC", "P9tFgi63Qfx", "OjlZn6IpPmD", "quwmUYuhZ8g", "ZHRoz1SXOWd", "iclr_2022_ZnUwk6i_iTR", "iclr_2022_ZnUwk6i_iTR", "iclr_2022_ZnUwk6i_iTR", "iclr_2022_ZnUwk6i_iTR" ]
iclr_2022_yx_uIzoHJv
Effect of Pressure for Compositionality on Language Emergence
Humans can use natural languages compositionally, where complicated concepts are expressed using expressions grounded in simpler concepts. Hence, it has been argued that compositionality increases the ability to generalize. This behavior is acquired during natural language learning. Natural languages contain a large number of compositional phrases that function as examples of how to construct compositional expressions for human learners. However, in language emergence, neural agents do not have access to such compositional language expressions. This can be circumvented by optimizing a suitably devised metric of compositionality, which does not require supervising examples. In this paper, we present a learning environment where agents are pressured to make their emerging languages compositional by incorporating a metric of topological similarity into the loss function. We observe that when this pressure is carefully adjusted, agents can achieve higher generalization. The optimal level of this pressure is highly dependent on the agent architecture, input, and structure of the message space. However, we find no simple correlation between high compositionality and generalization. The advantage offered by compositional pressure is situational. We observe instances where moderately compositional languages show generalization behavior comparable to some highly compositional ones.
Reject
This work presents a proposal for increasing the compositionality of emergent languages that uses a measure of topographic similarity as an auxiliary loss function on the communication game. The authors find that in certain cases this loss indeed results in increased generalization, but overall the authors do not find a strong relation between high weights for the compositionality loss and generalization. All reviewers agree that the topic of compositionality is very important, and the idea of explicitly optimizing for compositionality (through the topographic similarity metric) is also novel. At the same time, a number of concerns are raised by reviewers: a) B3Jo and WEY2 raise the point that more evidence is required to establish the robustness of the current findings, e.g., by controlling for whether topsim is merely inducing a regularization behaviour and providing more confidence in the currently presented results (e.g., Figure 2 currently provides a somewhat perplexing picture, as the additional loss doesn't seem to improve across the board). b) The relation between compositionality and generalization is not a new one, and it is not clear what exactly the current paper is adding to this discussion; as zB6g and B3Jo point out, this makes it seem rather incremental. c) The paper is currently somewhat hard to follow, with numerous results reported in a somewhat raw format, little to no examples, and important details being presented only in the Appendix (e.g., the loss function is only given on p12 and the actual format of L_{C} is never provided). As such, I cannot recommend acceptance at this time but, given the importance of the topic, I sincerely hope the authors will work on incorporating reviewers' feedback for a later resubmission.
train
[ "p5m7xkCHcwR", "r1f8Z3l5pv0", "oUEzMCWeV5M", "mdqA_HkvfH", "O7tRW7bYC_H", "v5iqdqhzavH", "h1b75Bs_8RF", "NvHaalaORZ9", "u5Jx2dYqt4U", "8fTBZqy0zb4", "u3_pvhAhO0p", "EKY-Uh1NIf", "oLo2IL7pXNa", "WZKPUBUX6S0", "3REHgIU-1Gh", "GGAT345ydu", "DRmnNXG-nX4", "JGA8VI4BeeL", "ddS_QbiYLGg"...
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", ...
[ " First, we highly thank the reviewer for their valuable replies. We will explain our stance and address the reviewers' concerns below.\n\nIn the current draft, we only measure compositionality for the cases where we do not employ any type of compositionality loss on the discrete link, which we denote as the baseli...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, 4, 4, 2 ]
[ "IOST4FEJ1g-", "oUEzMCWeV5M", "h1b75Bs_8RF", "u5Jx2dYqt4U", "uQDmHZeg9UK", "uQDmHZeg9UK", "uQDmHZeg9UK", "ZLXSLBK-uVa", "AwgEz_Rilh0", "plQVkIARKBJ", "plQVkIARKBJ", "plQVkIARKBJ", "plQVkIARKBJ", "JbgX3HOovsc", "GGAT345ydu", "DRmnNXG-nX4", "JGA8VI4BeeL", "AQyqVH2vdpV", "NyCpAusmsE...
iclr_2022_qaQ8kUBYhEK
Spectral Multiplicity Entails Sample-wise Multiple Descent
In this paper, we study the generalization risk of ridge and ridgeless linear regression. We assume that the data features follow a multivariate normal distribution and that the spectrum of the covariance matrix consists of a given set of eigenvalues of proportionally growing multiplicity. We characterize the limiting bias and variance when the dimension and the number of training samples tend to infinity proportionally. Exact formulae for the bias and variance are derived using random matrix theory and the convex Gaussian min-max theorem. Based on these formulae, we study the sample-wise multiple descent phenomenon of the generalization risk curve, i.e., with more data, the generalization risk can be non-monotone; specifically, it can increase and then decrease multiple times as more training samples are added. We prove that sample-wise multiple descent occurs when the spectrum of the covariance matrix is highly ill-conditioned. We also present numerical results to confirm the values of the bias and variance predicted by our theory and illustrate the multiple descent of the generalization risk curve. Moreover, we theoretically show that the ridge estimator with optimal regularization can result in a monotone generalization risk curve and thereby eliminate multiple descent under some assumptions.
Reject
The authors analyze linear regression with Gaussian covariates in an asymptotic setting, where the number of examples and the number of covariates go to infinity together. They identify conditions on the covariance under which "multiple descent" occurs, and conditions under which regularization removes this effect. Concerns were raised that the overlap between this paper and previous research was too substantial for it to be published at ICLR. These persisted after the authors' response and the discussion period.
train
[ "cjrfkiMcVI", "5HTOQYoF6Kv", "AzM40ae6qxm", "k6K3a3bwaaT", "fqAfymfHlBC", "3Q-36vYaExJ", "Q1g8kEobQjF", "2a4vQNSThgF", "Zm5ui6Daks6", "qb0PNr557WV" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for responding to my questions and for incorporating the suggestions in the revised draft. The reference I had in mind showing sample-wise multiple descent is this one: https://arxiv.org/abs/1912.07242. I realize now that it only shows double descent and not multiple descent.\n\nOverall, the paper's con...
[ -1, -1, -1, -1, -1, -1, 8, 6, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, 2, 4, 2 ]
[ "AzM40ae6qxm", "3Q-36vYaExJ", "qb0PNr557WV", "2a4vQNSThgF", "Q1g8kEobQjF", "Zm5ui6Daks6", "iclr_2022_qaQ8kUBYhEK", "iclr_2022_qaQ8kUBYhEK", "iclr_2022_qaQ8kUBYhEK", "iclr_2022_qaQ8kUBYhEK" ]
iclr_2022_6PlIkYUK9As
Less data is more: Selecting informative and diverse subsets with balancing constraints
Deep learning has yielded extraordinary results in vision and natural language processing, but this achievement comes at a cost. Most models require enormous resources during training, both in terms of computation and in human labeling effort. We show that we can identify informative and diverse subsets of data that lead to deep learning models with similar performance as the ones trained with the original dataset. Prior methods have exploited diversity and uncertainty in submodular objective functions for choosing subsets. In addition to these measures, we show that balancing constraints on predicted class labels and decision boundaries are beneficial. We propose a novel formulation of these constraints using matroids, an algebraic structure that generalizes linear independence in vector spaces, and present an efficient greedy algorithm with constant approximation guarantees. We outperform competing baselines on standard classification datasets such as CIFAR-10, CIFAR-100, ImageNet, as well as long-tailed datasets such as CIFAR-100-LT.
Reject
The paper studied an interesting yet challenging problem in active learning and provided an intuitive heuristic for selecting informative subset(s) of training examples. The reviewers generally find the paper well presented and highlight the clarity of the exposition of the issues with existing query heuristics, especially for training deep models on class-imbalanced data. However, there are shared concerns among all reviewers about whether the existing experiments sufficiently justify the practical significance of the proposed heuristic (Reviewer 4ATq: missing comparison against important baselines such as Gal et al 2017; Reviewer Cp2k: ablations of class and boundary balancing; Reviewer yngU: lack of comparison to SOTA and ablations for important hyperparameters; Reviewer oEcZ: lack of comparison against SOTA). Reviewers also point out that the approximation guarantee is against an algorithm that is optimal with respect to a somewhat ad-hoc objective, which makes the theoretical components of the paper less significant. Given the above concerns, the paper does not appear to be ready for acceptance at the current stage.
train
[ "PJ9BUWVuNOE", "K4_IG4XhSZw", "i7lxqVfmhlg", "Bqch2Dl9L8B", "0WWcyv-8x-D", "nHv2kBBG4ak", "pK0FZ_oO_A", "JONzkhrmv0", "oMCnjYBEvwx", "Z_z8U6en3DV", "m4C2kxF3bpH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Most batch-mode active learning strategies involve maximizing a sub-modular score function of the value of each image to be labeled. This paper demonstrates that current methods fail to sample diverse classes or images near decision boundaries, arguably one of the most important regions to obtain label information...
[ 5, -1, -1, -1, -1, -1, -1, -1, 3, 3, 5 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2022_6PlIkYUK9As", "Bqch2Dl9L8B", "nHv2kBBG4ak", "m4C2kxF3bpH", "oMCnjYBEvwx", "Z_z8U6en3DV", "PJ9BUWVuNOE", "0WWcyv-8x-D", "iclr_2022_6PlIkYUK9As", "iclr_2022_6PlIkYUK9As", "iclr_2022_6PlIkYUK9As" ]
iclr_2022_UECzHrGio7i
Robust Imitation Learning from Corrupted Demonstrations
We consider offline Imitation Learning from corrupted demonstrations where a constant fraction of the data can be noise or even arbitrary outliers. Classical approaches such as Behavior Cloning assume that demonstrations are collected by a presumably optimal expert, and hence may fail drastically when learning from corrupted demonstrations. We propose a novel robust algorithm by minimizing a Median-of-Means (MOM) objective which guarantees the accurate estimation of the policy, even in the presence of a constant fraction of outliers. Our theoretical analysis shows that our robust method in the corrupted setting enjoys nearly the same error scaling and sample complexity guarantees as classical Behavior Cloning in the expert demonstration setting. Our experiments on continuous-control benchmarks validate that existing algorithms are fragile under corrupted demonstrations while our method exhibits the predicted robustness and effectiveness.
Reject
This submission addressed the offline imitation learning problem with non-optimal demonstrations. The AC went through the draft, reviews, and replies. The AC agrees with all reviewers that the mathematical analysis, empirical evaluation, and general quality of writing haven't reached the bar of ICLR papers.
test
[ "uAXAXVPEVk", "WfRBYeFG19C", "6C8woeWg7KO", "HFKbFUXvTP", "yD0r-_ZEq6z", "uba9o3cCmM0", "RoumQlgtM7d", "VRTd4wpL7R", "RIAx8jW4wLR" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Authors,\n\nThank you for your response. Thank you for adding additional experiments and clarifications to the paper, these have indeed strengthened the paper. \n\nYou have however also updated some of the prose which I fundamentally do not agree with. For instance, you have added: \"We note that Definitio...
[ -1, -1, -1, -1, -1, 5, 3, 5, 3 ]
[ -1, -1, -1, -1, -1, 3, 4, 3, 4 ]
[ "WfRBYeFG19C", "RIAx8jW4wLR", "VRTd4wpL7R", "RoumQlgtM7d", "uba9o3cCmM0", "iclr_2022_UECzHrGio7i", "iclr_2022_UECzHrGio7i", "iclr_2022_UECzHrGio7i", "iclr_2022_UECzHrGio7i" ]
iclr_2022_eqaxDZg4MHw
Understanding the Generalization Gap in Visual Reinforcement Learning
Deep Reinforcement Learning (RL) agents have achieved superhuman performance on several video game suites. However, unlike humans, the trained policies fail to transfer between related games or even between different levels of the same game. Recent works have attempted to reduce this generalization gap using ideas such as data augmentation and learning domain-invariant features. However, the transfer performance still remains unsatisfactory. In this work, we use procedurally generated video games to empirically investigate several hypotheses to explain the lack of transfer. We also show that simple auxiliary tasks can improve the generalization of policies. Contrary to the belief that policy adaptation to new levels requires full policy finetuning, we find that visual features transfer across levels, and only the parameters that use these visual features to predict actions require finetuning. Finally, to inform fruitful avenues for future research, we construct simple oracle methods that close the generalization gap.
Reject
This paper presents an empirical study of generalization in visual reinforcement learning. This study is carried out in the domain of video games and it addresses the benefits of techniques such as regularization, augmentation and training with auxiliary tasks. The reviewers for this submission were positive about the goal and setups in this paper. They agreed that understanding why present-day methods that attempt to improve generalization continue to fall short is an important problem. However, most reviewers were underwhelmed by the findings presented in the submission. As examples: Reviewer 185P mentions that "the paper does not seem to provide a clear and definite answer to the question" and "I am not convinced the experiments described in this paper support the claims made by the authors.", and Reviewer SFef mentions that "Most of the conclusions are already known". Some reviewers also found a lack of clarity and several typos in the initial submission. The authors have provided detailed responses to the reviewers. In particular they have fixed most writing issues. They also detailed why certain algorithms and techniques were benchmarked in this submission and others were left out. I think this is reasonable. One cannot expect a paper to benchmark every algorithm out there, and choosing promising and representative ones is sufficient. My takeaway from the detailed discussions about this paper is that the paper is much improved from a writing point of view and the rebuttal addresses some concerns well. However, I do agree with the reviewers that the findings presented in the paper are for the most part expected. This reduces the value of the paper to readers. When this is the case, it may be beneficial to dig deeper into these findings and present a narrow but deep analysis. Please see Reviewer 185P's suggestions in this regard.
Given the above, I am recommending rejection for this conference, but I encourage the authors to take the reviewers' suggestions into account and resubmit.
train
[ "jliW0Z0HDe9", "Lrbme9K5-7_", "4UGBuYpyCS", "-_HloaYWB8j", "Il8iQGSsJAe", "pIT4iCxcZI", "GeGs64EKW4g", "IpM2VV2nqcv", "8OeAMH6Ho7w", "Afn5us1MTi4", "4KBPeOn_iJ", "xDL_d_ffvsl", "mjNWTJoTmL7", "dw0-6MT_-kj", "yIhL7Dp29Ce" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I checked the rebuttal carefully. There are several unconvincing points in the rebuttal. Here are two examples:\n(1) The rebuttal states that using inverse models is under-appreciated. Inverse models have been used extensively in different contexts. So they are not under-appreciated.\n(2) The rebuttal states that...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 3, 4 ]
[ "4KBPeOn_iJ", "Afn5us1MTi4", "IpM2VV2nqcv", "yIhL7Dp29Ce", "dw0-6MT_-kj", "xDL_d_ffvsl", "mjNWTJoTmL7", "8OeAMH6Ho7w", "GeGs64EKW4g", "pIT4iCxcZI", "-_HloaYWB8j", "iclr_2022_eqaxDZg4MHw", "iclr_2022_eqaxDZg4MHw", "iclr_2022_eqaxDZg4MHw", "iclr_2022_eqaxDZg4MHw" ]
iclr_2022_OhytAdNSzO-
An Investigation on Hardware-Aware Vision Transformer Scaling
Vision Transformer (ViT) has demonstrated promising performance in various computer vision tasks, and recently attracted a lot of research attention. Many recent works have focused on proposing new architectures to improve ViT and deploying it into real-world applications. However, little effort has been made to analyze and understand ViT's architecture design space and its implications for hardware cost on different devices. In this work, by simply scaling ViT's depth, width, input size, and other basic configurations, we show that a scaled vanilla ViT model without bells and whistles can achieve a comparable or superior accuracy-efficiency trade-off relative to most of the latest ViT variants. Specifically, compared to DeiT-Tiny, our scaled model achieves a $\uparrow1.9\%$ higher ImageNet top-1 accuracy under the same FLOPs and a $\uparrow3.7\%$ better ImageNet top-1 accuracy under the same latency on an NVIDIA Edge GPU TX2. Motivated by this, we further investigate the extracted scaling strategies from the following two aspects: (1) "can these scaling strategies be transferred across different real hardware devices?"; and (2) "can these scaling strategies be transferred to different ViT variants and tasks?". For (1), our exploration, based on various devices with different resource budgets, indicates that the transferability effectiveness depends on the underlying device together with its corresponding deployment tool; for (2), we validate the effective transferability of the aforementioned scaling strategies, obtained from a vanilla ViT model on an image classification task, to the PiT model, a strong ViT variant targeting efficiency, as well as to object detection and video classification tasks.
In particular, when transferred to PiT, our scaling strategies boost the ImageNet top-1 accuracy from $74.6\%$ to $76.7\%$ ($\uparrow2.1\%$) under the same 0.7G FLOPs; and when transferred to the COCO object detection task, the average precision is boosted by $\uparrow0.7\%$ under a similar throughput on a V100 GPU.
Reject
This paper explores strategies for scaling vision transformers that can be transferable across hardware devices and ViT variants. While it presents some interesting observations as well as a useful practical guide, multiple reviewers expressed major concerns over the novelty and significance of the methods and findings. Besides novelty and significance, there are also some concerns about comparison with existing work as well as clarity of the presentation.
val
[ "1Ks7Js7Qq5p", "ERPiV1ArA4m", "SVkeebXh4w", "0vPIkO8kys2", "g1ZJUNz1IZC", "TUsKV6QZjg1", "20WjhiZsk7b", "3f2USQqe1JD", "MTirErbcFQw", "YDZnw8GwGo", "GsnH291vdEr", "hnmBkrAeYLM" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a model scaling strategy to change the model size. With a greedy search strategy, the author proposes a hardware-aware scaling method. Compared with the existing ViT variants, the proposed method achieves better performance. Strength:\n\n-This paper analyzes different scaling strategies of visi...
[ 5, -1, -1, -1, 3, -1, -1, -1, -1, -1, 3, 6 ]
[ 3, -1, -1, -1, 3, -1, -1, -1, -1, -1, 3, 5 ]
[ "iclr_2022_OhytAdNSzO-", "0vPIkO8kys2", "ERPiV1ArA4m", "MTirErbcFQw", "iclr_2022_OhytAdNSzO-", "20WjhiZsk7b", "hnmBkrAeYLM", "1Ks7Js7Qq5p", "g1ZJUNz1IZC", "GsnH291vdEr", "iclr_2022_OhytAdNSzO-", "iclr_2022_OhytAdNSzO-" ]
iclr_2022_l431c_2eGO2
Mix-MaxEnt: Creating High Entropy Barriers To Improve Accuracy and Uncertainty Estimates of Deterministic Neural Networks
We propose an extremely simple approach to regularize a single deterministic neural network to obtain improved accuracy and reliable uncertainty estimates. Our approach, on top of the cross-entropy loss, simply puts an entropy maximization regularizer corresponding to the predictive distribution in the regions of the embedding space between the class clusters. This is achieved by synthetically generating between-cluster samples via the convex combination of two images from different classes and maximizing the entropy on these samples. Such a data-dependent regularization guides the maximum likelihood estimation to prefer a solution that (1) maps out-of-distribution samples to high-entropy regions (creating an entropy barrier); and (2) is more robust to superficial input perturbations. Via extensive experiments on real-world datasets (CIFAR-10 and CIFAR-100) using ResNet and Wide-ResNet architectures, we demonstrate that Mix-MaxEnt consistently provides much improved classification accuracy, better calibrated probabilities for in-distribution data, and reliable uncertainty estimates when exposed to situations involving domain shift and out-of-distribution samples.
Reject
This paper proposes a new regularizer, based on entropy maximization of samples near the decision boundary, to improve the calibration of neural networks while maintaining their accuracy. The method seems simple, sufficiently novel, and has promising results. However, based on the review process (described below), I feel the paper needs to significantly improve its evaluation and presentation before it can be accepted. The review process summary: * Two reviews were eventually weakly positive about the paper: without major concerns, but not enthusiastic. * One review (L8Yz) was not sufficiently informative. * One review (ESue) raised many points. I disagreed with most of these points, following the authors' discussion. However, a few points seemed valid, such as the not-so-impressive performance for OOD detection, which the authors did not address. * I therefore asked for an additional review (iva2). The review concluded the paper is interesting and potentially useful, but requires another round of revision before it can be accepted, mainly because of its clarity and missing comparisons. I agree with these conclusions.
train
[ "3KWt4BnSAq", "tPX2ArR_YU", "yDg3x9wq0hu", "j_zmQBCh-QB", "jb4QAS9P2uF", "-Pz5XyX2UU1", "1xVgUnEQaSS", "dAc3fzZe33A", "SeOGWPKzH2K", "QvKcDYpPWqL", "f2WHw3e8P0Y", "LqLAzK5p_Qq", "x9Szz6Je7i", "KoRLFod-PZ5", "9TBGeioOLQa", "Ht7zgVWqHKF", "jH91paSOjsE", "c5nom5QGrSr", "8Av2bqCvp2y"...
[ "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "a...
[ " **Q: “For instance, the authors can show that using a more relaxed beta distribution (e.g., α=β=0.95 such that λ has a higher probability to get values near 0.5) does not reproduce results with Mixup alone to strengthen the novelty of this work.”**\n \nThis type of analysis is one of the first preliminary analyse...
[ -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, 6, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 4, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "tPX2ArR_YU", "yDg3x9wq0hu", "-Pz5XyX2UU1", "jb4QAS9P2uF", "-Pz5XyX2UU1", "iclr_2022_l431c_2eGO2", "Ax0zmWSdXaJ", "f2WHw3e8P0Y", "f2WHw3e8P0Y", "EW7y4d0ItEH", "9TBGeioOLQa", "KoRLFod-PZ5", "iclr_2022_l431c_2eGO2", "oAx51TfyIOe", "iclr_2022_l431c_2eGO2", "9TBGeioOLQa", "LiBIr5suJKD", ...
iclr_2022_RQIvNJDHwy
Improving Neural Network Generalization via Promoting Within-Layer Diversity
Neural networks are composed of multiple layers arranged in a hierarchical structure jointly trained with a gradient-based optimization, where the errors are back-propagated from the last layer back to the first one. At each optimization step, neurons at a given layer receive feedback from neurons belonging to higher layers of the hierarchy. In this paper, we propose to complement this traditional 'between-layer' feedback with additional 'within-layer' feedback to encourage the diversity of the activations within the same layer. To this end, we measure the pairwise similarity between the outputs of the neurons and use it to model the layer's overall diversity. By penalizing similarities and promoting diversity, we encourage each unit within the layer to learn a distinctive representation and, thus, to enrich the data representation learned and to increase the total capacity of the model. We theoretically study how the within-layer activation diversity affects the generalization performance of a neural network and prove that increasing the diversity of hidden activations reduces the estimation error. In addition to the theoretical guarantees, we present an extensive empirical study confirming that the proposed approach enhances the performance of state-of-the-art neural network models and decreases the generalization gap in multiple tasks.
Reject
This work proposes an approach to encourage within-layer diversity in neuron activations, and derive a generalization bound meant to motivate their approach. None of the reviewers support the acceptance of this work, despite the authors' detailed rebuttals, with the majority of reviewers confirming their preference for rejection following the author response. Many raised concerns regarding the value of the accompanying theory. The empirical results demonstrated by the proposed regularizer were also not judged to be sufficiently compelling to compensate for the shortfall on the theory side. I unfortunately could not find a good reason to dissent from the reviewers majority opinion, and therefore also recommend rejection at this time.
train
[ "dAaolLKijrD", "nhl3DO4dSS8", "nQhrb1ziLRd", "s2vrTUf87jx", "o3TjKAI3jdr", "yYoglwAbSUa", "AHSz8_pifKI", "Fle2udBitjn", "ysLBlbjZYah", "DCP_0n2cVtU", "iplCeK4Lm4Q", "DLOVJ3vb7T", "MAdvbBTgu3o", "rzF4q4tq3zs" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for their explanation and the revised draft. Still I think the theoretical part is quite disjoint from the empirical part. I would like to see the connection of Theorem 4 with the results of Golowich et al. I would keep the ratings unchanged.", " **Theoretical Significance:** \...
[ -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, 5, 3, 5, 3 ]
[ -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "Fle2udBitjn", "nQhrb1ziLRd", "DCP_0n2cVtU", "iclr_2022_RQIvNJDHwy", "iclr_2022_RQIvNJDHwy", "s2vrTUf87jx", "rzF4q4tq3zs", "MAdvbBTgu3o", "DLOVJ3vb7T", "iplCeK4Lm4Q", "iclr_2022_RQIvNJDHwy", "iclr_2022_RQIvNJDHwy", "iclr_2022_RQIvNJDHwy", "iclr_2022_RQIvNJDHwy" ]
iclr_2022__ysluXvD1M
Equal Experience in Recommender Systems
We explore the fairness issue that arises in recommender systems. Biased data due to inherent stereotypes of particular groups (e.g., male students' average rating on mathematics is often higher than that on humanities, and vice versa for females) may yield a limited scope of suggested items to a certain group of users. Our main contribution lies in the introduction of a novel fairness notion (that we call equal experience), which can serve to regulate such unfairness in the presence of biased data. The notion captures the degree of the equal experience of item recommendations across distinct groups. We propose an optimization framework that incorporates the fairness notion as a regularization term, as well as introduce computationally-efficient algorithms that solve the optimization. Experiments on synthetic and benchmark real datasets demonstrate that the proposed framework can indeed mitigate such unfairness while exhibiting a minor degradation of recommendation accuracy.
Reject
This paper introduces a new (un)fairness metric for recommender systems based on mutual information and then develops an algorithm to account for this metric in matrix factorization-based collaborative filtering. The reviewers all agree that the proposed metric and algorithm are sound at a technical level; however, they have concerns regarding the motivation of the introduced metric as well as the experimental evaluation. The rebuttal by the authors did not persuade the reviewers to reconsider their original assessment, and they still argued that their concerns remained. In the final recommendation, the simplicity of the metric was not seen as a weakness of the work.
train
[ "olDEwFr4v-", "o9Y_kZFmxnZ", "L5eCOI3TUIF", "hG5OwELN3MB", "7J0_qkqx0J-", "TECFkWhFGda", "gHqJvBPby6N", "xQ_24pVfBoD" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to express our sincere gratitude for the constructive feedback and detailed suggestions, which helped us improve the manuscript. We provide point-by-point replies below.\n\n[1-1] (*The motivation of the proposed fairness notion*): As the reviewer commented, in general the population exhibits a high...
[ -1, -1, -1, -1, 6, 5, 6, 3 ]
[ -1, -1, -1, -1, 3, 4, 4, 3 ]
[ "TECFkWhFGda", "gHqJvBPby6N", "xQ_24pVfBoD", "7J0_qkqx0J-", "iclr_2022__ysluXvD1M", "iclr_2022__ysluXvD1M", "iclr_2022__ysluXvD1M", "iclr_2022__ysluXvD1M" ]
iclr_2022_J7V_4aauV6B
Understanding and Scheduling Weight Decay
Weight decay is a popular and even necessary regularization technique for training deep neural networks that generalize well. Previous work usually interpreted weight decay as a Gaussian prior from the Bayesian perspective. However, weight decay sometimes shows mysterious behaviors beyond the conventional understanding. For example, the optimal weight decay value tends to be zero given long enough training time. Moreover, existing work typically failed to recognize the importance of scheduling weight decay during training. Our work aims at theoretically understanding novel behaviors of weight decay and designing schedulers for weight decay in deep learning. This paper mainly has three contributions. First, we propose a novel theoretical interpretation of weight decay from the perspective of learning dynamics. Second, we propose a novel weight-decay linear scaling rule for large-batch training that proportionally increases weight decay rather than the learning rate as the batch size increases. Third, we provide an effective learning-rate-aware scheduler for weight decay, called the Stable Weight Decay (SWD) method, which, to the best of our knowledge, is the first practical design for weight decay scheduling. In our various experiments, the SWD method often makes improvements over $L_{2}$ Regularization and Decoupled Weight Decay.
Reject
This paper analyzes the effects of the weight decay hyperparameter, and based on this analysis, proposes methods to schedule the weight decay. Overall, while I'm glad that more work is being done on understanding the effects of weight decay, I don't think this submission is of sufficient quality for ICLR. Theorem 1 is simply re-expressing the well-known fact that if the regularization version of weight decay is used, then (simply because it's based on a single objective function) the stationary points are invariant to the choice of learning rate. This may not be apparent due to the misused terminology: "invariant" is referred to as "stable", but "stable stationary point" has a technical meaning very different from the one used here. Corollary 2 essentially shows that the optimum of the regularized loss is different from the optimum of the unregularized loss. The authors conclude from this that the optimal value of lambda is 0 from the perspective of test error, which is unwarranted. Overall, the paper centers around the interaction between learning rates and the weight decay parameter. However, as various reviewers point out, this interaction has been analyzed in detail for networks with normalization layers, and normalization completely changes the nature of the interaction. So any analysis would either need to take this into account or limit the scope to networks without normalization. I encourage the authors to take the reviewers' feedback into account and improve the paper for the next submission cycle.
train
[ "kbSYyZ2jYsT", "ANheMpQCYGy", "rib3qTY-2mh", "r1nPNCTHCYE", "WLffkuNtf3b", "YYFWqjoZsYZ", "jAc2G1wLFWA", "pfCGGWZuu08", "EbqriKBGb0j", "RaAPRB7vynP", "X6JHyt6FWcm", "22TZNM26DC", "Em0jFIOrQ4v", "PVVgqfLpkW", "ciw-w3CoBLd", "rA7fDhYd1ax", "bIpl9tkN38y" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We highly appreciate Reviewer saHk’s updated review and constructive comments. \n\nWe would like to present three direct responses as follows. \n\nWe believe that your main concerns have been addressed. \n\nWe really hope Reviewer saHk may reconsider the responses and evaluate the contributions of our work. \n\n\...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 2 ]
[ "ANheMpQCYGy", "RaAPRB7vynP", "WLffkuNtf3b", "bIpl9tkN38y", "r1nPNCTHCYE", "iclr_2022_J7V_4aauV6B", "PVVgqfLpkW", "rA7fDhYd1ax", "PVVgqfLpkW", "ciw-w3CoBLd", "ciw-w3CoBLd", "PVVgqfLpkW", "PVVgqfLpkW", "iclr_2022_J7V_4aauV6B", "iclr_2022_J7V_4aauV6B", "iclr_2022_J7V_4aauV6B", "iclr_20...
iclr_2022_zLb9oSWy933
Fast Finite Width Neural Tangent Kernel
The Neural Tangent Kernel (NTK), defined as the outer product of the neural network (NN) Jacobians, $\Theta_\theta(x_1, x_2) = \left[\partial f(\theta, x_1)\big/\partial \theta\right] \left[\partial f(\theta, x_2)\big/\partial \theta\right]^T$, has emerged as a central object of study in deep learning. In the infinite width limit, the NTK can sometimes be computed analytically and is useful for understanding training and generalization of NN architectures. At finite widths, the NTK is also used to better initialize NNs, compare the conditioning across models, perform architecture search, and do meta-learning. Unfortunately, the finite-width NTK is notoriously expensive to compute, which severely limits its practical utility. We perform the first in-depth analysis of the compute and memory requirements for NTK computation in finite width networks. Leveraging the structure of neural networks, we further propose two novel algorithms that change the exponent of the compute and memory requirements of the finite width NTK, dramatically improving efficiency. We open-source (https://github.com/iclr2022anon/fast_finite_width_ntk) our two algorithms as general-purpose JAX function transformations that apply to any differentiable computation (convolutions, attention, recurrence, etc.) and introduce no new hyper-parameters.
Reject
This is an interesting and carefully-presented work which discusses how to implement finite-width NTKs more efficiently. Overall, the reviews were slightly tending positive, though with a variety of concerns, including some concern that the contribution is not sufficiently substantial. In my own perusal of the paper, personally I feel it could be made more compelling if (a) more speedups could be considered, including ones with various tradeoffs, for instance via randomized linear algebra, (b) explicit consequences on various prediction tasks, rather than plotting wall-clock times (e.g., as this paper cites many works which tried to use finite-width NTK, and as this paper claims massive speedups, then it will be able to repeat some of those experiments at much larger sizes, which should lead to interesting and valuable larger-scale experiments which ideally have some new phenomena, but are even interesting if they simply confirm the smaller-scale phenomena). As a separate concern, I second the comments of one reviewer, that part of this paper's contribution is to a single software package, which is moreover listed in the abstract (and not just part of the standard code release, e.g., as a footnote); this feels a little strange, like an announcement of a code release, and further limits the impact to general machine learning researchers (for instance, I feel completing some of my preceding suggestions could result in, say, researchers who use other software feeling eager to re-implement this). Overall, I urge the authors to continue with their interesting work and aim to resolve these concerns and those of the reviewers.
test
[ "hZ1-kyNFf8", "9UNvs4yWcSW", "qosu2HA68yf", "wy-QR9zwtAY", "BYbuvXI-lDD", "19zBpFNiW2v", "jabLl-UZJdi", "GRZcXGD71Ru", "m1LSWUYH_39", "695QLOwP1e5", "_8flR2VElID", "o0hF10nXS40", "NMVol4UgVe", "vF95WqSwalyj", "Ce_lD-gb7Yy", "2u-1W64IEXs", "gYw5I-WkIiq", "Fd6hMDyxcW", "OTjrrjooZVf...
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", ...
[ " I would like to thank the authors for their detailed response. The clarification on complexity analysis and comparison between Jax and Tensorflow and PyTorch regarding the computation of NTK has addressed my concerns. Overall, I think this work is marginally above the acceptance threshold.", " Dear reviewers, w...
[ -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 3 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 5 ]
[ "pmzNMUxuEDp", "vF95WqSwalyj", "wy-QR9zwtAY", "Ce_lD-gb7Yy", "iclr_2022_zLb9oSWy933", "jabLl-UZJdi", "GRZcXGD71Ru", "m1LSWUYH_39", "o0hF10nXS40", "WPVoHUX2aO8", "BYbuvXI-lDD", "YkT7LIPoGnz", "_uymrH5KAs6", "iclr_2022_zLb9oSWy933", "BYbuvXI-lDD", "BYbuvXI-lDD", "BYbuvXI-lDD", "BYbuv...
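The NTK definition quoted in the abstract above, $\Theta_\theta(x_1, x_2) = \left[\partial f(\theta, x_1)/\partial \theta\right] \left[\partial f(\theta, x_2)/\partial \theta\right]^T$, can be sketched in a few lines. This is a hypothetical pure-Python toy (a scalar affine model with a finite-difference Jacobian), not the paper's JAX implementation:

```python
# Empirical (finite-width) NTK: Theta(x1, x2) = J(x1) . J(x2),
# where J(x) = d f(theta, x) / d theta.
def f(theta, x):
    # hypothetical scalar model: f(theta, x) = theta0 * x + theta1
    return theta[0] * x + theta[1]

def jacobian(theta, x, eps=1e-6):
    """Finite-difference Jacobian of f with respect to theta."""
    base = f(theta, x)
    grads = []
    for i in range(len(theta)):
        bumped = list(theta)
        bumped[i] += eps
        grads.append((f(bumped, x) - base) / eps)
    return grads

def ntk(theta, x1, x2):
    j1, j2 = jacobian(theta, x1), jacobian(theta, x2)
    return sum(a * b for a, b in zip(j1, j2))

theta = [0.5, -1.0]
print(ntk(theta, 2.0, 3.0))  # ~7.0, since J(x) = [x, 1] gives x1*x2 + 1
```

At realistic widths the Jacobians come from automatic differentiation, which is exactly the step the paper's algorithms make cheaper.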
iclr_2022_F1Z3QH-VjZE
A Fair Generative Model Using Total Variation Distance
We explore a fairness-related challenge that arises in generative models. The challenge is that biased training data with imbalanced representations of demographic groups may yield a high asymmetry in the size of generated samples across distinct groups. We focus on practically-relevant scenarios wherein demographic labels are not available and therefore the design of a fair generative model is particularly challenging. In this paper, we propose an optimization framework that regulates such unfairness by employing one prominent statistical notion, total variation distance (TVD). We quantify the degree of unfairness via the TVD between the generated samples and balanced-yet-small reference samples. We take a variational optimization approach to faithfully implement the TVD-based measure. Experiments on benchmark real datasets demonstrate that the proposed framework can significantly improve the fairness performance while maintaining realistic sample quality for a wide range of reference set sizes, all the way down to 1% relative to the training set.
Reject
This paper is concerned with fairness in the generative setting, specifically the setting in which various groups have very different sizes, and are therefore treated disproportionately by the model, with the group memberships further being unknown. The reviewers generally agreed that the setting was interesting and important. However, they were critical of the writing quality, significance, and quality of the theoretical contribution. The authors made significant improvements in the review period, and while these were not quite enough to satisfy enough reviewers, opinions clearly changed in a positive direction during the discussion period. Future changes motivated by the existing reviewer concerns should significantly improve the paper.
train
[ "VTXsZDSAtsA", "xUzf8AZBmDK", "6FPyKqfDt0-", "ob_WwqGIeWk", "C_rqquUCWJW", "xghN7c7mbJU", "QWCOaqsl3B8", "TFTlALIiBro", "Ssz9-rmYLTp", "uDpUcwKDDZ", "a2ezHo79Q-h" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a new generative model with “fairness” constraints by adding a total-variation regularization. The authors conduct several experiments and compare them with benchmark methods. Overall, the paper is well written and easy to follow. The problem of studying fairness in generative models is signif...
[ 3, -1, -1, 6, -1, -1, -1, -1, -1, 5, 3 ]
[ 3, -1, -1, 3, -1, -1, -1, -1, -1, 4, 5 ]
[ "iclr_2022_F1Z3QH-VjZE", "6FPyKqfDt0-", "C_rqquUCWJW", "iclr_2022_F1Z3QH-VjZE", "TFTlALIiBro", "a2ezHo79Q-h", "uDpUcwKDDZ", "ob_WwqGIeWk", "VTXsZDSAtsA", "iclr_2022_F1Z3QH-VjZE", "iclr_2022_F1Z3QH-VjZE" ]
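For discrete group proportions, the TVD-based unfairness measure in the abstract above reduces to half the L1 distance between distributions. A minimal sketch (the numbers are made up for illustration; this is not the paper's variational implementation):

```python
def total_variation(p, q):
    """TV distance between discrete distributions: (1/2) * sum_i |p_i - q_i|."""
    assert abs(sum(p) - 1.0) < 1e-6 and abs(sum(q) - 1.0) < 1e-6
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

# Group proportions among generated samples vs. a balanced reference set:
generated = [0.9, 0.1]   # heavily skewed toward one demographic group
reference = [0.5, 0.5]
print(total_variation(generated, reference))  # ~0.4
```

The paper's framework drives this quantity down without access to the demographic labels themselves, via a variational bound.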
iclr_2022_GgOEm9twFO_
PhaseFool: Phase-oriented Audio Adversarial Examples via Energy Dissipation
Audio adversarial attacks design perturbations onto inputs that lead an automatic speech recognition (ASR) model to predict incorrect outputs. Current audio adversarial attacks optimize perturbations with different constraints (e.g. lp-norm for the waveform or the principle of auditory masking for the magnitude spectrogram) to achieve imperceptibility. Since phase is not relevant for speech recognition, existing audio adversarial attacks neglect the influence of the phase spectrogram. In this work, we propose a novel phase-oriented algorithm named PhaseFool that can efficiently construct imperceptible audio adversarial examples with energy dissipation. Specifically, we leverage the spectrogram consistency of the short-time Fourier transform (STFT) to adversarially transfer phase perturbations to the adjacent frames of the magnitude spectrogram and dissipate the energy that is crucial for ASR systems. Moreover, we propose a weighted loss function to improve the imperceptibility of PhaseFool. Experimental results demonstrate that PhaseFool can inherently generate full-sentence imperceptible audio adversarial examples with a 100% targeted success rate within 500 steps on average (a 9.24x speed-up over current state-of-the-art imperceptible counterparts), which is verified through a human study. Most importantly, PhaseFool is the first attack to exploit phase-oriented energy dissipation rather than adding perturbations to the audio waveform as most previous works do.
Reject
This paper proposed a novel phase-oriented algorithm to efficiently construct imperceptible audio adversarial attacks. It leverages the spectrogram consistency of the STFT to adversarially transfer phase perturbations to adjacent frames and dissipate the energy that is crucial for ASR systems. Empirical evaluations show that the attack effectiveness of the proposed attack is high. As the reviewers agreed, the method is very interesting, but the experimental justification is limited, lacking strong SOTA baseline ASR systems, different ASR model architectures, an adversarial transferability analysis, etc. The authors did add DeepSpeech 2 results to the initial version's Low-rank-Transformer-only results, which is still not convincing enough. The authors also commented that they will "add more comprehensive experiments with different systems and …architectures in our next version of the paper", which is not in the current paper yet. Hence, resubmission with more experimental evaluations is recommended. The decision is mainly due to the weak experimental justification.
train
[ "Z7Z2TlYzQZB", "9byY4tq0c6f", "Ghz6TdZVjF5", "E380o1CJgLm", "-bFA85szkS1", "yW-LAIzEPKO", "uC-wnz0D-8", "1ZmkLNZToEd", "QYQJtvtaATI", "I1-9kfnKwzJ", "gBe3oUj4a8q", "XovtM82heXN", "_Ga7idD4Bmx", "3hzl_eyAFXV", "CS_pg4MLu8U", "sAbpdVRzf8V", "z64lSXOITQA", "-ofMiWtnbGl", "yYY4YdzMZt...
[ "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", ...
[ " \n\n### [About the similar results of various defenses]\n\nYes, the noise defense is naive and attack-agnostic. However, the boundary of noise is -0.02 to +0.02 and the values of the adversarial perturbations are usually ranged between -0.005 to +0.005. Thus, the adversarial perturbations will be greatly influenc...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "9byY4tq0c6f", "3hzl_eyAFXV", "I1-9kfnKwzJ", "-bFA85szkS1", "Ghz6TdZVjF5", "_Ga7idD4Bmx", "1_1RxUlTcp", "1_1RxUlTcp", "iclr_2022_GgOEm9twFO_", "rwUKAcuLsA", "iclr_2022_GgOEm9twFO_", "sAbpdVRzf8V", "iclr_2022_GgOEm9twFO_", "z64lSXOITQA", "iclr_2022_GgOEm9twFO_", "W2KF4a7k42b", "-ofMiW...
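The "spectrogram consistency" mechanism in the abstract above can be demonstrated with a toy STFT: a phase-only change to one frame leaves that frame's magnitudes intact, but after overlap-add resynthesis and re-analysis, the *adjacent* frames' magnitudes change. The hand-rolled DFT/STFT below (frame length 4, hop 2, rectangular window, made-up signal) is an assumption-laden illustration, not the attack itself:

```python
import cmath

N, HOP = 4, 2  # toy frame length and hop size

def dft(frame):
    n = len(frame)
    return [sum(frame[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(spec):
    n = len(spec)
    return [sum(spec[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def stft(x):
    return [dft(x[i:i + N]) for i in range(0, len(x) - N + 1, HOP)]

def istft(frames, length):
    # Rectangular-window overlap-add; divide by how many frames cover a sample.
    y, cover = [0.0] * length, [0] * length
    for f, spec in enumerate(frames):
        chunk = idft(spec)
        for t in range(N):
            y[f * HOP + t] += chunk[t]
            cover[f * HOP + t] += 1
    return [v / c for v, c in zip(y, cover)]

x = [0.3, -1.2, 0.7, 2.1, -0.5, 0.9, -1.7, 0.4]
S = stft(x)

# Phase-only perturbation of the middle frame: multiplying bin k by
# exp(-2*pi*i*k*m/N) is a circular shift, so |S[1][k]| is untouched.
m = 1
S[1] = [S[1][k] * cmath.exp(-2j * cmath.pi * k * m / N) for k in range(N)]
preserved = max(abs(abs(S[1][k]) - abs(dft(x[2:2 + N])[k])) for k in range(N))

y = istft(S, len(x))   # resynthesize the now-inconsistent spectrogram
S2 = stft(y)           # re-analyze

# The *adjacent* frame's magnitudes have changed -- the "transfer" effect:
delta = max(abs(abs(S2[0][k]) - abs(S[0][k])) for k in range(N))
print(preserved < 1e-9, delta > 1e-6)  # True True
```

Because the overlapping regions couple frames, a perturbation that only touches phase still redistributes energy across neighboring magnitude frames after resynthesis, which is the lever PhaseFool exploits.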
iclr_2022_06fUz_bJStS
Differentially Private SGD with Sparse Gradients
A large number of recent studies reveal that networks and their optimization updates contain information about potentially private training data. To protect sensitive training data, differential privacy has been adopted in deep learning to provide rigorously defined and measurable privacy. However, differentially private stochastic gradient descent (DP-SGD) requires the injection of an amount of noise that scales with the number of gradient dimensions, while neural networks typically contain millions of parameters. As a result, networks trained with DP-SGD typically have large performance drops compared to non-private training. Recent works propose to first project gradients into a lower-dimensional subspace, which is found by application of the power method, and then inject noise in this subspace. Although better performance has been achieved, the use of the power method leads to a significantly increased memory footprint from storing sample gradients, and more computational cost from projection. In this work, we mitigate these disadvantages through a sparse gradient representation. Specifically, we randomly freeze a progressively increasing subset of parameters, which results in sparse gradient updates while maintaining or increasing accuracy over differentially private baselines. Our experiments show that we can reduce up to 40\% of the gradient dimensions while achieving the same performance within the same number of training epochs. Additionally, sparsity of the gradient updates is beneficial for decreasing communication overhead when deployed in collaborative training, e.g. federated learning. When we apply our approach across various DP-SGD frameworks, we maintain accuracy while achieving up to 70\% representation sparsity, which shows that our approach is a safe and effective add-on to a variety of methods. We further notice that our approach improves accuracy in particular for large networks. Importantly, the additional computational cost of our approach is negligible, and it reduces computation during training by lowering the cost of power method iterations.
Reject
This paper provides a new differentially private training method. The key idea is sparse gradient updates---that is, their variant of differentially private SGD (DP-SGD) only updates on a random subset of the parameters in each iteration. The authors argued that their method has a benefit in terms of memory and communication efficiency. The reviews suggested that the paper may require further evidence to motivate and justify the novelty of the proposed method. First, the reviewers are not fully convinced that the proposed method reduced both memory and communication. In particular, would the technique of random freeze require running DP-SGD for more iterations? Even though the authors added a new theoretical result (mostly adapted from Chen et al.), the newly added Theorem 2 does not explain the benefits of the freezing technique. Thus, the paper can benefit from more extensive theoretical analyses or justification. The authors should also consider including the additional related work brought up by the reviewers. In summary, the paper is not ready for publication at ICLR.
train
[ "I0xf_QaziB3", "eKA243UQ5A", "g2FzMgtbu9U", "RFT28-gNif", "IU0WcYzAIFD", "-tMjaZWSF_", "vCk1h40WXdN", "ALrro5ZN423", "NBNu3M6O7k", "bTaYbB22SSE", "N1j6gBW7EBA", "TopaogEetjm", "gxJbjYAXC7", "Ae59Bn-U0oQ", "8e8MUkMPFK8" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your reply. We appreciate that the paper can benefit from a clearer statement significance of Theorem 2. It actually clearly reveals why the random freeze strategy can maintain or even further improve the performance of DP-SGD. First, for vanilla SGD without injected noise, random freeze will impede...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 3 ]
[ "eKA243UQ5A", "N1j6gBW7EBA", "-tMjaZWSF_", "NBNu3M6O7k", "gxJbjYAXC7", "TopaogEetjm", "bTaYbB22SSE", "IU0WcYzAIFD", "iclr_2022_06fUz_bJStS", "8e8MUkMPFK8", "Ae59Bn-U0oQ", "iclr_2022_06fUz_bJStS", "iclr_2022_06fUz_bJStS", "iclr_2022_06fUz_bJStS", "iclr_2022_06fUz_bJStS" ]
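The random-freeze idea in the abstract above can be sketched as a single DP-SGD step in which a random subset of coordinates is frozen, so clipping and Gaussian noise only touch the active coordinates. All names and constants here are hypothetical; this is not the authors' implementation:

```python
import random

def dp_sgd_step(params, grads, freeze_rate, clip_norm, sigma, lr, rng):
    """One DP-SGD update with randomly frozen coordinates. Frozen
    coordinates receive neither a gradient update nor injected noise,
    so the effective noise dimension shrinks with the freeze rate."""
    d = len(params)
    active = [i for i in range(d) if rng.random() >= freeze_rate]
    # Clip the active part of the gradient to bound per-sample sensitivity.
    norm = sum(grads[i] ** 2 for i in active) ** 0.5
    scale = min(1.0, clip_norm / (norm + 1e-12))
    new_params = list(params)
    for i in active:
        noisy_grad = grads[i] * scale + rng.gauss(0.0, sigma * clip_norm)
        new_params[i] = params[i] - lr * noisy_grad
    return new_params, active

rng = random.Random(0)
params = [1.0, -2.0, 0.5, 3.0]
grads = [0.1, -0.4, 0.2, 0.3]
new_params, active = dp_sgd_step(params, grads, 0.5, 1.0, 0.1, 0.1, rng)
print(len(active), "of", len(params), "coordinates updated")
```

In a federated setting, only the active coordinates would need to be communicated, which is the sparsity benefit the abstract points to.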
iclr_2022_1uf_kj0GUF-
Nonparametric Learning of Two-Layer ReLU Residual Units
We describe an algorithm that learns two-layer residual units using rectified linear unit (ReLU) activation: suppose the input $\mathbf{x}$ is from a distribution with support space $\mathbb{R}^d$ and the ground-truth generative model is a residual unit of this type, given by $\mathbf{y} = \boldsymbol{B}^\ast\left[\left(\boldsymbol{A}^\ast\mathbf{x}\right)^+ + \mathbf{x}\right]$, where ground-truth network parameters $\boldsymbol{A}^\ast \in \mathbb{R}^{d\times d}$ represent a nonnegative full-rank matrix and $\boldsymbol{B}^\ast \in \mathbb{R}^{m\times d}$ is full-rank with $m \geq d$ and for $\boldsymbol{c} \in \mathbb{R}^d$, $[\boldsymbol{c}^{+}]_i = \max\{0, c_i\}$. We design layer-wise objectives as functionals whose analytic minimizers express the exact ground-truth network in terms of its parameters and nonlinearities. Following this objective landscape, learning residual units from finite samples can be formulated using convex optimization of a nonparametric function: for each layer, we first formulate the corresponding empirical risk minimization (ERM) as a positive semi-definite quadratic program (QP), then we show the solution space of the QP can be equivalently determined by a set of linear inequalities, which can then be efficiently solved by linear programming (LP). We further prove the statistical strong consistency of our algorithm, and demonstrate its robustness and sample efficiency through experimental results.
Reject
In this paper, the authors propose a non-parametric approach for learning a two-layer neural net. I agree with the authors and reviewers that this is a timely problem. However, the solution in this paper comes short of achieving this goal. In particular, the assumptions are very strong and cannot be generalized (e.g., non-negativity). The authors also need to better spell out the sample complexity.
train
[ "GDWVKYktHO", "9XeWW9n9SEY", "ymOBAeLvsDr", "BdWzaarlYon", "ue41Zoh8RAZ", "aT5PVIa5Evu", "O8vDjT3oDbm", "rFby-ik9l7", "jh-sSn2f_OH", "_bxibaWuRxH", "pDvJuqe63x", "IN298xCpvid" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the follow-up experiments confirming your claim. I could not find enough reasons to increase my score substantially. It was a pleasure to review your article.", " I would like to thank the authors for their response. However, regretfully, my concerns have not been fully addressed.\n\n1) My first poin...
[ -1, -1, -1, 6, -1, -1, -1, -1, -1, 5, 6, 3 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "rFby-ik9l7", "ue41Zoh8RAZ", "iclr_2022_1uf_kj0GUF-", "iclr_2022_1uf_kj0GUF-", "_bxibaWuRxH", "BdWzaarlYon", "IN298xCpvid", "BdWzaarlYon", "pDvJuqe63x", "iclr_2022_1uf_kj0GUF-", "iclr_2022_1uf_kj0GUF-", "iclr_2022_1uf_kj0GUF-" ]
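The ground-truth generative model in the abstract above, $\mathbf{y} = \boldsymbol{B}^\ast\left[\left(\boldsymbol{A}^\ast\mathbf{x}\right)^+ + \mathbf{x}\right]$, is easy to state in code. Below is a toy forward pass with illustrative $2\times 2$ and $3\times 2$ matrices; the layer-wise learning algorithm is the paper's contribution and is not reproduced here:

```python
def relu_residual_unit(A, B, x):
    """Forward pass of y = B [ (A x)^+ + x ], with (c^+)_i = max(0, c_i)."""
    d = len(x)
    Ax = [sum(A[i][j] * x[j] for j in range(d)) for i in range(d)]
    hidden = [max(0.0, Ax[i]) + x[i] for i in range(d)]  # ReLU plus skip
    return [sum(B[i][j] * hidden[j] for j in range(d)) for i in range(len(B))]

A = [[1.0, 0.5],   # nonnegative, full rank (as the abstract assumes)
     [0.0, 2.0]]
B = [[1.0, 0.0],   # full rank with m = 3 >= d = 2
     [0.0, 1.0],
     [1.0, 1.0]]
print(relu_residual_unit(A, B, [1.0, -1.0]))  # [1.5, -1.0, 0.5]
```

The paper recovers A and B from samples of (x, y) pairs by solving per-layer convex programs whose minimizers express this exact map.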
iclr_2022_INO8hGXD2M
Adversarial Distributions Against Out-of-Distribution Detectors
Out-of-distribution (OOD) detection is the task of determining whether an input lies outside the training data distribution. As an outlier may deviate from the training distribution in unexpected ways, an ideal OOD detector should be able to detect all types of outliers. However, current evaluation protocols test a detector over OOD datasets that cover only a small fraction of all possible outliers, leading to overly optimistic views of OOD detector performance. In this paper, we propose a novel evaluation framework for OOD detection that tests a detector over a larger, unexplored space of outliers. In our framework, a detector is evaluated with samples from its adversarial distribution, which generates diverse outlier samples that are likely to be misclassified as in-distribution by the detector. Using adversarial distributions, we investigate OOD detectors with reported near-perfect performance on standard benchmarks like CIFAR-10 vs SVHN. Our methods discover a wide range of samples that are obviously outliers but are recognized as in-distribution by the detectors, indicating that current state-of-the-art detectors are not as perfect as they seem on existing benchmarks.
Reject
This paper studies the general problem of out-of-distribution (OOD) detection, where the goal is to detect outliers (i.e., points not in the distribution of training data) in the sample. The paper introduces a methodology for measuring robustness by using adversarial search/distributions. Experimental evaluation indicates that traditional metrics fail to fully capture OOD detection. The reviewers' evaluations of this work were mixed. Overall, there was consensus about the importance of the problem. Moreover, some of the reviewers argued that the submission contains some interesting new ideas. On the other hand, concerns were raised regarding lacking comparison to prior work, potential overselling of the contributions, and several aspects of the experimental evaluation. At the end, there was not sufficient support for acceptance. In its current form, the work appears to be slightly below the acceptance threshold.
train
[ "y6grKoC9IZb", "bntzjigAAzN", "-1PyzEXCXQ2", "WG9KhLVwzVT", "nIMDpSUg5y3", "cw9iz7jhddL", "Ii-DvoRiVVT", "gbk29jd6sD4", "gembMpJhqBi" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes to evaluate out-of-distribution (OOD) detection methods on adversarial distribution to detect unexplored space of outliers. - The proposed benchmark is reasonable in a different way, so I recommend to redirect the goal and rewrite the paper.\n\n- To my knowledge, OOD detection is not supposed t...
[ 3, 6, -1, -1, -1, -1, -1, 6, 3 ]
[ 3, 4, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2022_INO8hGXD2M", "iclr_2022_INO8hGXD2M", "WG9KhLVwzVT", "gbk29jd6sD4", "bntzjigAAzN", "gembMpJhqBi", "y6grKoC9IZb", "iclr_2022_INO8hGXD2M", "iclr_2022_INO8hGXD2M" ]
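The adversarial-distribution idea in the abstract above can be caricatured with a deliberately flawed detector: hill-climb toward inputs the detector accepts while an obviously-outlying coordinate goes unnoticed. Everything here (the detector, the search) is a hypothetical stand-in for the paper's much stronger machinery:

```python
import random

def score(x, mean):
    """Deliberately flawed toy detector: judges in-distribution-ness
    from the first coordinate only (a hypothetical blind spot)."""
    return -abs(x[0] - mean[0])

def adversarial_search(mean, start, steps=3000, step_size=0.05, seed=0):
    """Random hill-climbing toward inputs the detector accepts; a toy
    stand-in for sampling from the detector's adversarial distribution."""
    rng = random.Random(seed)
    x = list(start)
    for _ in range(steps):
        cand = [v + rng.gauss(0.0, step_size) for v in x]
        if score(cand, mean) > score(x, mean):  # detector ignores x[1]
            x = cand
    return x

mean = [0.0, 0.0]
outlier = [5.0, 100.0]             # obviously far from the training data
adv = adversarial_search(mean, outlier)
# The detector now accepts the sample, yet its second coordinate is still
# a blatant outlier -- the blind spot this evaluation framework hunts for.
print(score(adv, mean), adv[1])
```

Standard benchmarks would never surface this failure because their OOD datasets happen not to probe the ignored coordinate; adversarial search does.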
iclr_2022_lyzRAErG6Kv
Self-Supervised Structured Representations for Deep Reinforcement Learning
Recent reinforcement learning (RL) methods have found extracting high-level features from raw pixels with self-supervised learning to be effective in learning policies. However, these methods focus on learning global representations of images, and disregard local spatial structures present in the consecutively stacked frames. In this paper, we propose a novel approach that learns self-supervised structured representations ($\mathbf{S}^3$R) for effectively encoding such spatial structures in an unsupervised manner. Given the input frames, the structured latent volumes are first generated individually using an encoder, and they are used to capture the change in terms of spatial structures, i.e., flow maps among multiple frames. To be specific, the proposed method establishes flow vectors between two latent volumes via supervision by the image reconstruction loss. This provides plenty of local samples for training the encoder of deep RL. We further attempt to leverage the structured representations in the self-predictive representations (SPR) method that predicts future representations using the action-conditioned transition model. The proposed method imposes similarity constraints on three latent volumes: query representations warped by estimated flows, target representations predicted from the transition model, and target representations of the future state. Experimental results on complex tasks in Atari Games and the DeepMind Control Suite demonstrate that RL methods are significantly boosted by the proposed self-supervised learning of structured representations. The code is available at https://sites.google.com/view/iclr2022-s3r.
Reject
After carefully reading the reviews and the rebuttal, unfortunately I feel this work is not yet ready for acceptance. I want to acknowledge the effort put into the rebuttal, and I think all the changes greatly increased the value of the work. However, I feel that the work could greatly benefit from running on a different domain where the gain is more considerable. My worry is that the complexity of the method compared to the relatively small improvement (at least as perceived from the current results) will reduce considerably the attention the work will receive from the community (unfairly so). Alternatively, some of the analysis and ablations (e.g. the flow visualization) which are now in the appendix could be brought into the main manuscript to drive the message home, together with an understanding of the impact of the flow model's accuracy on overall performance (producing an accurate flow model is, as pointed out by reviewer aHc1, a really hard task in a more natural context). Or maybe a 3D visually complex environment is exactly where this method will shine, as flows are more complex and hence more informative there. Overall I think this is solid work, but I feel it does not manage to convince the reader of the significance of the proposed approach. Hence, if published in this form, I feel it will do a disservice to the work, as it will not receive the attention it merits from the community.
train
[ "AF_NXKOJyvI", "_jMOEtoghrj", "uI9rvnWVwTk", "m4uLoUQmuJ8", "0DWAqu4HzMK", "esjavlndGAz", "KCQvBFRyykR", "itxGS2jEjN7", "p3FhjPhXsX9", "gRh_HoCyRYD", "xaFUnqc_gO", "qAFlEFxEtYT", "6NFZW-tJwug", "Ck8J4CS11fF", "3ctCfyTn1t", "o-ph0QuR5U2", "Io3XF5ktvYd", "pXpSxTXg9l3" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response.\n\nAs I said in the original review the method itself makes sense, that was never the question, I am highlighting more that producing a good flow model that performs well is not trivial in practice. There are reasons why supervised synthetic flow is still outperforming unsupervised flo...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 5, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 2, 2 ]
[ "itxGS2jEjN7", "iclr_2022_lyzRAErG6Kv", "Io3XF5ktvYd", "Io3XF5ktvYd", "o-ph0QuR5U2", "o-ph0QuR5U2", "3ctCfyTn1t", "3ctCfyTn1t", "Ck8J4CS11fF", "Io3XF5ktvYd", "pXpSxTXg9l3", "3ctCfyTn1t", "Io3XF5ktvYd", "iclr_2022_lyzRAErG6Kv", "iclr_2022_lyzRAErG6Kv", "iclr_2022_lyzRAErG6Kv", "iclr_2...
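The latent flow warping plus similarity constraint described in the abstract above can be sketched in 1-D with a nearest-neighbor gather (a toy stand-in; the paper warps 3-D latent volumes with learned, sub-pixel flows):

```python
def warp(volume, flow):
    """Warp a 1-D latent 'volume' by integer flow vectors with a
    nearest-neighbor gather (toy stand-in for latent flow warping)."""
    n = len(volume)
    return [volume[min(max(i + flow[i], 0), n - 1)] for i in range(n)]

query  = [0.1, 0.5, 0.9, 0.3]
target = [0.5, 0.9, 0.3, 0.3]   # the query shifted by one step
flow   = [1, 1, 1, 0]           # a (made-up) estimated flow field
warped = warp(query, flow)
# The similarity constraint penalizes disagreement between the warped
# query representation and the target representation:
loss = sum((w - t) ** 2 for w, t in zip(warped, target))
print(warped, loss)
```

In the paper, this warping loss is combined with a second similarity term against the transition model's prediction, so the encoder is trained from many such local correspondences rather than one global image embedding.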
iclr_2022_qNcedShvOs4
EinSteinVI: General and Integrated Stein Variational Inference
Stein variational inference is a technique for approximate Bayesian inference that has recently gained popularity because it combines the scalability of variational inference (VI) with the flexibility of non-parametric inference methods. While there has been considerable progress in developing algorithms for Stein variational inference, integration in existing probabilistic programming languages (PPLs) with an easy-to-use interface is currently lacking. EinSteinVI is a lightweight composable library that integrates the latest Stein variational inference methods with the PPL NumPyro (Phan et al., 2019). EinSteinVI provides ELBO-within-Stein to support the use of custom inference programs (guides), implementations of a wide range of kernels, non-linear scaling of the repulsion force (Wang & Liu, 2019b), and second-order gradient updates using matrix-valued kernels (Wang et al., 2019b). We illustrate EinSteinVI using toy examples and show results on par with or better than existing state-of-the-art methods for real-world problems. These include Bayesian neural networks for regression and a Stein-mixture deep Markov model, which shows EinSteinVI scales to large models with more than 500,000 parameters.
Reject
The paper aims to integrate Stein variational inference methods into the existing probabilistic programming language NumPyro. The implemented methods include variations of Stein variational gradient descent with different types of kernel functions, non-linear scaling of update terms, and matrix-valued kernels. The paper includes empirical results with a comparison against existing baselines on real-world problems. Using this framework, the authors developed a new Stein mixture algorithm for deep Markov models, which shows better performance than existing methods. Strengths: - The paper is overall well-written and the method is clearly explained. - The literature review is thorough. - Integration of Stein VI into NumPyro seems useful. Users can easily take advantage of state-of-the-art Stein VI algorithms for their own Bayesian modeling. - Extending the Stein mixture method to deep Markov models is a novel application. Weaknesses: - The originality is low: the authors propose algorithms that are very similar to previous work, and there is a lack of experiments to verify the usefulness of the proposed method, for example, to verify the decreased variance of the gradient estimates claimed by the authors. - Efforts are required to illustrate why ELBO-within-Stein is preferred over the existing work. - Some important Stein VI methods seem to be lacking. - No experiments support the usefulness of EinSteinVI for non-linear Stein VI, matrix-valued-kernel Stein VI, and message-passing Stein VI. All reviewers vote for rejection. I recommend the authors address the limitations mentioned above and improve the paper before resubmitting it to another venue.
test
[ "8ZDgVE4MUOj", "HRqgaykwVr5", "VItK2iuGRg0", "Qbg5iAQHbX9", "B0nNBQt8Ul", "GHYxBXwYGCu", "JXvZJOrgRAl", "_fri3p1qOkI", "jKZOA3vWsGT", "z7L0B82gYxc", "55H_543ScTY", "YB0lfLAEg4-", "bvInj3TaQfq" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I truly appreciate the response from the authors, which helps make clear why they chose NumPyro and that ELBO-within-Stein is actually one of the contributions in this work. Still, I echo with the other reviewers that more efforts are required to illustrate why ELBO-within-Stein is preferred over the existing wor...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 3, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 3 ]
[ "jKZOA3vWsGT", "VItK2iuGRg0", "Qbg5iAQHbX9", "GHYxBXwYGCu", "iclr_2022_qNcedShvOs4", "bvInj3TaQfq", "YB0lfLAEg4-", "55H_543ScTY", "z7L0B82gYxc", "iclr_2022_qNcedShvOs4", "iclr_2022_qNcedShvOs4", "iclr_2022_qNcedShvOs4", "iclr_2022_qNcedShvOs4" ]
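For context, the Stein variational gradient descent update that such libraries wrap has a compact form: each particle moves along a kernel-weighted average of the particles' log-density gradients plus a repulsive kernel-gradient term. Below is a hand-rolled 1-D sketch with an RBF kernel and a standard-normal target (illustrative only, not EinSteinVI's API):

```python
import math

def svgd_step(xs, grad_logp, h=0.5, eps=0.1):
    """One SVGD update: x_i += eps/n * sum_j [ k(x_j, x_i) grad_logp(x_j)
    + d/dx_j k(x_j, x_i) ], with an RBF kernel of (fixed) bandwidth h."""
    n = len(xs)
    new = []
    for i in range(n):
        phi = 0.0
        for j in range(n):
            k = math.exp(-(xs[j] - xs[i]) ** 2 / (2.0 * h))
            dk = -(xs[j] - xs[i]) / h * k   # repulsive term
            phi += k * grad_logp(xs[j]) + dk
        new.append(xs[i] + eps * phi / n)
    return new

# Target: standard normal, so grad log p(x) = -x.
particles = [-3.0, -1.0, 2.0, 4.0]
for _ in range(200):
    particles = svgd_step(particles, lambda x: -x)
print(particles)  # clustered around 0, but kept apart by the repulsion
```

The repulsive term is what distinguishes SVGD from running independent gradient ascent on log p: it stops the particles from collapsing onto the mode, so the ensemble approximates the full posterior.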
iclr_2022_RQ3xUXjZWMO
Implicit Jacobian regularization weighted with impurity of probability output
Gradient descent (GD) plays a crucial role in the success of deep learning, but it is still not fully understood how GD finds minima that generalize well. In many studies, GD has been understood as a gradient flow in the limit of vanishing learning rate. However, this approach has a fundamental limitation in explaining the oscillatory behavior with iterative catapult in a practical finite learning rate regime. To address this limitation, we rather start with strong empirical evidence of the plateau of the sharpness (the top eigenvalue of the Hessian) of the loss function landscape. With this observation, we investigate the Hessian through simple and much lower-dimensional matrices. In particular, to analyze the sharpness, we instead explore the eigenvalue problem for the low-dimensional matrix which is a rank-one modification of a diagonal matrix. The eigendecomposition provides a simple relation between the eigenvalues of the low-dimensional matrix and the impurity of the probability output. We exploit this connection to derive sharpness-impurity-Jacobian relation and to explain how the sharpness influences the learning dynamics and the generalization performance. In particular, we show that GD has implicit regularization effects on the Jacobian norm weighted with the impurity of the probability output.
Reject
The paper studies an interesting question: the relationship between the eigenvalues of the Hessian matrix and the probability output under the logistic loss, and uses this to propose a regularization that can improve the performance of neural networks. All the reviewers agree that although the question is interesting, the paper lacks significantly in terms of presentation and would benefit from another round of revision.
train
[ "z9zPBpKXE6i", "SzgUbStons", "UCd-DURrFjN", "VxbeQoAOC-k", "pu0RvEuofuh", "j6jSD9R4WG", "U7MAq3jvP7u", "rHez94Vl_K", "IKdPfRxJhV", "sHniEpDFmbJ", "w7I8DYUUfrM", "du24nZk68Kv", "kSE3Kx875M", "2zwxPWbHg6n", "HPRngdTjgxP", "1zdUFWFEAy", "CwWKuJ6a43", "Ri8xeH_gVu4", "BEkp9ZYg7wS" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " C1. The technical part was not convincing to me. It seems the theoretical results and empirical results that should be distinguished are mixedly used to conclude a vague statement. For example, A1-3-1 noted \"||H|| and λ||J1|| behave similarly up to a constant factor\", but what does it mean exactly? Can you prov...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 5 ]
[ "SzgUbStons", "j6jSD9R4WG", "CwWKuJ6a43", "2zwxPWbHg6n", "UCd-DURrFjN", "U7MAq3jvP7u", "Ri8xeH_gVu4", "du24nZk68Kv", "Ri8xeH_gVu4", "CwWKuJ6a43", "kSE3Kx875M", "BEkp9ZYg7wS", "1zdUFWFEAy", "iclr_2022_RQ3xUXjZWMO", "1zdUFWFEAy", "iclr_2022_RQ3xUXjZWMO", "iclr_2022_RQ3xUXjZWMO", "icl...
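The sharpness the paper tracks, the top eigenvalue of the loss Hessian, is typically estimated by power iteration, and the paper's low-dimensional surrogate is a rank-one modification of a diagonal matrix, $D + uu^T$. A toy sketch with illustrative values (not the paper's construction):

```python
def top_eigenvalue(H, iters=200):
    """Power iteration for the sharpness (top eigenvalue of a symmetric
    matrix), the quantity whose plateau the paper reports."""
    d = len(H)
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(H[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    # Rayleigh quotient with the converged unit vector:
    return sum(v[i] * sum(H[i][j] * v[j] for j in range(d)) for i in range(d))

# A rank-one modification of a diagonal matrix, D + u u^T, mirroring the
# paper's low-dimensional surrogate (values here are illustrative only):
u = [1.0, 0.5]
D = [[2.0, 0.0], [0.0, 1.0]]
H = [[D[i][j] + u[i] * u[j] for j in range(2)] for i in range(2)]
print(top_eigenvalue(H))  # ~3.133
```

For real networks the matrix-vector products come from Hessian-vector products via automatic differentiation rather than an explicit Hessian, but the iteration is the same.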
iclr_2022_ab7lBP7Fb60
Enforcing fairness in private federated learning via the modified method of differential multipliers
Federated learning with differential privacy, or private federated learning, provides a strategy to train machine learning models while respecting users' privacy. However, differential privacy can disproportionately degrade the performance of the models on under-represented groups, as these parts of the distribution are difficult to learn in the presence of noise. Existing approaches for enforcing fairness in machine learning models have considered the centralized setting, in which the algorithm has access to the users' data. This paper introduces an algorithm to enforce group fairness in private federated learning, where users' data does not leave their devices. First, the paper extends the modified method of differential multipliers to empirical risk minimization with fairness constraints, thus providing an algorithm to enforce fairness in the central setting. Then, this algorithm is extended to the private federated learning setting. The proposed algorithm, FPFL, is tested on a federated version of the Adult dataset and an "unfair" version of the FEMNIST dataset. The experiments on these datasets show how private federated learning accentuates unfairness in the trained models, and how FPFL is able to mitigate such unfairness.
Reject
This paper continues the investigation of fairness and privacy in the context of federated learning. We appreciate the detailed response from the authors. During the rebuttal period, the authors substantially updated the set of experiments, since there was an identified bug in the previous implementation. Another drawback that the AC identified is the lack of a formal problem formulation and formal guarantees in the paper. In particular, is the proposed algorithm trying to satisfy example-level or client-level data privacy? The resulting noise scale can be quite different. Unlike prior work (e.g. Jagielski et al.), the proposed algorithm does not seem to provide any fairness guarantee. Thus, it is not clear why the proposed approach is justified (even under some assumptions). In a similar vein, perhaps the authors could consider a more in-depth discussion that compares their approach with prior work and articulates what advantages their new method offers. Overall, the paper is not ready for publication at ICLR.
train
[ "KzAUcfbfsDU", "i4Qz3opC8P", "tjPVozuQphZ", "jiY2oBH61kv", "jt40sQzLC9x", "bkuof5lURd", "d0zEHh9y-qk", "A3MNAwUpiR4", "bG1Vh6ndnkN", "f61iW3URBT", "2LHGG7yNCub", "dH3IJr8u6Jy", "wyvNV-rgdP", "VeNdB4mhBdY", "gBIvsCJ8T6", "dZkVIk1HkQ" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your answer. We will follow your recommendation and add a short discussion on this matter.", " Thanks for clarifying this. I think this basically answers my question. Sounds like the hyperparameters are not that sensitive. The fact that it is enough to try $c$-values for few orders of magnitudes s...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 4 ]
[ "i4Qz3opC8P", "tjPVozuQphZ", "jiY2oBH61kv", "d0zEHh9y-qk", "dZkVIk1HkQ", "dZkVIk1HkQ", "gBIvsCJ8T6", "VeNdB4mhBdY", "VeNdB4mhBdY", "wyvNV-rgdP", "wyvNV-rgdP", "iclr_2022_ab7lBP7Fb60", "iclr_2022_ab7lBP7Fb60", "iclr_2022_ab7lBP7Fb60", "iclr_2022_ab7lBP7Fb60", "iclr_2022_ab7lBP7Fb60" ]
iclr_2022_QdcbUq0-tYM
Universal Controllers with Differentiable Physics for Online System Identification
Creating robots that can handle changing or unknown environments is a critical step towards real-world robot applications. Existing methods tackle this problem by training controllers robust to large ranges of environment parameters (Domain Randomization), or by combining ``Universal'' Controllers (UC) conditioned on environment parameters with learned identification modules that (implicitly or explicitly) identify the environment parameters from sensory inputs (Domain Adaptation). However, these methods can lead to over-conservative behaviors or poor generalization outside the training distribution. In this work, we present a domain adaptation approach that improves generalization of the identification module by leveraging prior knowledge in physics. Our proposed algorithm, UC-DiffOSI, combines a UC trained on a wide range of environments with an Online System Identification module based on a differentiable physics engine (DiffOSI). We evaluate UC-DiffOSI on articulated rigid body control tasks, including a wiping task that requires contact-rich environment interaction. Compared to previous works, UC-DiffOSI outperforms domain randomization baselines and is more robust than domain adaptation methods that rely on learned identification models. In addition, we perform two studies showing that UC-DiffOSI operates well in environments with changing or unknown dynamics. These studies test sudden changes in the robot's mass and inertia, and they evaluate in an environment (PyBullet) whose dynamics differs from training (NimblePhysics).
Reject
The paper addresses the learning of robot controllers for changing or unknown environments. It makes use of differentiable physics for online system identification and of reinforcement learning for offline policy training. A universal controller is trained on a distribution of simulation parameters in order to ensure its robustness. Differentiable physics is used to estimate the simulation parameters from the recent observation history. These parameters are fed to the controller so as to modulate the policy. This approach is evaluated on three benchmarks (2 + 1 added during the rebuttal). The main originality of the paper is the use of differentiable physics for the identification of the parameters in the context of varying environments. The topic is of interest and in line with the recent developments for robotics. However, the novelty is limited, and all evaluators were concerned about the limited experimental contribution. The authors have added a new experiment during the rebuttal but this was not considered sufficient to change the evaluation. Overall this is considered as a promising contribution, but the experimental setting should be largely improved with additional problems and comparisons with SOTA methods from the recent RL literature.
train
[ "ni000W_z7vg", "anD4CXuUKY", "4YxQsBQRRUs", "3gmggOx5cOz", "LnGkAE3c8Wi", "el4dTBUh_7z", "b6EZZTKRFw4", "dPwf6Xqw-j", "IJRG2perXod" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your response. However, adding a single domain does not make a substantial difference. Without addressing my remaining concerns, I will not change my score.", " Thank you for your comments and suggestions on the paper!\n\n> The current paper only includes a very limited set of environments (cartpole ...
[ -1, -1, -1, -1, -1, 5, 6, 5, 5 ]
[ -1, -1, -1, -1, -1, 2, 4, 4, 4 ]
[ "anD4CXuUKY", "dPwf6Xqw-j", "b6EZZTKRFw4", "IJRG2perXod", "el4dTBUh_7z", "iclr_2022_QdcbUq0-tYM", "iclr_2022_QdcbUq0-tYM", "iclr_2022_QdcbUq0-tYM", "iclr_2022_QdcbUq0-tYM" ]
iclr_2022_pIjvdJ_QUYv
Practical and Private Heterogeneous Federated Learning
Heterogeneous federated learning (HFL) enables clients with different computation/communication capabilities to collaboratively train their own customized models, in which the knowledge of models is shared via clients' predictions on a public dataset. However, there are two major limitations: 1) The assumption of public datasets may be unrealistic for data-critical scenarios such as Healthcare and Finance. 2) HFL is vulnerable to various privacy violations since the samples and predictions are completely exposed to adversaries. In this work, we develop PrivHFL, a general and practical framework for privacy-preserving HFL. We bypass the limitations of public datasets by designing a simple yet effective dataset expansion method. The main insight is that expanded data could provide good coverage of natural distributions, which is conducive to the sharing of model knowledge. To further tackle the privacy issue, we exploit the lightweight additive secret sharing technique to construct a series of tailored cryptographic protocols for key building blocks such as secure prediction. Our protocols implement ciphertext operations through simple vectorized computations, which are friendly with GPUs and can be processed by highly-optimized CUDA kernels. Extensive evaluations demonstrate that PrivHFL outperforms prior art up to two orders of magnitude in efficiency and realizes significant accuracy gains on top of the stand-alone method.
Reject
A heterogeneous federated learning framework is proposed which does not require auiliary public data sets, and does not reveal the private data to the server or answering parties if they operate as honest-but-curious entities. It builds a new protocol for private inference, which can run on GPUs, and proposes a dataset expansion method to not need an auxiliary data set. The paper presents extensive empirical experiments on the method. The paper was extensively discussed with the authors. The concerns included both technical issues and more general issues on missing DP guarantees and realisticness of the threat model. Many of the issues were resolved by the clarifications provided by the authors, and as a result two reviewers increased their scores. However, all reviewers still place the paper to the borderline. While the paper contains solid work, and improves efficiency compared to previous models, this is a borderline paper where the final judgement needs to be based on importance of the presented new contributions in advancing the field. The paper may not yet quite reach the bar, but I believe the reviewer comments have enabled the authors to improve the paper for further work.
train
[ "Gf2C-ixq2jR", "jv_bWbFSPus", "kAU5-JVaL9", "Zs5JTim4KTG", "wHq45hwZk9H", "zC_uACya6cW", "fA9Z_ZKeBU", "y6uTvYY0z0", "7Z1e5rSz-DJ", "3TtIUzTHrcE", "QVyGtsI5VGV", "QLy2CmT2KJg", "CT7afE7uTN", "AdmrAFCXYZX", "2Gy7u3AbUFn", "lObylghSMpy", "68GEdKwZ8Tw", "uQNRE9n5m5_", "TQu4ipffj1", ...
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", ...
[ " Dear Reviewer,\n\nThanks for the feedback.\n\nSince this paper mainly focuses on the privacy protection with cryptography, for the generalization of the mixup-based data expansion method, we only experimentally verified this argument without theoretical analysis. We agree that theoretical investigation is an inte...
[ -1, -1, -1, -1, 5, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ -1, -1, -1, -1, 4, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "Zs5JTim4KTG", "wHq45hwZk9H", "fA9Z_ZKeBU", "y6uTvYY0z0", "iclr_2022_pIjvdJ_QUYv", "iclr_2022_pIjvdJ_QUYv", "7Z1e5rSz-DJ", "TQu4ipffj1", "cj3PTiK-7V1", "QVyGtsI5VGV", "QLy2CmT2KJg", "AdmrAFCXYZX", "2Gy7u3AbUFn", "lObylghSMpy", "uQNRE9n5m5_", "68GEdKwZ8Tw", "uQNRE9n5m5_", "wHq45hwZk...
iclr_2022_6gLEKETxUWp
Interpreting Molecule Generative Models for Interactive Molecule Discovery
Discovering novel molecules with desired properties is crucial for advancing drug discovery and chemical science. Recently deep generative models can synthesize new molecules by sampling random vectors from latent space and then decoding them to a molecule structure. However, through the feedforward generation pipeline, it is difficult to reveal the underlying connections between latent space and molecular properties as well as customize the output molecule with desired properties. In this work, we develop a simple yet effective method to interpret the latent space of the learned generative models with various molecular properties for more interactive molecule generation and discovery. This method, called Molecular Space Explorer (MolSpacE), is model-agnostic and can work with any pre-trained molecule generative models in an off-the-shelf manner. It first identifies latent directions that govern certain molecular properties via the property separation hyperplane and then moves molecules along the directions for smooth change of molecular structures and properties. This method achieves interactive molecule discovery through identifying interpretable and steerable concepts that emerge in the representations of generative models. Experiments show that MolSpacE can manipulate the output molecule toward desired properties with high success. We further quantify and compare the interpretability of multiple state-of-the-art molecule generative models. An interface and a demo video are developed to illustrate the promising application of interactive molecule discovery.
Reject
The reviewers find the work to address an interesting and important problem but have several critical concerns about its insufficient treatment of prior work in this area, lack of novelty in relation to the body of existing literature.
train
[ "na9lVna6IKs", "VJZyg09SKD", "tdbzqxKELBr", "75Yjwaxt9dF" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This work investigates molecule manipulation towards desirable properties by leveraging existing deep generative models. The paper uses linear models trained on latent dimensions of an existing deep generative model for molecular property prediction. The normal directions of the learned decision boundaries are the...
[ 3, 3, 3, 3 ]
[ 4, 4, 5, 4 ]
[ "iclr_2022_6gLEKETxUWp", "iclr_2022_6gLEKETxUWp", "iclr_2022_6gLEKETxUWp", "iclr_2022_6gLEKETxUWp" ]
iclr_2022_R3zqNwzAVsC
Learning an Ethical Module for Bias Mitigation of pre-trained Models
In spite of the high performance and reliability of deep learning algorithms in a broad range of everyday applications, many investigations tend to show that a lot of models exhibit biases, discriminating against some subgroups of the population. This urges the practitioner to develop fair systems whose performance is uniform among individuals. In this work, we propose a post-processing method designed to mitigate bias of state-of-the-art models. It consists in learning a shallow neural network, called the Ethical Module, which transforms the deep embeddings of a pre-trained model in order to give more representation power to the disadvantaged subgroups. Its training is supervised by the von Mises-Fisher loss, whose hyperparameters allow to control the space allocated to each subgroup in the latent space. Besides being very simple, the resulting methodology is more stable and faster than most current bias mitigation methods. In order to illustrate our idea in a concrete use case, we focus here on gender bias in facial recognition and conduct extensive numerical experiments on standard datasets.
Reject
The paper proposes a novel post-processing technique that can mitigate model bias, called the Ethical Module. It transforms the deep embeddings of a given model to give more representation power to the disadvantaged subgroups. The idea of resolving discrimination against a specific group through effective post-processing is promising, and proposing new metrics for fairness is also a very important and relevant issue. However, the connection between the technique proposed in this paper and the newly proposed fairness metric is not clear, which somewhat dilutes the focus of the paper. Moreover, several design choices are somewhat unclear and ad-hoc. In particular, although there was a lot of improvement through the rebuttal period, it is difficult to verify the superiority of the proposed method via the experiments in the paper; direct comparisons with existing methods for fairness are essential, and it seems necessary to consider a hyperparameter selection strategy that can be taken in a practical scenario rather than simply choosing the best performing hyperparameter for the test set.
train
[ "5vcSRy2ksON", "WdLGG0mJrzw", "NuSfYBTsIIM", "Kpl6KfsoFQc" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a post-processing mitigation method, termed as the Ethical Module, to mitigate gender bias of Face Recognition models. The ethical module is a three-layer shallow MLP built on top of a frozen pretrained model, with the goal to correct the biases that could exist in the embedding space of the pr...
[ 5, 5, 5, 5 ]
[ 4, 4, 3, 3 ]
[ "iclr_2022_R3zqNwzAVsC", "iclr_2022_R3zqNwzAVsC", "iclr_2022_R3zqNwzAVsC", "iclr_2022_R3zqNwzAVsC" ]
iclr_2022_VXqNHWh3LL
Shift-tolerant Perceptual Similarity Metric
Existing perceptual similarity metrics assume an image and its reference are well aligned. As a result, these metrics are often sensitive to a small alignment error that is imperceptible to the human eyes. This paper studies the effect of small misalignment, specifically a small shift between the input and reference image, on existing metrics, and accordingly develops a shift-tolerant similarity metric. This paper builds upon LPIPS, a widely used learned perceptual similarity metric and explores architectural design considerations to make it robust against imperceptible misalignment. Specifically, we study a wide spectrum of neural network elements, such as anti-aliasing filtering, pooling, striding, padding, and skip connection, and discuss their roles in making a robust metric. Based on our studies, we develop a new deep neural network-based perceptual similarity metric. Our experiments show that our metric is tolerant to imperceptible shifts while being consistent with the human similarity judgment.
Reject
This paper introduces a perceptual similarity metric on top of the commonly used perceptual loss in the literature (LPIPS). The authors draw experiments highlighting that human perceptual similarity is invariant to small shifts, whereas standard metrics are not. The paper studies several factors (anti-aliasing, pooling, striding, padding, skip connection) in order to propose a measure on top of LPIPS achieving shift-invariance. This paper initially received mixed reviews. RLHuY was positive about the submission, pointing out the relevance of the real human data and the studied factors for measuring the impact on shift invariance. RGQvy was slightly positive, but also raised several concerns on justification of the claimed properties, human perception experiments, and positioning with respect to data augmentation (PIM). RLHuY, an expert in the field, recommended clear rejection, pointing out missing references (including DISTS), the limited scope of the paper (shift invariance and tiny shifts). After rebuttal, RLHuY and RLHuY stuck to their positions; RGQvy was inclined to borderline reject because of unconvincing answers on comparison to data augmentation techniques. The AC's own readings confirmed the concerns raised by RGQvy and RLHuY, and point to the following limitations: - The submission includes limited contribution and expected results: the studied modifications on neural networks' architecture, although meaningful, directly follow ideas borrowed from the literature. They are not supported by stronger theoretical analysis, and several insights related to accuracy or robustness remain unclear. - Experimental results are contrasted, e.g. compared to data augmentation: although these approaches are more demanding at train time, they do not induce any overhead at test time - in contrast to the proposed approach. Therefore, the AC recommends rejection.
train
[ "0eggs3i-9Ew", "ktgjkBQv9lE", "CmmncWGwS3o", "vAV9HhK7Vau", "NmsosuhrzbZ", "-Icx_QWRNS", "a0lbMHAtqmW", "XNqA7Po44Pa", "P91fcnMpJ8u" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the constructive comments. Below please find our responses.\n\n**Do humans all notice the shift in the same images or is there a huge variability?**\n\nRe: As suggested by the reviewer, we analyzed the variability of the user responses. In our analysis, if a user noticed the shift betwee...
[ -1, -1, -1, -1, -1, -1, 3, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, 5, 5, 4 ]
[ "P91fcnMpJ8u", "a0lbMHAtqmW", "XNqA7Po44Pa", "iclr_2022_VXqNHWh3LL", "a0lbMHAtqmW", "XNqA7Po44Pa", "iclr_2022_VXqNHWh3LL", "iclr_2022_VXqNHWh3LL", "iclr_2022_VXqNHWh3LL" ]
iclr_2022_3r034NfDKnL
The Role of Learning Regime, Architecture and Dataset Structure on Systematic Generalization in Simple Neural Networks
Humans often systematically generalize in situations where standard deep neural networks do not. Empirical studies have shown that the learning procedure and network architecture can influence systematicity in deep networks, but the underlying reasons for this influence remain unclear. Here we theoretically study the acquisition of systematic knowledge by simple neural networks. We introduce a minimal space of datasets with systematic and non-systematic features in both the input and output. For shallow and deep linear networks, we derive learning trajectories for all datasets in this space. The solutions reveal that both shallow and deep networks rely on non-systematic inputs to the same extent throughout learning, such that even with early stopping, no networks learn a fully systematic mapping. Turning to the impact of architecture, we show that modularity improves extraction of systematic structure, but only achieves perfect systematicity in the trivial setting where systematic mappings are fully segregated from non-systematic information. Finally, we analyze iterated learning, a procedure in which generations of networks learn from languages generated by earlier learners. Here we find that networks with output modularity successfully converge over generations to a fully systematic `language’ starting from any dataset in our space. Our results contribute to clarifying the role of learning regime, architecture, and dataset structure in promoting systematic generalization, and provide theoretical support for empirical observations that iterated learning can improve systematicity.
Reject
The Authors study the emergence of systematic generalization in neural networks. The paper studies a timely topic and presents a set of concrete results. For example, reviewer ZgRW emphasizes that a key strength of the paper is constructing simple datasets where systematicity emerges. I think indeed it is valuable, as systematicity is sometimes poorly defined and understood, so building a theoretical testbed might be very helpful. However, the reviewers found important issues, which the rebuttal was unable to address. Perhaps the key issue (raised e.g. by reviewer 9QCY) is that results do not clearly generalize to more practically relevant settings. What is somewhat missing is a clear set of guidelines or implications for how to improve systematicity in more practically relevant neural networks. Based on this and other issues raised by the reviewers, unfortunately, I have to recommend rejecting the paper. Thank you for your submission, and I hope that the review process will help you improve the work.
val
[ "2OilDSr761", "9XV5vJ6ZOVI", "lWgdAGjCZif", "1A356j_nt-O", "VXlOoanJnvX", "XSlSTaC4FtH", "CE4o1AViw9V", "azHOl_DQJli", "utWQel_tIe", "b9ro28jPCZe", "X3IXQnTUqwj", "IhpkA7IIZAw", "D1UOkEWnZmn", "z4hndYVfyj4", "QlvxlqyVEBb", "Rv_-blHUlrf", "btQAVcaPsj7", "XmyNkviXhs8", "MmF1uaXDoSA...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "officia...
[ " We thank the reviewer for engaging. The dataset in Figure 1 is used as an example to aid in the understanding of our setup and how it might relate to the linguistics literature (mapping from a logical to phonetic form). However, we would like to reiterate that from the equations derived in this work we show that ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 3, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 3 ]
[ "9XV5vJ6ZOVI", "IhpkA7IIZAw", "1A356j_nt-O", "R7f7j6m6hU", "XSlSTaC4FtH", "X3IXQnTUqwj", "azHOl_DQJli", "utWQel_tIe", "R7f7j6m6hU", "X3IXQnTUqwj", "Hw4ek6dxXed", "D1UOkEWnZmn", "boOmHdgg8R7", "MmF1uaXDoSA", "Rv_-blHUlrf", "btQAVcaPsj7", "XmyNkviXhs8", "8K3r5lpDN0M", "ot1oXYybCc",...
iclr_2022_HG7vlodGGm
TempoRL: Temporal Priors for Exploration in Off-Policy Reinforcement Learning
Effective exploration is a crucial challenge in deep reinforcement learning. Behavioral priors have been shown to tackle this problem successfully, at the expense of reduced generality and restricted transferability. We thus propose temporal priors as a non-Markovian generalization of behavioral priors for guiding exploration in reinforcement learning. Critically, we focus on state-independent temporal priors, which exploit the idea of temporal consistency and are generally applicable and capable of transferring across a wide range of tasks. We show how dynamically sampling actions from a probabilistic mixture of policy and temporal prior can accelerate off-policy reinforcement learning in unseen downstream tasks. We provide empirical evidence that our approach improves upon strong baselines in long-horizon continuous control tasks under sparse reward settings.
Reject
This paper was close and also very polarizing with the reviewers. On the positive side, some reviewers found: 1. the results impressive 2. the proposed method to be novel, interesting, and to produce good performance across several settings 3. the paper was well written On the other hand, others found: 1. the motivation suspect 2. missing experiments to characterize the sensitivity to numerous hyper-parameters 3. the baselines compared against weak and not representative 4. a significant performance drop comparing the results in the original submission and the new ones added during the discussion period 5. a low number of seeds initially In the end, multiple reviewers raised serious issues regarding the motivation for the approach and the quality and ultimately credibility of the results presented. One of the high-scoring reviewers agreed the paper was a bit misleading (limitations relegated to the appendix). Unfortunately, none of the high-scoring reviewers provided counters to these points.
train
[ "PrWf7ik1tgP", "-yvrMU-w1zY", "DgCoTe-goFx", "iQ4_GAlDuUL", "L3zsToxxBXP", "5cbrL7vX0j_", "GGw8DYZZiY", "49ZNbMm-R8r", "pwsggy-WgBf", "KO3faLM_VSb", "_Muc4sIk4rp", "lqxG-UkI3tr", "HKDpPB96m2", "OUjrLQzQVZG", "T4B5ZkZhMNh", "fw0vEgZG682", "WhxZfnzjzp2", "1zjDUGqUdxY", "b8UMm6XppM8...
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "...
[ "This paper considers the exploration problem in RL and proposes a new type of the behavioral prior, one only conditioned by recent actions. The paper empirically demonstrates the effectiveness of this approach in sparse-reward environments. It also shows that this type of the behavioral prior can be learned in a s...
[ 6, -1, 3, -1, 3, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 8, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ 3, -1, 3, -1, 3, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2022_HG7vlodGGm", "L3zsToxxBXP", "iclr_2022_HG7vlodGGm", "49ZNbMm-R8r", "iclr_2022_HG7vlodGGm", "T4B5ZkZhMNh", "iclr_2022_HG7vlodGGm", "TDtmDogFWor", "OUjrLQzQVZG", "T4B5ZkZhMNh", "iclr_2022_HG7vlodGGm", "nACz2YxTlX", "b8UMm6XppM8", "NyhLz5RxdH2", "1DoRmsK0Twk", "iclr_2022_HG7vlo...
iclr_2022_inA3szzFE5
Spatial Frequency Sensitivity Regularization for Robustness
The ability to generalize to out-of-distribution data is a major challenge for modern deep neural networks. Recent work has shown that deep neural networks latch on to superficial Fourier statistics of the training data and fail to generalize when these statistics change, such as when images are subject to common corruptions. In this paper, we study the frequency characteristics of deep neural networks in order to improve their robustness. We first propose a general measure of a model's $\textit{\textbf{spatial frequency sensitivity}}$ based on its input-Jacobian represented in the Fourier-basis. When applied to deep neural networks, we find that standard minibatch training consistently leads to increased sensitivity towards particular spatial frequencies independent of network architecture. We further propose a family of $\textit{\textbf{spatial frequency regularizers}}$ based on our proposed measure to induce specific spatial frequency sensitivities in a model. In experiments on datasets with out-of-distribution test images arising from various common image corruptions, we find that deep neural networks trained with our proposed regularizers obtain significantly improved classification accuracy while maintaining high accuracy on in-distribution clean test images.
Reject
The reviewers all generally appreciated the idea in the paper. However, the nature of this contribution necessitates an empirical evaluation, and the reviewers generally found this to not be sufficiently convincing. My assessment is that this idea can likely result in a successful publication, but will require additional empirical evaluation and analysis as suggested by reviewers. While the authors did add some additional results during the response period, they do not seem to be sufficient to fully address reviewer concerns.
train
[ "XgeiomiZ_Vs", "xsKJQhQWyKP", "d0kjkGySbp", "6zKyR2PoCna", "WqCjDxuE4kA", "dvtC1IqTvS", "wdCgOBsxBii", "Y-axX7FrNvG", "BLifmMdu9gu", "mDfayuKwVoE", "wG_aVAa1JS", "TIzMLkmnH-", "eH7bCO-lFz" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer, \n\nThank you for your response. FYI, we have responded to comments from Reviewer GEyu. ", " Dear reviewer, \n\nThank you for your response and questions. \n\n> Similarly, if the method presented also lowers clean accuracy, what's to guarantee that the improved corruption accuracy isn't simply ...
[ -1, -1, -1, 6, 3, -1, -1, -1, -1, -1, -1, 5, 6 ]
[ -1, -1, -1, 3, 5, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "d0kjkGySbp", "dvtC1IqTvS", "wG_aVAa1JS", "iclr_2022_inA3szzFE5", "iclr_2022_inA3szzFE5", "mDfayuKwVoE", "iclr_2022_inA3szzFE5", "eH7bCO-lFz", "TIzMLkmnH-", "WqCjDxuE4kA", "6zKyR2PoCna", "iclr_2022_inA3szzFE5", "iclr_2022_inA3szzFE5" ]
iclr_2022_CgV7NVOgDJZ
Guided-TTS:Text-to-Speech with Untranscribed Speech
Most neural text-to-speech (TTS) models require $\langle$speech, transcript$\rangle$ paired data from the desired speaker for high-quality speech synthesis, which limits the usage of large amounts of untranscribed data for training. In this work, we present Guided-TTS, a high-quality TTS model that learns to generate speech from untranscribed speech data. Guided-TTS combines an unconditional diffusion probabilistic model with a separately trained phoneme classifier for text-to-speech. By modeling the unconditional distribution for speech, our model can utilize the untranscribed data for training. For text-to-speech synthesis, we guide the generative process of the unconditional DDPM via phoneme classification to produce mel-spectrograms from the conditional distribution given transcript. We show that Guided-TTS achieves comparable performance with the existing methods without any transcript for LJSpeech. Our results further show that a single speaker-dependent phoneme classifier trained on multispeaker large-scale data can guide unconditional DDPMs for various speakers to perform TTS.
Reject
The paper proposes using unlabelled speech data for TTS by decoupling parts of the model. However, all reviewers agree that the technique is already known and the experimental results are not strong enough to take advantage of training on more data. A reject.
train
[ "s_HpKY4XANT", "HvA743B0dlA", "sVw0JEf0_ZE", "bBy0oKxY4jT", "l9UG8rTSGh", "lMrVd2iofrB", "9o4xEjmKyvd", "Vs2xoqNsX3m", "CfnTyMFg-pr", "Fj0obcxEfA5", "xjNAarzNPIx", "MVu4VFKH_k" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the detailed feedback. This clarifies many of my questions.\n\nUnfortunately I still don't see this good enough for acceptance. I think it needs some rewriting, maybe even title change, also abstract change, to make the motivation more clear because I totally misunderstood this.\n\nAlso, I don't thi...
[ -1, -1, -1, -1, -1, -1, -1, -1, 8, 5, 5, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5, 4 ]
[ "HvA743B0dlA", "MVu4VFKH_k", "MVu4VFKH_k", "MVu4VFKH_k", "xjNAarzNPIx", "Fj0obcxEfA5", "CfnTyMFg-pr", "iclr_2022_CgV7NVOgDJZ", "iclr_2022_CgV7NVOgDJZ", "iclr_2022_CgV7NVOgDJZ", "iclr_2022_CgV7NVOgDJZ", "iclr_2022_CgV7NVOgDJZ" ]
iclr_2022_7WVAI3dRwhR
Adversarial twin neural networks: maximizing physics recovery for physical system
The exact modeling of modern physical systems is challenging due to the expanding system territory and insufficient sensors. To tackle this problem, existing methods utilize sparse regression to find physical parameters or add another virtual learning model like a Neural Network (NN) to universally approximate the unobserved physical quantities. However, the two models can't perfectly play their own roles in joint learning without proper restrictions. Thus, we propose (1) sparsity regularization for the physical model and (2) physical superiority over the virtual model. They together define output boundaries for the physical and virtual models. Further, even the two models output properly, the joint model still can't guarantee learning maximal physical knowledge. For example, if the data of an observed node can linearly represent those of an unobserved node, these two nodes can be aggregated. Therefore, we propose (3) to seek the dissimilarity of physical and virtual outputs to obtain maximal physics. To achieve goals (1)-(3), we design a twin structure of the Physical Neural Network (PNN) and Virtual Neural Network (VNN), where sparse regularization and skip-connections are utilized to guarantee (1) and (2). Then, we propose an adversarial learning scheme to maximize output dissimilarity, achieving (3). We denote the model as the Adversarial Twin Neural Network (ATN). Finally, we conduct extensive experiments over various systems to demonstrate the best performance of ATN over other state-of-the-art methods.
Reject
This paper addresses the identification of physical systems defined on graphs. The authors introduce the Adversarial Twin Neural Network (ATN), which consists in augmenting a simple linear model (PNN) with a virtual neural network (VNN). Some regularization terms are used to enforce maximum prediction from the PNN, and to enforce diverse outputs between PNN and VNN. The paper initially received three rejection recommendations. The main limitations pointed out by reviewers relate to the limited contributions, the limiting assumption of using a linear model for the PNN, the lack of positioning with respect to related works, and needed clarifications on experiments. The authors' rebuttal addressed some of the reviewers' concerns: Rdem1 increased its grade from 3 to 5, and Rdem1 from 5 to 6 - although not willing to champion the paper. R8dT9, who provided a very detailed review and feedback after the rebuttal, still voted for rejection, especially because they were not convinced by the positioning with respect to recent related works and by the answers on experiments. The AC's own reading confirmed the issues essentially raised by R8dT9 and the other reviewers. In particular, the AC considers that: - The contributions for driving a proper cooperation between the PNN and VNN models are weak, since they reduce to using simple skip connections and adversarial training. - The importance of these aspects has not been analysed in depth in the revised version of the paper, neither theoretically nor experimentally: for example, the difference with respect to [Yin+ 2021] for a proper augmentation, the discussion of alternative methods for representing diversity as done in [Rame & Cord 2021], or the positioning with respect to Wasserstein distance-based objectives. - There remain ambiguities in the cross-validation process, which have not been addressed in the rebuttal. Therefore, the AC recommends rejection.
train
[ "_-S7r_iuPPm", "VZ2b8xGbTIm", "Hmw7IUgPr1t", "Q4uhbne5lkQ", "jy8_-B9FyeG", "0WLVR1FHjVC", "tsNqBXKeY1A", "LpnvC-egIJF", "jO5cM6z-KYU", "ekyY2hPiC2H" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ " after reading the answers to my question 1 and 2, it is still not clear to me which is a set, which is a variable. symbols y_o and y_u are also confusing.", "The paper proposes ATN to model and identify physical systems. Here, the physical systems are grids consisting of measurement sensors: because the sensor ...
[ -1, 6, -1, 5, -1, -1, -1, -1, -1, 5 ]
[ -1, 3, -1, 4, -1, -1, -1, -1, -1, 2 ]
[ "LpnvC-egIJF", "iclr_2022_7WVAI3dRwhR", "jO5cM6z-KYU", "iclr_2022_7WVAI3dRwhR", "tsNqBXKeY1A", "iclr_2022_7WVAI3dRwhR", "Q4uhbne5lkQ", "ekyY2hPiC2H", "VZ2b8xGbTIm", "iclr_2022_7WVAI3dRwhR" ]
iclr_2022_yztpblfGkZ-
Graph Convolutional Networks via Adaptive Filter Banks
Graph convolutional networks have been a powerful tool in representation learning of networked data. However, most architectures of message passing graph convolutional networks (MPGCNs) are limited as they employ a single message passing strategy and typically focus on low-frequency information, especially when graph features or signals are heterogeneous in different dimensions. Moreover, existing spectral graph convolutional operators lack a proper sharing scheme between filters, which may result in overfitting problems due to numerous parameters. In this paper, we present a novel graph convolution operator, termed BankGCN, which extends the capabilities of MPGCNs beyond single `low-pass' features and simplifies spectral methods with a carefully designed sharing scheme between filters. BankGCN decomposes multi-channel signals on arbitrary graphs into subspaces and shares adaptive filters to represent information in each subspace. The filters of all subspaces differ in frequency response and together form a filter bank. The filter bank and the signal decomposition make it possible to adaptively capture diverse spectral characteristics of graph data for target applications with a compact architecture. We finally show through extensive experiments that BankGCN achieves excellent performance on a collection of benchmark graph datasets.
Reject
The paper proposes a graph convolution operator (BankGCN) to be used in graph neural networks. The reviewers mainly raised concerns about the limited novelty in light of numerous previous works that are similar or address similar problems, as well as about the lacking evaluation. While the rebuttal addressed some of the concerns, the overall impression is that the paper is not of sufficient methodological or experimental significance for the conference.
train
[ "bhFCTJlXZlR", "LDsOHC8W4L", "domSFhgRgAE", "tBKJ7QMQZ-0", "VgIBafCrSwO", "iJrh7WxWl70", "6X5UAypczuU", "ggUqP3ciihO", "TL8G3oIFOC4", "JfFG1rpsyTM" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a graph convolutional networks where, at each layer, the graph signal is decomposed with an adaptive filter bank. Strengths:\nGraph neural networks are timely and of great interest to researchers.\n\nWeaknesses:\n\nIntegrating filter banks in graph neural networks is not new. There has been man...
[ 6, -1, -1, -1, -1, -1, 5, 3, 5, 3 ]
[ 4, -1, -1, -1, -1, -1, 3, 4, 3, 5 ]
[ "iclr_2022_yztpblfGkZ-", "VgIBafCrSwO", "JfFG1rpsyTM", "TL8G3oIFOC4", "ggUqP3ciihO", "6X5UAypczuU", "iclr_2022_yztpblfGkZ-", "iclr_2022_yztpblfGkZ-", "iclr_2022_yztpblfGkZ-", "iclr_2022_yztpblfGkZ-" ]
iclr_2022_b-VKxdc5cY
Distribution Matching in Deep Generative Models with Kernel Transfer Operators
Generative models which use explicit density modeling (e.g., variational autoencoders, flow-based generative models) involve finding a mapping from a known distribution, e.g. Gaussian, to the unknown input distribution. This often requires searching over a class of non-linear functions (e.g., representable by a deep neural network). While effective in practice, the associated runtime/memory costs can increase rapidly, usually as a function of the performance desired in an application. We propose a substantially cheaper (and simpler) distribution matching strategy based on adapting known results on kernel transfer operators. We show that our formulation enables highly efficient distribution approximation and sampling, and offers surprisingly good empirical performance that compares favorably with powerful baselines, but with significant runtime savings. We show that the algorithm also performs well in small sample size settings (in brain imaging).
Reject
The Perron-Frobenius operator (P) is a well-known tool which maps the density of a dynamical system at time t (p_t) to that at t+1 (p_{t+1}): p_{t+1} = P p_t. The idea has recently been extended (kernel Perron-Frobenius operator (kPF); Klus et al. 2020) to map a probability measure p_Z to p_X via covariance operators (4) associated with a reproducing kernel; this corresponds to the transformation of the kernel mean embedding of Z to that of X, as recalled in (5). The authors use the kPF technique in generative modelling to map the known prior (p_Z) to the data-generating distribution (p_X), and illustrate the idea numerically. While the focus of the paper (generative modelling) is relevant, the reviewers had several severe issues with the submission: 1) the manuscript lacks clarity of presentation at multiple points, 2) the reviewers had concerns with the scalability of the approach (which unfortunately has not been analyzed), 3) the submission is a straightforward application of a well-established tool (kPF) in the literature; the paper lacks novelty. Significantly more effort is required before publication.
train
[ "MRYrYigTfJo", "vQxD_JjAHSH", "mFm1jeqlfxz", "dxZXiAKrf5A", "uBQREjb-nwS", "jFBs_xIjGxq", "nd6G2Rmf2P5", "DAjabKtcev1", "7--fS4KMz5l", "4tiMARLwQj3", "udsnwC4aJq", "1hnZO83K5jM", "rVxMpDV1L5z", "wKGw1mLQ-R", "qlOb7vKeOIR", "rTkZM1NF7xm", "14cad_6mrBt", "F6d6TCyjDZ2", "HVVqd564AN"...
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for the detailed response. I'm confirming that I have read the rebuttal, updated version of the PDF, and the other reviewers' responses as well. The explanations cleared up some of my confusion, so I will keep my score as is.", " Dear reviewer hWmF,\n\nThank you for your valuable feedback. We agree th...
[ -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, 6, 6 ]
[ -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, 3, 3 ]
[ "7--fS4KMz5l", "nd6G2Rmf2P5", "jFBs_xIjGxq", "iclr_2022_b-VKxdc5cY", "iclr_2022_b-VKxdc5cY", "DAjabKtcev1", "4tiMARLwQj3", "uBQREjb-nwS", "HVVqd564AN", "udsnwC4aJq", "F6d6TCyjDZ2", "wKGw1mLQ-R", "iclr_2022_b-VKxdc5cY", "qlOb7vKeOIR", "rTkZM1NF7xm", "14cad_6mrBt", "rVxMpDV1L5z", "ic...
iclr_2022_7zc05Ua_HOK
Learning Sample Reweighting for Adversarial Robustness
There has been great interest in enhancing the robustness of neural network classifiers to defend against adversarial perturbations through adversarial training, while balancing the trade-off between robust accuracy and standard accuracy. We propose a novel adversarial training framework that learns to reweight the loss associated with individual training samples based on a notion of class-conditioned margin, with the goal of improving robust generalization. Inspired by MAML-based approaches, we formulate weighted adversarial training as a bilevel optimization problem where the upper-level task corresponds to learning a robust classifier, and the lower-level task corresponds to learning a parametric function that maps from a sample's \textit{multi-class margin} to an importance weight. Extensive experiments demonstrate that our approach improves both clean and robust accuracy compared to related techniques and state-of-the-art baselines.
Reject
The paper introduces a meta-learning approach for re-weighting samples for better adversarial robustness. Specifically, the authors parameterize the weights using an additional module and learn it with the MAML objective. I have read the paper and reviews carefully myself and found that the paper has several weaknesses that are not well addressed in the rebuttal. 1) Limited novelty. The proposed approach is a direct adaptation of the classical MAML algorithm to adversarial training, which is of limited technical novelty, as pointed out by several reviewers. 2) Adaptive attack experiments are incomplete. The proposed BiLAW relies on an additional reweighting module in the training stage, though it is not used at test time. But we can still use an independently learned reweighting head for an adaptive attack, which should be considered among the white-box attacks. We do not want to see the newly proposed defense defeated by other attacks quickly. 3) The true performance of BiLAW is questionable. Table 1 on MNIST is not representative of current developments in the adversarial community. In Table 2, comparing BiLAW with TRADES for AA (0.031): 45.3% vs 51.7% on small-CNN and 51.4% vs 52.1% on WRN-32-10. The performance of BiLAW is lower than TRADES, and when the two are combined, the result is naturally higher than either TRADES or BiLAW alone, but it is unclear which component contributes the gains. For example, we can say TRADES benefits BiLAW because BiLAW+TRADES (52.6%) is much higher than BiLAW (45.3%). Also, the authors did not show the results of BiLAW alone on CIFAR-100. Considering the roughly two-times-longer running time, this performance is not acceptable among adversarial training methods. Due to the above reasons, I cannot recommend acceptance of the current version to ICLR.
train
[ "dC4Sjf6bCVy", "VNpv9YOyEkc", "y7dBZ_ZF2bw", "SqVAV5TJ15", "ZlRLKOPb8Sv", "IIcVjWW3W1J", "QXNnAyaaud8", "ifQ-UP0w-U", "02nBRLSgFQl", "kFHDBSQ7kI", "3bjKADziBLQ", "nlE56-kFJfG", "mULYCXvK-vg", "zQqYAQDze4T", "FkmEzN8wkrU", "-oIlyvAnR4G", "fBOxJy5TJxc", "piSb_4EkhV", "Rl9_QlA79Oh" ...
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "...
[ " We thank the reviewer for their constructive and helpful comments. To clarify adaptive attacks, we discuss two cases: when the weights are treated as constants and for the case when the weights are treated as a function of the classifier and labels. We perform a preliminary experiment to explore the feasibility o...
[ -1, 3, -1, -1, 6, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 8 ]
[ -1, 4, -1, -1, 4, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "VNpv9YOyEkc", "iclr_2022_7zc05Ua_HOK", "QXNnAyaaud8", "kFHDBSQ7kI", "iclr_2022_7zc05Ua_HOK", "zQqYAQDze4T", "mULYCXvK-vg", "iclr_2022_7zc05Ua_HOK", "fBOxJy5TJxc", "nlE56-kFJfG", "iclr_2022_7zc05Ua_HOK", "Rl9_QlA79Oh", "piSb_4EkhV", "ZlRLKOPb8Sv", "fBOxJy5TJxc", "ifQ-UP0w-U", "iclr_2...
iclr_2022_WnOLO1f50MH
Exploiting Redundancy: Separable Group Convolutional Networks on Lie Groups
Group convolutional neural networks (G-CNNs) have been shown to increase parameter efficiency and model accuracy by incorporating geometric inductive biases. In this work, we investigate the properties of representations learned by regular G-CNNs, and show considerable parameter redundancy in group convolution kernels. This finding motivates further weight-tying by sharing convolution kernels over subgroups. To this end, we introduce convolution kernels that are separable over the subgroup and channel dimensions. In order to obtain equivariance to arbitrary affine Lie groups we provide a continuous parameterisation of separable convolution kernels. We evaluate our approach across several vision datasets, and show that our weight sharing leads to improved performance and computational efficiency. In many settings, separable G-CNNs outperform their non-separable counterpart, while only using a fraction of their training time. In addition, thanks to the increase in computational efficiency, we are able to implement G-CNNs equivariant to the $\mathrm{Sim(2)}$ group; the group of dilations, rotations and translations. $\mathrm{Sim(2)}$-equivariance further improves performance on all tasks considered.
Reject
The authors study separable convolutions in the group-convolutional setting, describe experiments showing them to be more computationally efficient without loss of performance on some group-augmented MNISTs, and show some promising results on un-augmented CIFAR10, CIFAR100, and Galaxy10. The reviewers are mixed; some of the reviewers have concerns about the completeness of the experiments and the novelty of the work, and in particular about the extent to which the experiments support the specific novelties claimed. The authors have made some updates to address this in the revision, but my opinion is that the authors should resubmit to the next venue after further experiments and additional exposition to clarify these points.
train
[ "oUlRMSoE_x", "Vvf4jf2REsM", "efJjDc4wrJy", "RKbkj9RlhFM", "6EtraR8us0Z", "9Zegu6DE_Uu", "Z-eXUVQStH5", "s0c1IuAsXIB", "0MnZCQB-s0H", "uPjhfeXAyUM", "V2NviBBVvEQ", "Awyy9ZA3lxn", "oJMPPge2eTl", "pon64jH62ZF", "V_TwZFApDVh", "kzW_aZb_J7J", "q4SC2VR1Kg-", "JYekiBIF2t", "PTQfuw_hx3E...
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper discusses considerable parameter redundancy in regular group convolution networks and then proposes separable convolution kernels to share the weights over the subgroups. Besides, the authors explored the equivariance of three different groups and presented a continuous parameterization scheme. Evaluati...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 5 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "iclr_2022_WnOLO1f50MH", "xFQ9BH0hCHu", "iclr_2022_WnOLO1f50MH", "oJMPPge2eTl", "JYekiBIF2t", "PTQfuw_hx3E", "iclr_2022_WnOLO1f50MH", "RKbkj9RlhFM", "9Zegu6DE_Uu", "efJjDc4wrJy", "oJMPPge2eTl", "oJMPPge2eTl", "xFQ9BH0hCHu", "V_TwZFApDVh", "N1kqS_Ta9qT", "6EtraR8us0Z", "JYekiBIF2t", ...
iclr_2022_FpKgG31Z_i9
Learning Rate Grafting: Transferability of Optimizer Tuning
In the empirical science of training large neural networks, the learning rate schedule is a notoriously challenging-to-tune hyperparameter, which can depend on all other properties (architecture, optimizer, batch size, dataset, regularization, ...) of the problem. In this work, we probe the entanglements between the optimizer and the learning rate schedule. We propose the technique of optimizer grafting, which allows for the transfer of the overall implicit step size schedule from a tuned optimizer to a new optimizer, preserving empirical performance. This provides a robust plug-and-play baseline for optimizer comparisons, leading to reductions to the computational cost of optimizer hyperparameter search. Using grafting, we discover a non-adaptive learning rate correction to SGD which allows it to train a BERT model to state-of-the-art performance. Besides providing a resource-saving tool for practitioners, the invariances discovered via grafting shed light on the successes and failure modes of optimizers in deep learning.
Reject
The paper proposes trained ML oracles to find the descent direction and step size in optimization, a process the authors call grafting. Reviewers raised several concerns about the reliability of ML oracles in general settings, which are valid. The rebuttal could not convince the reviewers to change their opinion. Ideally, for an empirical-only paper that relies heavily on ML for critical decisions, meeting the high bar of ICLR requires experiments on diverse datasets and settings (5-10 datasets or more), as well as a discussion of when and how the method fails. In that sense the paper does not meet the bar for publication.
train
[ "oO9VWLfDTaa", "1HPYRLJBdu4", "xg9XffdAO7J", "CCc9qLyWZB", "Ci4QsUCO9xH", "vBbKubhy9k", "GrpHofCjiuF", "XMidHaweD3", "a1KBb1Sh3AT" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors propose learning rate grafting as a method to explore the power and dynamics of optimizers. Learning rate grafting partitions the parameters of the networks into groups, and for each group takes the direction of the weight update from one optimizer and the magnitude from another optimizer. The paper th...
[ 8, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ 3, -1, -1, -1, -1, -1, 5, 3, 5 ]
[ "iclr_2022_FpKgG31Z_i9", "xg9XffdAO7J", "a1KBb1Sh3AT", "oO9VWLfDTaa", "XMidHaweD3", "GrpHofCjiuF", "iclr_2022_FpKgG31Z_i9", "iclr_2022_FpKgG31Z_i9", "iclr_2022_FpKgG31Z_i9" ]
iclr_2022_yhjfOvBvvmz
Weakly-Supervised Learning of Disentangled and Interpretable Skills for Hierarchical Reinforcement Learning
Hierarchical reinforcement learning (RL) usually requires task-agnostic and interpretable skills that are applicable to various downstream tasks. While many recent works have proposed to learn such skills for a policy in an unsupervised manner, the learned skills are still uninterpretable. To alleviate this, we propose a novel WEakly-supervised learning approach for learning Disentangled and Interpretable Skills (WEDIS) from the continuous latent representations of trajectories. We accomplish this by extending a trajectory variational autoencoder (VAE) to impose an inductive bias with weak labels, which explicitly enforces the trajectory representations to be disentangled into factors of interest that we intend the model to learn. Given the latent representations as skills, a skill-based policy network is trained to generate trajectories similar to those of the learned decoder of the trajectory VAE. Additionally, we propose to train the policy network with single-step transitions and to perform trajectory-level behaviors at test time using knowledge of the skills, which simplifies the exploration problem during training. With a sample-efficient planning strategy based on the skills, we demonstrate that our method is effective in solving hierarchical RL problems in experiments on several challenging navigation tasks with long horizons and sparse rewards.
Reject
The authors present a method for learning disentangled skill representations that uses weak supervision. The reviewers mentioned that the paper tackles an important problem, delivers interesting and novel visualizations of the learned skills, and positions the paper well in the context of related work. The reviewers also raise several points of criticism: the complexity of the method, the lack of convincing comparisons to baselines that utilize the same amount of data, and the quality of writing, among others. I encourage the authors to address these points in a future version of the paper.
train
[ "rAQ0S6WEI3j", "NaCKyIrNXQF", "cWQhNvGGmVj", "_5D1fw1gP9F", "BoBVg6wyVVg", "JjRI8H5QQkV", "WMsGgaYKv0t", "qOmHWaOubiB", "nGcwDLt5-Pe", "4mo2fD3mcyI", "jfEllfls-s-", "iZxk2MphfDO" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for carefully addressing my concerns and misunderstandings. The explanations provided are reasonable, but I believe some of them could be made more convincing with simple minimalist experiments. For instance, what happens when feeding both z^single and z^multi in the policy? What happens if we...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 3, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 3 ]
[ "cWQhNvGGmVj", "WMsGgaYKv0t", "iZxk2MphfDO", "4mo2fD3mcyI", "nGcwDLt5-Pe", "4mo2fD3mcyI", "4mo2fD3mcyI", "jfEllfls-s-", "iclr_2022_yhjfOvBvvmz", "iclr_2022_yhjfOvBvvmz", "iclr_2022_yhjfOvBvvmz", "iclr_2022_yhjfOvBvvmz" ]
iclr_2022_dEOeQgQTyvt
Structured Energy Network as a dynamic loss function: A case study with multi-label Classification
Structured prediction energy networks (SPENs) (Belanger & McCallum, 2016; Gygli et al., 2017) have shown that a neural network (i.e., an energy network) can learn a reasonable energy function over candidate structured outputs. We propose SEAL, which utilizes this energy network as a trainable loss function for a simple feedforward network. We find that rather than using a SPEN as a prediction network, using it as a trainable loss function is not only computationally efficient but also results in higher performance compared to SPENs in both training and inference time. As the energy loss function is trainable, we propose SEAL to be dynamic: it can adapt the energy function to focus on the region where the feedforward model will be affected most. We find this to be effective in an ablation study (§4) comparing SEAL to a static version where the energy function is fixed after pretraining. We show the relation to previous work on the joint optimization of an energy network and a feedforward model (INFNET): it is equivalent to SEAL using a margin-based loss if INFNET relaxes its loss function. Based on the unique architecture of SEAL, we further propose a variant that utilizes a noise contrastive estimation (NCE) ranking loss, which by itself does not perform well as a structured energy network but, embodied in SEAL, shows the greatest performance among the variants we study. We demonstrate the effectiveness of SEAL on 7 feature-based and 3 text-based multi-label classification datasets. The best version of SEAL, which uses the NCE ranking method, achieves respective F1 gains of +2.85 and +2.23 points on average over cross-entropy and INFNET on the feature-based datasets, excluding one outlier with an excessive gain of +50.0 F1 points. Lastly, examining whether the proposed framework is effective on a large pre-trained model as well, we observe SEAL achieving a +0.87 F1 point gain on average on top of a BERT-based adapter model on the text datasets.
Reject
This paper proposes a new method for multi-label classification, which leverages the advantages of energy-based models. However, one reviewer and the area chair have two serious concerns about the experiments: (1) the proposed method is only evaluated on low-dimensional datasets; (2) some important baseline methods are missing, which makes the comparison unconvincing. I suggest that the authors evaluate their method on more datasets, and add results from well-known multi-label classification methods for comparison.
train
[ "GvYmybxcyTp", "j-QfuuqcsX", "jJ617M74xHG", "r0OMZB_AK8y", "OZCyXeBDGhw", "_wCaJUePSIj", "H4HFxnNri9g", "N6cTKqgx3h", "pmKLhL_azso", "dcSJcpQVLhp", "8FXq4BdKpc4", "sH9r3VT4csX", "S6rXAMUusb", "N5XEL1EPaZ", "tI65NW1nn_A", "d40DLuvFM2R", "yENSBD1DDgR", "BfUAu8S_Y0f", "Gd00kWbbDd_",...
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "...
[ " In terms of novelty:\n\nThis work takes the STEN work, but changes it so that both energy and predictive neural network are optimized jointly (as opposed to separately in the past). To me, this represents limited novelty compared to [Tu et al., 2020], as the only difference is to optimize both objective at the sa...
[ -1, -1, 3, -1, 6, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ -1, -1, 4, -1, 3, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "j-QfuuqcsX", "r0OMZB_AK8y", "iclr_2022_dEOeQgQTyvt", "dcSJcpQVLhp", "iclr_2022_dEOeQgQTyvt", "N6cTKqgx3h", "Gd00kWbbDd_", "OZCyXeBDGhw", "LC4ZDroRMek", "N5XEL1EPaZ", "S6rXAMUusb", "iclr_2022_dEOeQgQTyvt", "sH9r3VT4csX", "jJ617M74xHG", "iclr_2022_dEOeQgQTyvt", "tI65NW1nn_A", "jJ617M7...
iclr_2022_QguFu30t0d
FedGEMS: Federated Learning of Larger Server Models via Selective Knowledge Fusion
Today data is often scattered among billions of resource-constrained edge devices with security and privacy constraints. Federated Learning (FL) has emerged as a viable solution to learn a global model while keeping data private, but the model complexity of FL is impeded by the computation resources of edge nodes. In this work, we investigate a novel paradigm to take advantage of a powerful server model to break through the model capacity limit in FL. By selectively learning from multiple teacher clients and itself, a server model develops in-depth knowledge and transfers its knowledge back to clients in return to boost their respective performance. Our proposed framework achieves superior performance on both server and client models and provides several advantages in a unified framework, including flexibility for heterogeneous client architectures, robustness to poisoning attacks, and communication efficiency between clients and server. By bridging FL effectively with larger server model training, our proposed paradigm paves the way for robust and continual knowledge accumulation from distributed and private data.
Reject
This paper proposes a knowledge distillation strategy to enable the use of a large server-side model in federated learning while satisfying the computation constraints of resource-limited clients. The problem is relevant and well-motivated, and the paper presents compelling experimental results to support the proposed strategy. However, reviewers had the following major comments/suggestions: 1) the theoretical analysis section needs improvement in terms of technical depth and rigor; 2) better explanation of how the proposed strategy compares with previous works/baselines; 3) consideration of the privacy and scalability properties of the proposed strategy. The paper generated lots of constructive post-rebuttal discussions between the authors and the reviewers, and I believe the authors received several ideas to improve the work and appreciated the reviews. One of the reviewers increased their score. However, based on the current scores, I still recommend rejection. I do think the paper has promise, and with improvements, the revised version will make an excellent contribution.
train
[ "7yLx5kHQ0CM", "Nv6qQgs01Ub", "l5GolQtg2cH", "tWr6G3VA0mR", "BhYRgGQiUzO", "rYWfmJFxDH", "L4Ebj3lbNdD", "YRMvhmDIIhD", "N-gzbsg2glK", "FavOlBNgiJ", "Uf3zhFJR9Pe", "N-heeqogml", "rHqnWk6mhTT", "YNqc8S95sDV", "kWswGnYtI1A", "zGl-ZoRrcjy", "auLehcDTSRO", "xA4YJanwOQ", "SHUaQg1UQXH",...
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "o...
[ "This paper aims to enable training of higher accuracy models by training a high-capacity model at the server and lower capacity models at the clients. Knowledge is shared via distillation on a weighted average of logits provided by the clients, rather than by averaging model parameters or model differences. The ap...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2022_QguFu30t0d", "l5GolQtg2cH", "tWr6G3VA0mR", "lY2-d18gOFi", "N-heeqogml", "YNqc8S95sDV", "auLehcDTSRO", "xA4YJanwOQ", "kWswGnYtI1A", "SHUaQg1UQXH", "iclr_2022_QguFu30t0d", "Uf3zhFJR9Pe", "bh7pVgerkfl", "tkiDn4LJVu", "aOI_YFLP9_m", "iclr_2022_QguFu30t0d", "aMgB9gjUeRh", "Eo...
iclr_2022_vdKncX1WclT
Red Alarm for Pre-trained Models: Universal Vulnerability to Neuron-Level Backdoor Attacks
The pre-training-then-fine-tuning paradigm has been widely used in deep learning. Due to the huge computation cost for pre-training, practitioners usually download pre-trained models from the Internet and fine-tune them on downstream datasets while the downloaded models may suffer backdoor attacks. Different from previous attacks aiming at a target task, we show that a backdoored pre-trained model can behave maliciously in various downstream tasks without foreknowing task information. Attackers can restrict the output representations of trigger-embedded samples to arbitrary predefined values through additional training, namely Neuron-level Backdoor Attack (NeuBA). Since fine-tuning has little effect on model parameters, the fine-tuned model will retain the backdoor functionality and predict a specific label for the samples embedded with the same trigger. To provoke multiple labels in a specific task, attackers can introduce several triggers with contrastive predefined values. In the experiments of both natural language processing (NLP) and computer vision (CV), we show that NeuBA can well control the predictions for trigger-embedded instances with different trigger designs. Our findings sound a red alarm for the wide use of pre-trained models. Finally, we apply several defense methods to NeuBA and find that model pruning is a promising technique to resist NeuBA by omitting backdoored neurons.
Reject
This work proposes to insert backdoors into pre-trained models, such that downstream tasks can be attacked. One of the main issues indicated by most reviewers is that some important and closely related works, which also studied backdoor attacks on pre-trained models, are missing and not compared against. The authors argued in the rebuttal that these missed works require some instances of downstream tasks, while the proposed method in this work does not. However, this difference cannot justify omitting and not comparing with them. Besides, most reviewers also indicated insufficient experiments, such as the limited set of defense methods, and that some experimental results are not well explained. After reading the manuscript, the reviews, and the discussions between the reviewers and authors, I think this work is not ready for publication. The reviewers' comments should be helpful for improving this work.
train
[ "zOz1t0URv06", "pHOZ6VNUswt", "KRPIIvo5aV4", "BJG7WT-uxA", "xLtQUU9DSch", "K720aCuzYM", "1UThbWjK55f", "5Ktcla5I8Sf", "rC7U_cPtovI", "0BN_rpWvJYC", "S1A2PvK-qAn" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper shows that a backdoored pre-trained model can behave maliciously in various downstream tasks without foreknowing task information. Instead of building up connections between triggers and target labels, this paper explores to assign predefined output representations to triggers. Also, to avoid all trigge...
[ 6, -1, -1, -1, -1, -1, -1, -1, 3, 3, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2022_vdKncX1WclT", "iclr_2022_vdKncX1WclT", "zOz1t0URv06", "S1A2PvK-qAn", "S1A2PvK-qAn", "rC7U_cPtovI", "0BN_rpWvJYC", "0BN_rpWvJYC", "iclr_2022_vdKncX1WclT", "iclr_2022_vdKncX1WclT", "iclr_2022_vdKncX1WclT" ]
iclr_2022_XK4GN6UCTfH
MS$^2$-Transformer: An End-to-End Model for MS/MS-assisted Molecule Identification
Mass spectrometry (MS) acts as an important technique for measuring the mass-to-charge ratios of ions and identifying the chemical structures of unknown metabolites. Practically, tandem mass spectrometry (MS/MS), which couples multiple standard MS in series and outputs fine-grained spectra with fragmental information, has been popularly used. Manually interpreting the MS/MS spectrum into molecules (i.e., the simplified molecular-input line-entry system, SMILES) is often costly and cumbersome, mainly due to the synthesis and labeling of isotopes and the requirement of expert knowledge. In this work, we regard molecule identification as a spectrum-to-sequence conversion problem and propose an end-to-end model, called MS$^2$-Transformer, to address this task. The chemical knowledge, defined through a fragmentation tree from the MS/MS spectrum, is incorporated into MS$^2$-Transformer. Our method achieves state-of-the-art results on two widely used benchmarks in molecule identification. To the best of our knowledge, MS$^2$-Transformer is the first machine learning model that can accurately identify structures (e.g., molecular graphs) from experimental MS/MS rather than chemical formulas/categories only (e.g., C$_6$H$_{12}$O$_6$/organic compound), demonstrating its great application potential in biomedical studies.
Reject
In the end, all reviewers leaned towards rejection. The main concerns were missing methodological depth and questions regarding the experimental evaluation (an unclear link between experimental outcomes and methodological details). The rebuttal was not perceived as fully convincing, and ultimately nobody wanted to champion this paper. I think that this work has some potential, but in its present form it does not seem to be ready for publication.
train
[ "xUHBo17QLV", "FcRsV_Ak-R_", "9jKHJrCOZ8u", "kvZuq7Y9tlH", "kGPznmEn0T", "pDTViK_xEE2", "NHv1nTVpyYt" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for explaining how you split the spectrum. However, I don't think an average intra-molecule cosine of 0.38 means you should randomly split the dataset. The cosine similarity between intra-molecule spectra could be low due to the different intensity distribution over peaks. However, the occurrence of peaks ...
[ -1, -1, -1, -1, 5, 5, 5 ]
[ -1, -1, -1, -1, 4, 3, 5 ]
[ "FcRsV_Ak-R_", "NHv1nTVpyYt", "pDTViK_xEE2", "kGPznmEn0T", "iclr_2022_XK4GN6UCTfH", "iclr_2022_XK4GN6UCTfH", "iclr_2022_XK4GN6UCTfH" ]
iclr_2022_ijygjHyhcFp
Anarchic Federated Learning
Present-day federated learning (FL) systems deployed over edge networks consist of a large number of workers with high degrees of heterogeneity in data and/or computing capabilities, which calls for flexible worker participation in terms of timing, effort, data heterogeneity, etc. To achieve these goals, in this work, we propose a new FL paradigm called ``Anarchic Federated Learning'' (AFL). In stark contrast to conventional FL models, each worker in AFL has complete freedom to choose i) when to participate in FL, and ii) the number of local steps to perform in each round based on its current situation (e.g., battery level, communication channels, privacy concerns). However, AFL also introduces significant challenges in algorithmic design because the server needs to handle the chaotic worker behaviors. Toward this end, we propose two Anarchic Federated Averaging (AFA) algorithms with two-sided learning rates for both cross-device and cross-silo settings, which are named AFA-CD and AFA-CS, respectively. Somewhat surprisingly, even with general worker information arrival processes, we show that both AFL algorithms achieve the same convergence rate order as the state-of-the-art algorithms for conventional FL. Moreover, they retain the highly desirable {\em linear speedup effect} in the new AFL paradigm. We validate the proposed algorithms with extensive experiments on real-world datasets.
Reject
This paper on 'anarchic' federated learning (FL) envisions an FL framework where edge clients can act independently instead of their participation being controlled by a central server. The idea is certainly promising; however, the reviewers pointed out the following main issues: 1) technical gaps in the theoretical analysis need to be addressed; 2) the bounded delay assumptions are too strong and mismatch the 'anarchic' goal of the framework; 3) the linear speed-up claim should be better explained and justified. The paper generated lots of post-rebuttal discussion. However, the concerns about the theoretical analysis still remain, and therefore I recommend rejection. I hope the authors will take these constructive comments into account when revising the paper.
train
[ "CpBEVOlu5SF", "uXsEKKc4Elz", "K2PQWksCinS", "85JgdCUc1Yo", "blo2dsO2tvs", "PvXMQ3ZET-q", "w3pqMG3x12c", "WNu38wGeoOE", "wf3Jn02vOGd", "-2md7_uiVpm", "VO1YFv6lcFR", "b3JDzqF34y", "jtJN28sx513", "eBhzKTnT4Sz", "NaZScNWSXdx", "P3ZHSIC93tQ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "In order to address the issues of FedAvg (e.g. straggler, waste computation, slow convergence), this paper proposes a new federated training scheme \"anarchic federated learning\" (AFL) as an alternative. Instead of uniformly sampling participant clients, AFL let all workers to decide their number of local steps, ...
[ 6, -1, 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ 3, -1, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2022_ijygjHyhcFp", "jtJN28sx513", "iclr_2022_ijygjHyhcFp", "iclr_2022_ijygjHyhcFp", "PvXMQ3ZET-q", "w3pqMG3x12c", "K2PQWksCinS", "wf3Jn02vOGd", "CpBEVOlu5SF", "VO1YFv6lcFR", "b3JDzqF34y", "85JgdCUc1Yo", "NaZScNWSXdx", "P3ZHSIC93tQ", "eBhzKTnT4Sz", "iclr_2022_ijygjHyhcFp" ]
iclr_2022_NuzF7PHTKRw
EAT-C: Environment-Adversarial sub-Task Curriculum for Efficient Reinforcement Learning
Reinforcement learning (RL)'s efficiency can drastically degrade on long-horizon tasks due to sparse rewards, and the RL policy can be fragile to small changes in deployed environments. To improve RL's efficiency and generalization to varying environments, we study how to automatically generate a curriculum of tasks with coupled environments for RL. To this end, we train two curriculum policies together with RL: (1) a co-operative planning policy recursively decomposing a hard task into coarse-to-fine sub-task sequences as a tree; and (2) an adversarial policy modifying the environment (e.g., position/size of obstacles) in each sub-task. They are complementary in acquiring more informative feedback for RL: the planning policy provides dense rewards for finishing easier sub-tasks, while the environment policy modifies these sub-tasks to be adequately challenging and diverse so the RL agent can quickly adapt to different tasks/environments. On the other hand, they are trained using the RL agent's dense feedback on sub-tasks, so the sub-task curriculum remains adaptive to the agent's progress via this ``iterative mutual-boosting'' scheme. Moreover, the sub-task tree naturally enables an easy-to-hard curriculum for every policy: its top-down construction gradually increases the sub-tasks the planning policy needs to generate, while the adversarial training between the environment policy and the RL policy follows a bottom-up traversal that starts from a dense sequence of easier sub-tasks, allowing more frequent modifications to the environment. Therefore, jointly training the three policies leads to efficient RL guided by a curriculum that progressively improves the sparse reward and generalization. We compare our method with popular RL/planning approaches targeting similar problems, as well as approaches with environment generators or adversarial agents.
Thorough experiments on diverse benchmark tasks demonstrate significant advantages of our method on improving RL's efficiency and generalization.
Reject
I thank the authors for their submission and active participation in the discussions. This paper is borderline, with three reviewers leaning towards acceptance [3c96, 7T33, Zhvq] and one leaning towards rejection [o38w]. Reviewer o38w's main concerns are about the lack of details on how the baselines were tuned and missing training details (specifically the connectivity test used to reject candidate environments). During the discussion, both reviewers Zhvq and 7T33 agreed that the paper requires substantial restructuring/rewriting to properly address the reviewers' feedback, which is currently mostly addressed in the appendix. Based on the discussion with the reviewers, my assessment is that this paper is not ready for publication at this point and that it would benefit greatly from another iteration. I want to very strongly encourage the authors to further improve their paper based on the reviewer feedback.
train
[ "Juf4Y1Jfos6", "9VcbimKro-Z", "RMe3MGacxTh", "mhHOwH90L0", "ueFaQZdxmEP", "fGc-xhX3aTi", "xmIpeAYl_BF", "FxEEN_wBuHM", "Nd3E2vY9ql", "7GQgFh7vYRo", "ejic27wz6aS", "9FHzd4qmbZ9", "XDxBcq7cRZ", "G6v5qtOWWp", "pKg9nj0OJDq", "0NaBjvPw9g", "FvwJy7yv7Pd", "Tobilr3Gc21", "Grqh6FkwIiY", ...
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_rev...
[ " We are glad to hear that several main concerns raised by you have been addressed by our reply and new experiments! Here is our reply to your remaining concerns:\n\n- A complete grid search over all the listed hyperparameters is too costly. Hence, we partitioned these hyperparameters into four groups of strongly r...
[ -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "ueFaQZdxmEP", "9FHzd4qmbZ9", "xmIpeAYl_BF", "9VcbimKro-Z", "fGc-xhX3aTi", "ejic27wz6aS", "7GQgFh7vYRo", "iclr_2022_NuzF7PHTKRw", "XDxBcq7cRZ", "YRC60vYuaGs", "YRC60vYuaGs", "6MwInvaR32", "0NaBjvPw9g", "0NaBjvPw9g", "iclr_2022_NuzF7PHTKRw", "FvwJy7yv7Pd", "Tobilr3Gc21", "HUkoToTWVo...
iclr_2022_LQCUmLgFlR
On Optimal Early Stopping: Overparametrization versus Underparametrization
Early stopping is a simple and widely used method to prevent over-training neural networks. We develop theoretical results to reveal the relationship between the optimal early stopping time and the model dimension as well as the sample size of the dataset for certain linear regression models. Our results demonstrate two very different behaviors when the model dimension exceeds the number of features versus the opposite scenario. While most previous works on linear models focus on the latter setting, we observe that in common deep learning tasks, the dimension of the model often exceeds the number of features arising from data. We demonstrate experimentally that our theoretical results on the optimal early stopping time correspond to the training process of deep neural networks. Moreover, we study the effect of early stopping on generalization and demonstrate that optimal early stopping can help mitigate ''descent'' in various settings.
Reject
This paper studies the problem of characterizing the optimal early stopping time in overparameterized learning as a function of model dimension and sample size. To do this, the paper uses an explicit form of the gradient flow from prior work to present high-probability bounds in the over-parameterized setting and characterizes various properties of the optimal stopping time. The authors also conduct various experiments to verify the theory. The reviewers thought the paper was interesting and insightful. They also raised some concerns about (1) the restrictiveness of the distributional assumptions, (2) the poor explanation of the theoretical results, (3) novelty with respect to other work, and (4) other technical issues. The discussion and response mitigated these concerns, but the reviewers decided to mostly keep their original scores. My own reading of the paper is that there are good ideas in this paper, and I agree with the authors that some of the technical issues raised by the reviewers are incorrect. However, it is also clear that the paper needs a bit more work to put it into the right context, and the proofs need to be more clearly and carefully written before this paper can be accepted. Therefore I recommend rejection but encourage the authors to submit to a future ML venue after a thorough revision.
train
[ "gz4doWSZbCw", "mX29FtG9HIl", "jImYDcJBTee", "LwdIb-qg93", "jFbbnqBqPcb", "t54JdkC_gN9", "JltJRz465PK", "XQI3ZpDLFM1", "rfol-Ipbal1", "kEr68NipMC", "UWDyJ_gn5yN", "ABFzvjn_KtT" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response.\n- We will change the wording \"over/under-parametrization\" in our revised paper. Thank you for your and Reivewer EU5H's suggestion.\n- There are a huge number of different neural networks used in practice, so we don't think it is possible to have a unified model size definition for ...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 4 ]
[ "LwdIb-qg93", "jImYDcJBTee", "JltJRz465PK", "jFbbnqBqPcb", "ABFzvjn_KtT", "UWDyJ_gn5yN", "kEr68NipMC", "rfol-Ipbal1", "iclr_2022_LQCUmLgFlR", "iclr_2022_LQCUmLgFlR", "iclr_2022_LQCUmLgFlR", "iclr_2022_LQCUmLgFlR" ]
iclr_2022_fkjO_FKVzw
Coarformer: Transformer for large graph via graph coarsening
Although the Transformer has been generalized to graph data, its advantages are mostly observed on small graphs, such as molecular graphs. In this paper, we identify the obstacles to applying the Transformer to large graphs: (1) The vast number of distant nodes distracts the necessary attention of each target node from its local neighborhood; (2) The quadratic computational complexity with respect to the number of nodes makes the learning procedure costly. We get rid of these obstacles by exploiting the complementary natures of GNNs and Transformers, trading the fine-grained long-range information for the efficiency of the Transformer. In particular, we present Coarformer, a two-view architecture that captures fine-grained local information using a GNN-based module on the original graph and coarse yet long-range information using a Transformer-based module on the coarse graph (with far fewer nodes). Meanwhile, we design a scheme to enable message passing across these two views so they enhance each other. Finally, we conduct extensive experiments on real-world datasets, where Coarformer outperforms any single-view method that solely applies a GNN or Transformer. Besides, the coarse global view and the cross-view propagation scheme enable Coarformer to perform better than combinations of different GNN-based and Transformer-based modules while consuming the least running time and GPU memory.
Reject
The paper aims to scale transformers to large graphs. In this regard, the authors propose to first obtain a "coarse" version of the large graph using existing algorithms. With a reduced number of nodes in the coarse graph, the transformer can be employed efficiently to capture the global information. To capture the local information, GNNs are employed. Finally, the authors carry out extensive experiments on a range of graph datasets; the reviewers also appreciate the reporting of confidence intervals. We thank the reviewers and authors for engaging in an active discussion. Unfortunately, the reviewers are in consensus that the novelty of the proposed method is limited: it is a combination of existing techniques, and similar ideas have been widely used in the literature. Also, the empirical results are not very significant. Thus, unfortunately, I cannot recommend acceptance of the paper in its current form.
train
[ "OxULdeM7B69", "-u6qu_XLrk", "8Wr-y8MY-6K", "IIxaTDr-WPR", "4iTmjDxswCs", "X3P9AHhGaUk", "FI3iVbzFE9m", "j5x3uVG50sF", "oVFPN5_U4Oe", "oQNyEz4ljx5", "dIv7EiQJrU1", "dtvwOZ_Yx-t", "P7dJHTNmDKV", "A3v9uatP9jr", "e-GOZi1Hxoq", "75iIFTjWJx3" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer" ]
[ "The paper proposes hierarchical neural network model for processing graphs. The lower level can use any GNN model suitable for processing smaller subgaphs of the original graph while the higher level part realized by a transformer operates on an abstracted 'coarse' graph. Strong points\n* I find the empirical part...
[ 8, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5 ]
[ 4, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2022_fkjO_FKVzw", "oVFPN5_U4Oe", "4iTmjDxswCs", "iclr_2022_fkjO_FKVzw", "X3P9AHhGaUk", "FI3iVbzFE9m", "dtvwOZ_Yx-t", "A3v9uatP9jr", "oQNyEz4ljx5", "OxULdeM7B69", "75iIFTjWJx3", "IIxaTDr-WPR", "e-GOZi1Hxoq", "iclr_2022_fkjO_FKVzw", "iclr_2022_fkjO_FKVzw", "iclr_2022_fkjO_FKVzw" ]
iclr_2022_s-b95PMK4E6
Hierarchical Modular Framework for Long Horizon Instruction Following
Robotic agents performing domestic chores using natural language directives are required to learn the complex task of navigating an environment and interacting with objects in it. To address such composite tasks, we propose a hierarchical modular approach to learn agents that navigate and manipulate objects in a divide-and-conquer manner suited to the diverse nature of the entailing tasks. Specifically, our policy operates at three levels of hierarchy. We first infer a sequence of subgoals to be executed based on language instructions with a high-level policy composition controller (PCC). We then discriminatively control the agent's navigation with a master policy by alternating between a navigation policy and various independent interaction policies. Finally, we infer manipulation actions with the corresponding object masks using the appropriate interaction policy. Our hierarchical agent, named HACR (Hierarchical Approach for Compositional Reasoning), generates a human-interpretable and short sequence of sub-objectives, leading to efficient interaction with an environment, and achieves state-of-the-art performance on the challenging ALFRED benchmark.
Reject
This paper led to significant discussion, and the AC is generally on the fence. First of all, thanks to the reviewers for the significant time they invested in the discussion, and thanks to the authors for promptly and patiently answering our questions. Overall, the reviewer recommendations are positive. However, the discussion showed that despite the positive recommendations, the reviewers struggled to distill the general contribution of the paper beyond performance on ALFRED. In discussion, the authors distinguished their contribution from existing work by focusing on using a set of low-level policies at the root of the overall policy. This relies on the discrete set of behaviors that is defined within the ALFRED benchmark. It's not clear how it generalizes to the actual problem of instructing a robot to execute natural language instructions. In realistic scenarios, is it possible to define a set of behaviors in such a clean way, and at scale? And then train/manage a separate model for each behavior? The set of interaction policies in Figure 2 illustrates this challenge well. The answer to this scaling question is not clear. This corresponds to a concern raised repeatedly by the reviewers about the approach being too specialized to ALFRED. The AC shares this concern. On the positive side, this is solid work, with good results. The paper is well written, and the authors largely addressed the concerns raised as much as possible. The results are not SOTA though (they are roughly equal to the SOTA at the time of submission, but show significantly more overfitting to seen environments). The current SOTA was submitted on 09/19/2021, prior to the ICLR deadline; it's not included in the results table in this paper. (To clarify, the fact that it's not the current SOTA does not affect the final decision, as they are considered contemporaneous.)
Given the concerns regarding the specificity of the approach, this paper may interest researchers working on ALFRED, but it is not clear to what depth, despite the clearly significant work and effort the authors put into the paper. (If the paper is accepted, the AC asks the authors to fix the standing errors with regard to previous work, as discussed below, and to include more recent results from the leaderboard.)
train
[ "oix6GJ1UFl", "aBpVxqd5qCP", "ZQq8JfGEVoS", "1B1EdcmM59s", "dfvrsrsswX_", "WxZdBJiCxd8", "l5oNam9dAlW", "mhQkbUZ3NY-", "QCXAHDlQ_3z", "DlWJWY6Mqqk", "QazJCMNn9hz", "UQUuiGNO8X", "uWmZujZ6pTK", "crGGrS3mvgp", "qAIToJJKTpv", "QYE4kwMnQcm", "fpJcAY8DKNT", "35O0Vc5wWpo", "WFkMSfJK41"...
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", ...
[ " Thank you for the further clarification on the concerns. We misunderstood some of the questions and respect the reviewer’s opinion.\n\n> **[Overfitting Issues].** The reviewer argues that the better performance of HACR on the overfitting issues come from the usage of additional simulated scenes. Actually, such da...
[ -1, -1, 8, -1, -1, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 8 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "1B1EdcmM59s", "fpJcAY8DKNT", "iclr_2022_s-b95PMK4E6", "l5oNam9dAlW", "QazJCMNn9hz", "mhQkbUZ3NY-", "QCXAHDlQ_3z", "35O0Vc5wWpo", "GWdOmv1Dku", "iclr_2022_s-b95PMK4E6", "WFkMSfJK41", "uWmZujZ6pTK", "35O0Vc5wWpo", "iclr_2022_s-b95PMK4E6", "ZQq8JfGEVoS", "ZQq8JfGEVoS", "ZQq8JfGEVoS", ...
iclr_2022_gtvM-nBZEbc
Learning Visual-Linguistic Adequacy, Fidelity, and Fluency for Novel Object Captioning
Novel object captioning (NOC) learns image captioning models for describing objects or visual concepts which are unseen (i.e., novel) in the training captions. Such captioning models need to sufficiently describe such visual data with fluent and natural language expression. In other words, we expect the produced captions to be linguistically fluent, contain novel objects of interest, and fit the visual concept of the image. These three aspects correspond to fluency, fidelity, and adequacy, respectively. However, most novel object captioning models are not explicitly designed to address the aforementioned properties due to the absence of caption annotations. In this paper, we start by providing an insight into the relationship between the above properties and existing visual/language models. Then, we present VLAF2, for learning Visual-Linguistic Adequacy, Fidelity, and Fluency, which utilizes linguistics observed from captions for describing visual information of images with novel objects. More specifically, we revisit BERT and CLIP, and explain how we leverage the intrinsic language knowledge from such popular models to reward captions with precise and rich visual content associated with novel images. To validate the effectiveness of our framework, we conduct extensive experiments on the nocaps dataset. Our method not only performs favorably against state-of-the-art novel captioning models in all caption evaluation metrics, but also surpasses the SPICE scores of the human baseline. We perform quantitative and qualitative analysis to demonstrate how our model generates novel object captions with improved fluency, fidelity, and adequacy. Implementation details and code are available in the supplementary materials.
Reject
This paper proposes a framework for novel object captioning that combines BERT and CLIP. The model improves the fluency, fidelity, and adequacy of generated captions. However, as the reviewers mentioned, the novelty is limited; combining large models and big data to solve a downstream task does not yield useful insights at this moment.
train
[ "hh-XnFKF2VZ", "pIzY80bRCc2", "9FNQt5uT23J", "N6OTE3Pu25l", "t9hdm2IGah", "CdwnjV6TFYD", "KZiAIRlCSVT", "7ZnzGs0rwwN", "0kpz1Sd5NyS", "MO2rnSEWgvN", "ibvqyrsIh8", "gVhF9sN-tvK", "PBdoxwZfoHl", "XeHY9USH0m4", "ouEVtUgdNMd", "3Vq0r-96aoV", "iq51bxtmh7w", "pX_JW1qXHoi", "MI9lcf1GXxf...
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for providing additional feedback at the last minute, which allows us to understand what the major concerns remain. \n\nWe are sorry that the reviewer still considers the task of NOC not important, while we provided recent SOTA NOC works of ICCV’19 [1], AAAI’21 [2] and CVPR’21 [3] as the sup...
[ -1, -1, 6, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ -1, -1, 5, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "pIzY80bRCc2", "7ZnzGs0rwwN", "iclr_2022_gtvM-nBZEbc", "CdwnjV6TFYD", "iclr_2022_gtvM-nBZEbc", "t9hdm2IGah", "iclr_2022_gtvM-nBZEbc", "0kpz1Sd5NyS", "bErCDeOTQmR", "t9hdm2IGah", "MO2rnSEWgvN", "ibvqyrsIh8", "XeHY9USH0m4", "ouEVtUgdNMd", "MI9lcf1GXxf", "pX_JW1qXHoi", "9FNQt5uT23J", ...
iclr_2022_u6sUACr7feW
DPP-TTS: Diversifying prosodic features of speech via determinantal point processes
With the rapid advancement of deep generative models, recent neural text-to-speech models have succeeded in synthesizing human-like speech, even in an end-to-end manner. However, many synthesized samples often have a monotonous speaking style or simply follow the speaking style of their ground-truth samples. Although many methods have been proposed to increase the diversity of prosody, increasing prosody variance often hurts the naturalness of the speech. Determinantal point processes (DPPs) have shown remarkable results for modeling diversity in a wide range of machine learning tasks. However, their application to speech synthesis has not been explored. To enhance the expressiveness of speech, we propose DPP-TTS: a text-to-speech model based on a determinantal point process. The extent of prosody diversity can be easily controlled by adjusting parameters in our model. We demonstrate that DPP-TTS generates more expressive samples than baselines in a side-by-side comparison test while not harming the naturalness of the speech.
Reject
This paper proposes the use of the determinantal point process to introduce diversity in the prosodic features, including intonation, stress, and rhythm, in text-to-speech synthesis. The proposed approach is certainly new, but the experimental support is of critical importance for this work. One of the major points of discussion was the reliability of the experimental results. In the original submission, the mean opinion score (MOS) of the proposed approach was inferior to the baseline. The authors updated the experiments, which significantly (by more than the confidence interval) lowered the MOS of a baseline. This, however, makes the experimental results questionable.
val
[ "r92Hv04eK-R", "PPyp8hGvaq1", "Nb2cvALlPeM", "VxxTdV_MU_l", "Y6-h8NgokVE", "eMxKkdvTKY8", "VEeONpb8cPR", "Rj1y2qmvSgZ", "dcAugH9xBFT" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We thank for the reviewer carefully reading our responses and the updated manuscript. Here are our responses regarding the post rebuttal.\n\n**About diversifying prosodic features of neighboring words**\n\nFor addressing the monotonous pattern of speech, we have carefully considered the scale or scope of targets ...
[ -1, 5, -1, -1, -1, -1, -1, 3, 5 ]
[ -1, 4, -1, -1, -1, -1, -1, 4, 3 ]
[ "PPyp8hGvaq1", "iclr_2022_u6sUACr7feW", "iclr_2022_u6sUACr7feW", "iclr_2022_u6sUACr7feW", "Rj1y2qmvSgZ", "PPyp8hGvaq1", "dcAugH9xBFT", "iclr_2022_u6sUACr7feW", "iclr_2022_u6sUACr7feW" ]
iclr_2022_q1QmAqT_4Zh
Koopman Q-learning: Offline Reinforcement Learning via Symmetries of Dynamics
Offline reinforcement learning leverages large datasets to train policies without interactions with the environment. The learned policies may then be deployed in real-world settings where interactions are costly or dangerous. Current algorithms over-fit to the training dataset and as a consequence perform poorly when deployed to out-of-distribution generalizations of the environment. We aim to address these limitations by learning a Koopman latent representation which allows us to infer symmetries of the system's underlying dynamics. The latter are then utilized to extend the otherwise static offline dataset during training; this constitutes a novel data augmentation framework which reflects the system's dynamics and is thus to be interpreted as an exploration of the environment's phase space. To obtain the symmetries we employ Koopman theory, in which nonlinear dynamics are represented in terms of a linear operator acting on the space of measurement functions of the system, and thus symmetries of the dynamics may be inferred directly. We provide novel theoretical results on the existence and nature of symmetries relevant for control systems such as reinforcement learning settings. Moreover, we empirically evaluate our method on several benchmark offline reinforcement learning tasks and datasets, including D4RL, Metaworld and Robosuite, and find that by using our framework we consistently improve the state-of-the-art for Q-learning methods.
Reject
This paper proposes to improve offline RL with a data augmentation technique that exploits the symmetry of the dynamics using the Koopman operator. The idea is interesting, but the draft in its current form has several weaknesses, as pointed out by the reviewers. The scores are borderline at this point. I read the paper and find myself agreeing with reviewer ohJ3 on both the lack of clarity and the gap between theory and empirical results. The math presentation still needs a careful check and improvement. Eqs. (1)-(4) are already fairly confusing (should $Q_i$ and $\pi_i$ be replaced by $Q$ and $\pi$ in Eqs. (1)-(4), and $\hat Q$ by $\hat Q_i$ in Eqs. (2)-(3)?). I would like to suggest that the authors add a self-contained algorithm box for the practical algorithmic procedure. Do readers really need to understand the full Koopman theory (Section 3.1) before understanding the algorithm? The authors could consider whether it is better to present the practical algorithm first with minimal math, and then analyze the properties of the algorithm using the mathematical tools (and, in this case, make clear exactly what theoretical guarantees are obtained). I think making the paper more accessible can help it gain more popularity among ML readers.
train
[ "ZVSFwbuZgXN", "0hsBz_TlkI", "GF9_M3Fngsh", "51_nhuTx9TC", "U9XINDUQvdI", "G_a5y0x4uEk", "XaXBL1oGz6L", "2NpedqfqGsRT", "hg62dr5qFnT", "cO633_ZXV2U", "wcAEe7gag82", "3qj8y44RY44", "39c2eN9zgmt", "NqCCHUz_LN7", "wYExqi1gbX" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks to the authors for their detailed response. I appreciate the pointer to the ablations in the appendix, and I think the proposed revisions to the text will improve the paper. I disagree with the claim that DBC and DeepMDP would not be relevant baselines, as neither learns a model of the environment and, whi...
[ -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "51_nhuTx9TC", "GF9_M3Fngsh", "wYExqi1gbX", "wYExqi1gbX", "iclr_2022_q1QmAqT_4Zh", "2NpedqfqGsRT", "iclr_2022_q1QmAqT_4Zh", "hg62dr5qFnT", "XaXBL1oGz6L", "NqCCHUz_LN7", "39c2eN9zgmt", "iclr_2022_q1QmAqT_4Zh", "iclr_2022_q1QmAqT_4Zh", "iclr_2022_q1QmAqT_4Zh", "iclr_2022_q1QmAqT_4Zh" ]
iclr_2022_fwsdscicqUm
Improving Fairness via Federated Learning
Recently, many algorithms have been proposed for learning a fair classifier from centralized data. However, how to privately train a fair classifier on decentralized data has not been fully studied yet. In this work, we first propose a new theoretical framework, with which we analyze the value of federated learning in improving fairness. Our analysis reveals that federated learning can strictly boost model fairness compared with all non-federated algorithms. We then theoretically and empirically show that the performance tradeoff of FedAvg-based fair learning algorithms is strictly worse than that of a fair classifier trained on centralized data. To resolve this, we propose FedFB, a private fair learning algorithm on decentralized data with a modified FedAvg protocol. Our extensive experimental results show that FedFB significantly outperforms existing approaches, sometimes achieving a similar tradeoff as the one trained on centralized data.
Reject
The authors consider the problem of training a fair classifier on decentralized data, and compare three methods: training locally, training the proposed FedAvg algorithm with local fairness, and a global fairness approach. The reviewers agreed that the setting was interesting and novel, but had concerns about the writing quality, experimental setup, and, most importantly, the organization of the paper, with several reviewers complaining that necessary information was relegated to the appendix. Overall, this work is not quite ready for publication. With that said, the reviewers agreed that it was interesting and highly promising (it just needs refinement). Please seriously consider the reviewers' recommendations, which on the whole were very constructive and, if followed, should lead to a significant improvement in your manuscript.
train
[ "yX80FvoYlvE", "T0CX37Oc945", "Mj8CIoIPWO3", "DUIK6UatBaz", "ZyqqkkC2pJh", "4LFjyw9ga70", "CG-KFK7k6K2", "D_JZuq4XrJ", "FPkI6gurLck", "IjNmuD7EJQC", "Vj_W05_aXs", "nDDHlK-NFB", "wOzU3fcEla0", "-hpK0G7dqGI", "sFFuAfHiZjM" ]
[ "public", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " This work studies the fairness issues in federated learning. There is a very relevant reference that is missed. \"Addressing Algorithmic Disparity and Performance Inconsistency in Federated Learning\" with arxiv link https://arxiv.org/pdf/2108.08435.pdf\n\n***\n\nMany thanks for you. After careful learning, I'm s...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, 3, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "iclr_2022_fwsdscicqUm", "ZyqqkkC2pJh", "DUIK6UatBaz", "IjNmuD7EJQC", "D_JZuq4XrJ", "nDDHlK-NFB", "wOzU3fcEla0", "FPkI6gurLck", "sFFuAfHiZjM", "-hpK0G7dqGI", "yX80FvoYlvE", "iclr_2022_fwsdscicqUm", "iclr_2022_fwsdscicqUm", "iclr_2022_fwsdscicqUm", "iclr_2022_fwsdscicqUm" ]
iclr_2022_vXGcHthY6v
Invariance Through Inference
We introduce a general approach, called invariance through inference, for improving the test-time performance of a behavior agent in deployment environments with unknown perceptual variations. Instead of producing invariant visual features through memorization, invariance through inference turns adaptation at deployment-time into an unsupervised learning problem by trying to match the distribution of latent features to the agent's prior experience without relying on paired data. Although simple, we show that this idea leads to surprising improvements on a variety of adaptation scenarios without access to task reward, including changes in camera poses from the challenging distractor control suite.
Reject
Meta Review for Invariance Through Inference The motivation of this work is to address the problem of learning a model that generalizes well on a test distribution that samples outside of the training data distribution. Reviewer X3P2 wrote a good summary of the paper: In this paper, the transfer of a reinforcement learning (RL) model from an idealized (training) environment to a more realistic environment with distractors in the observations is considered. Instead of augmenting the training environment with more data so as to make the system more resilient to variations and distractors, the system is adapted at test time to be invariant to the specific distractors found in the environment. Experiments in simulation show the benefits of the proposed approach. Crucially, the agent is not able to access any reward data at test time. Reviewers, including myself, recognize the novelty of the approach, and in particular appreciate the authors' motivation to provide a more principled way to model environment invariance that can possibly scale well in comparison to data-driven approaches. However, the initial round of feedback is generally negative; in particular, most reviewers raised concerns regarding the lack of clarity in presentation, and also had issues with the narrow range of experimental evaluation. Clearly, this is promising work, but possibly had to be rushed for submission. To the authors' credit, they devoted substantial efforts to completely revamp their paper, addressing many of the issues head on. The resulting updated manuscript is almost a complete rewrite of the paper. All reviewers acknowledge (and praise) the effort from the authors to improve the paper, and 3 out of 4 reviewers improved (or maintained) their scores from rejection to a 6. But as the paper is a complete revamp, reviewers did not have the time to assess the entire rewrite of the paper (it's like needing to review a paper from scratch), so the confidence is reduced. 
While X3P2 did not change their score, they did lead a discussion amongst myself and other reviewers, and they spent the time to take a detailed look at the completely revised draft. Here are the comments from that discussion, for full transparency: --- *The paper has been completely revamped, to the point in which the presented technique is actually different (the dynamics loss now includes a new forward term). The changes are overall welcome since it significantly improves the clarity of the presentation. Experiments still show promise.* *I still have problems with the theoretical aspect of it, though. I think that it is unclear why the proposed system is working and fails to provide the minimal system that works.* *Equation (4) is dimensionally incorrect. It's summing squared error over actions with squared error over latents. Both of these are arbitrary units that can lead to the forward or backward losses dominating. It's also unclear why both of these losses are necessary and not just the inverse one.* *Equation (7) is similarly dimensionally incorrect. Again, units are arbitrary and for all we know the joint loss could be ignoring the dynamics loss or the adversarial loss.* *The use of a GAN and the corresponding loss is unjustified. As the authors acknowledge, if the dynamic loss is very small, then the system should already work. They argue that finding that parameterization without the adversarial loss is challenging. There's a difference between using the right loss and finding the right way to optimize it, but here it seems that the right loss is being modified for optimization purposes. Is the adversarial loss something we really want to minimize or just something that helps find the best dynamics loss? What if we remove the adversarial loss after being close to convergence? What if we use multiple restarts or other techniques to help with the optimization of only the dynamics loss? 
My take from the theory is that the adversarial loss term shouldn't be needed and that the challenging optimization problem should be addressed (rather than modifying the loss).* *The new ablation experiments are also confusing: If the dynamics loss is the actual driver, and the adversarial loss only helps with finding a good solution, how come that we get almost equally good results when we remove the dynamic loss? Matching the latent distribution shouldn't be enough to have aligned latents. Maybe there's something about the architecture of F that matches the ground truth, so that matching the distribution aligns the latents. This hints towards the adversarial loss actually playing an important role beyond helping with the optimization problem. This is not supported at all by the theory, since matching distributions should result in arbitrary latents and potentially performance of a random system. In fact, given the problems with units, the joint loss might be dominated by the adversarial loss, which would explain this result.* *It seems like there's something here, but I think more work is necessary to really understand which pieces are necessary in this system and whether there's some sort of adaptation between the experimental setup and F that would explain why distribution matching results in latent alignment, which is not expected (Zhu et al. 2017). 
Also, the units problem makes the ablation results even harder to interpret: Maybe the dynamics loss is playing a small role in the joint loss, and that's why removing it completely has a small impact.* --- After much assessment, while I do find this work to be interesting and potentially highly impactful (since they introduce an alternative approach to data-centric one OOD), the final manuscript's assessment is still borderline (the reviewers all mentioned that while they recognize the improvement, they list issues from preventing their full endorsement), and X3P2 still found several issues with the revision (which I do believe can be addressed in due time). While I'm fully confident that with additional work, this paper could have the potential to be an impactful one, I am currently on the side of not recommending it for acceptance for ICLR 2022. Note to the PC's, that this is a borderline decision. If the PC's want to flip the decision to an accept, and think the post rebuttal issues are small enough, I'll be fine with that. But in any case, I look forward to seeing a further improved version (of the revamped manuscript) published in a journal or presented at a future conference. Good luck!
train
[ "ImGJVTP3vNB", "VK9wG1i-mF-", "RKebZdpYf48", "rlIVKg8I1ty", "wYmc9iltyAZ", "n6J0g6qfYXq", "Lxd3CNyODC1", "cumwF2UKClH", "ivmp1vxah7t", "Y7tlO0dwBu8", "A2wA4i9J_Wo", "VGHV04jexpj", "BxiIgIcj4c4", "2Jy-OQDoAvt", "iBgqqrnSis2", "noEYiWNwTQ8", "1i7alFfygSK", "PgQW8iffDKI", "z9vAp8t9b...
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "\n**Introduction:**\n\nIn this paper the authors aim to tackle the problem of learning a model that generalizes well on a test distribution that samples outside of the training data distribution. They aim to do this not via data augmentation but by using an inference model over the latent space of the input. Suc...
[ 6, -1, -1, -1, -1, -1, -1, 6, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ 3, -1, -1, -1, -1, -1, -1, 2, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2022_vXGcHthY6v", "z9vAp8t9bBo", "z9vAp8t9bBo", "z9vAp8t9bBo", "ImGJVTP3vNB", "ImGJVTP3vNB", "ImGJVTP3vNB", "iclr_2022_vXGcHthY6v", "VGHV04jexpj", "iclr_2022_vXGcHthY6v", "iclr_2022_vXGcHthY6v", "2Jy-OQDoAvt", "Y7tlO0dwBu8", "cumwF2UKClH", "Y7tlO0dwBu8", "ImGJVTP3vNB", "z9vAp8t...
iclr_2022_ajOSNLwqssu
Generating Antimicrobial Peptides from Latent Secondary Structure Space
Antimicrobial peptides (AMPs) have shown promising results in broad-spectrum antibiotics and resistant infection treatments, which has attracted plenty of attention in drug discovery. Recently, many researchers have brought deep generative models to AMP design. However, few studies consider structure information during generation, though it has shown a crucial influence on antimicrobial activity in all AMP mechanism theories. In this paper, we propose LSSAMP, which uses a multi-scale VQ-VAE to learn positional latent spaces modeling the secondary structure. By sampling in the latent secondary structure space, we can generate peptides with ideal amino acids and secondary structures at the same time. Experimental results show that our LSSAMP can generate peptides with multiple ideal physical attributes and a high probability of being predicted as AMPs by public AMP prediction models.
Reject
Reviewers agreed that taking into account the secondary structure in addition to the amino acid sequence, although not new in bioinformatics, may be a good idea in the context of deep generative models of peptides. On the other hand, all reviewers also agreed that the experimental results do not allow concluding about the potential benefit of the method, i.e., whether it is likely to produce potent AMPs (and whether it does so better than existing methods). Indeed, the proposed computational criteria cannot replace a proper experimental validation, and it is not clear whether a "better method" on the computational criteria will be "better" in the real world. Second, the results on the computational criteria are not convincing: regarding the physical properties, it remains debatable to claim that a method is good if it outputs many AMPs that fulfill the criterion, while less than 7% of the true AMPs do; and regarding the computational prediction of being an AMP, the proposed method is outperformed by existing ones. In conclusion, we consider that the paper is not ready for publication at ICLR, since there is no significant methodological novelty nor significant experimental results if this is an application paper, and we encourage the authors to consider a publication with wet-lab experiments to demonstrate the relevance of the method.
test
[ "IOCCGjhBbon", "cK51OFoun7P", "0AeAjmL3Jan", "qi7KmkkH3h", "_zcKHMdStti", "hAhQeKPtEa-", "jno4uLCA6O3", "Cd5chuXzvz5", "nOIY7ZPEacF", "1r-Ytmb7QLV", "d_h_9eNXRr1", "KZvsv0wJ1lP" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " The antimicrobial mechanisms are various for different target microbes and by limiting the range of our attributes, we only focus on a specific one. The three physical attributes are also widely used in related works (Capecchi et al., 2021; Das et al., 2020; Van Oort et al., 2021). Besides, the classifiers are le...
[ -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, 5, 1 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, 4, 4 ]
[ "qi7KmkkH3h", "0AeAjmL3Jan", "jno4uLCA6O3", "nOIY7ZPEacF", "iclr_2022_ajOSNLwqssu", "jno4uLCA6O3", "KZvsv0wJ1lP", "_zcKHMdStti", "d_h_9eNXRr1", "iclr_2022_ajOSNLwqssu", "iclr_2022_ajOSNLwqssu", "iclr_2022_ajOSNLwqssu" ]
iclr_2022_Ck_iw4jMC4l
Logical Activation Functions: Logit-space equivalents of Boolean Operators
Neuronal representations within artificial neural networks are commonly understood as logits, representing the log-odds score of presence (versus absence) of features within the stimulus. Under this interpretation, we can derive the probability $P(x_0 \cap x_1)$ that a pair of independent features are both present in the stimulus from their logits. By converting the resulting probability back into a logit, we obtain a logit-space equivalent of the AND operation. However, since this function involves taking multiple exponents and logarithms, it is not well suited to be directly used within neural networks. We thus constructed an efficient approximation named $\text{AND}_\text{AIL}$ (the AND operator Approximate for Independent Logits) utilizing only comparison and addition operations, which can be deployed as an activation function in neural networks. Like MaxOut, $\text{AND}_\text{AIL}$ is a generalization of ReLU to two dimensions. Additionally, we constructed efficient approximations of the logit-space equivalents to the OR and XNOR operators. We deployed these new activation functions, both in isolation and in conjunction, and demonstrated their effectiveness on a variety of tasks including image classification, transfer learning, abstract reasoning, and compositional zero-shot learning.
Reject
The paper is well written and deals with a simple yet interesting introduction of approximate Boolean logic activation functions. A number of comparison experiments showed intriguing differences for problems with potential logical structures. The authors suggest a probabilistic rationale/motivation, i.e., computation in logit-space; however, more theoretical investigation is critically needed to answer why the proposed activations perform the way they do. There are a lot of activations in the literature, so perhaps it is not easy to make a distinct contribution in performance. Despite the large number of experiments, the reviewers were not convinced of how they support the authors' claims and contributions. The reviewers and AC strongly encourage the authors to keep the direction and improve the paper for another conference.
test
[ "Fjnr2pAQGSS", "CvFlJ284dmH", "zIN6pI8IX1", "qqczKEUJ-FI", "aFX7ZwjKAyX", "b4un211xJoK", "GMCWLEpBzv", "5_4V6BYAZm0", "0_3Crc01_Ny", "77vrazEdiEP", "19Kq-bHyWVu", "1rsWFpKghET", "W-q5izungRZ", "gEilDIAxLn", "DRirzUtG_h6" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes new activation functions based on the approximation of AND, OR, XOR operations. In addition, ensembling strategies are proposed to combine features from these operators. The experiments on image classification, transfer learning, abstract reasoning, and zero-shot learning tasks show that the pro...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 5 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "iclr_2022_Ck_iw4jMC4l", "Fjnr2pAQGSS", "Fjnr2pAQGSS", "19Kq-bHyWVu", "gEilDIAxLn", "gEilDIAxLn", "5_4V6BYAZm0", "0_3Crc01_Ny", "W-q5izungRZ", "1rsWFpKghET", "1rsWFpKghET", "DRirzUtG_h6", "iclr_2022_Ck_iw4jMC4l", "iclr_2022_Ck_iw4jMC4l", "iclr_2022_Ck_iw4jMC4l" ]
iclr_2022_1oEvY1a67c1
If your data distribution shifts, use self-learning
In this paper, we demonstrate that self-learning techniques like entropy minimization or pseudo-labeling are simple, yet effective techniques for increasing test performance under domain shifts. Our results show that self-learning consistently increases performance under distribution shifts, irrespective of the model architecture, the pre-training technique or the type of distribution shift. At the same time, self-learning is simple to use in practice because it does not require knowledge of or access to the original training data or scheme, is robust to hyperparameter choices, is straightforward to implement and requires only a few training epochs. This makes self-learning techniques highly attractive for any practitioner who applies machine learning algorithms in the real world. We present state-of-the-art adaptation results on CIFAR10-C (8.5% error), ImageNet-C (22.0% mCE), ImageNet-R (17.4% error) and ImageNet-A (14.8% error), theoretically study the dynamics of self-supervised adaptation methods and propose a new classification dataset (ImageNet-D) which is challenging even with adaptation.
Reject
The main contribution of the paper is to perform a systematic and large study of self-training as a method to deal with distribution shifts. Reviewers appreciated the clarity of the overall writing of the paper and the rigor of the empirical analysis. However, the main concern from two of the reviewers is that the technical contributions of the paper are only marginal and incremental in nature. The premise that self-learning improves robustness is already somewhat well-established (Reviewer PUq6 has pointed out papers that focus on how self-training / self-learning improves robustness under distribution shift and how self-training and pre-training stack together), and the main contribution of the paper is a systematic application to different datasets. Given the existing work on the relevance of self-training under distribution shift, the paper falls below the acceptance bar for ICLR in my opinion.
train
[ "4FaXXuvYtNf", "bYLx75dfoLP", "zK4XWp0hqEV", "yLldNUDx3Hg", "2f-EMBGbWMu", "y5J5_10_Khn", "9CnvW9yPpNr", "dmin7DraT-p", "9b3i7yuqdhF", "qJ-_KwQYM_-", "MOukwCb7h8-", "-8k6YinOWrX", "8iANtp4NwiV", "19vgZuQ2h05", "RMnXMQq_BZM" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer **b5KJ**, thanks again for your comments. We [revised our manuscript](https://openreview.net/forum?id=1oEvY1a67c1&noteId=dmin7DraT-p) to address your main concerns. Especially your comment “This study can significantly get boosted by diversifying the range of datasets and tasks under study” encourag...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "8iANtp4NwiV", "iclr_2022_1oEvY1a67c1", "yLldNUDx3Hg", "y5J5_10_Khn", "MOukwCb7h8-", "19vgZuQ2h05", "iclr_2022_1oEvY1a67c1", "8iANtp4NwiV", "19vgZuQ2h05", "19vgZuQ2h05", "RMnXMQq_BZM", "19vgZuQ2h05", "iclr_2022_1oEvY1a67c1", "iclr_2022_1oEvY1a67c1", "iclr_2022_1oEvY1a67c1" ]
iclr_2022_34mWBCWMxh9
Blur Is an Ensemble: Spatial Smoothings to Improve Accuracy, Uncertainty, and Robustness
Bayesian neural networks (BNNs) have shown success in the areas of uncertainty estimation and robustness. However, a crucial challenge prohibits their use in practice: Bayesian NNs require a large number of predictions to produce reliable results, leading to a significant increase in computational cost. To alleviate this issue, we propose spatial smoothing, a method that ensembles neighboring feature map points of CNNs. By simply adding a few blur layers to the models, we empirically show that spatial smoothing improves accuracy, uncertainty estimation, and robustness of BNNs across a whole range of ensemble sizes. In particular, BNNs incorporating spatial smoothing achieve high predictive performance with merely a handful of ensembles. Moreover, this method can also be applied to canonical deterministic neural networks to improve performance. Several lines of evidence suggest that the improvements can be attributed to the stabilized feature maps and the flattening of the loss landscape. In addition, we provide a fundamental explanation for prior works—namely, global average pooling, pre-activation, and ReLU6—by addressing them as special cases of spatial smoothing. These not only enhance accuracy, but also improve uncertainty estimation and robustness by making the loss landscape smoother in the same manner as spatial smoothing.
Reject
This paper proposed a spatial smoothing layer for CNNs which is composed of a feature range bounding layer (referred to as prob) and a blurring layer (referred to as blur). An empirical analysis shows that the proposed layer improves the accuracy and uncertainty of both deterministic CNNs and Bayesian NNs (BNNs) approximated by MC-dropout. The paper further provides theoretical arguments for the hypothesis that blurring corresponds to an ensemble and presents the proposed method as a strategy to reduce the number of samples needed during inference in BNNs. Reviewers valued the extensive (theoretical as well as practical) analyses. However, the theoretical analysis should still be improved. First of all, the proposed technique is motivated in the context of BNNs, which is not very strongly supported. Second, the argument that "the smoothing layer is an ensemble" is based on the observation that it has some properties ensembles have as well: (1) they reduce feature map variances, (2) filter out high-frequency signals, and (3) flatten the loss landscape. But two things sharing the same properties need not be the same thing. Moreover, the proofs of the propositions stating the properties are difficult to follow and may contain some flaws. Furthermore, the paper is not well self-contained and depends heavily on the appendix. Given these, the paper cannot be accepted in its current state. A future version could improve over the current manuscript by making the theoretical statements and proofs clearer. Another option would be to analyze the contribution without connecting it to a Bayesian setting and ensembles, and instead focus on showing that the proposed smoothing layer has those good properties, doing detailed empirical studies, and showing that CNN components like global average pooling and ReLU + BN are special cases of the proposed method.
val
[ "xQepBYBp1Y7", "JO3uV2pxN1d", "hQDxOM5Sdc", "Fsjw3Qd9gwB", "Pp7iF5nfqQv", "gwL4C8EDqUv", "Ff_ciUl5C1q", "AeYZa_R0hNj", "YAvHzIfFBuy", "aFpTVCW-Fwd", "vcOKdluJlBm", "JWYyN_a7DvT", "Jg_Wc2CyMTa", "s7KU9ztVHQl", "735g5SGxPSI", "0BlPPp5K-Uv", "bdXuyzGj0rJ", "m-wMSsDSNDg" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " \n​​We appreciate your time and efforts in reviewing our responses.\n\n**A.** What we wanted to emphasize in this paper with the terminology \"ensemble\" is as follows. First, prior works motivated by anti-aliasing, such as CBS, behave differently than expected. Therefore, their intuitions are not so well justifi...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 8, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 2, 4 ]
[ "JO3uV2pxN1d", "735g5SGxPSI", "Fsjw3Qd9gwB", "Pp7iF5nfqQv", "aFpTVCW-Fwd", "Ff_ciUl5C1q", "iclr_2022_34mWBCWMxh9", "aFpTVCW-Fwd", "m-wMSsDSNDg", "YAvHzIfFBuy", "735g5SGxPSI", "735g5SGxPSI", "0BlPPp5K-Uv", "bdXuyzGj0rJ", "iclr_2022_34mWBCWMxh9", "iclr_2022_34mWBCWMxh9", "iclr_2022_34m...
iclr_2022_wNsNT56zDkG
Adversarial Rademacher Complexity of Deep Neural Networks
Deep neural networks are vulnerable to adversarial attacks. Adversarial training is one of the most effective algorithms for increasing a model's robustness. However, the trained models cannot generalize well to the adversarial examples on the test set. In this paper, we study the generalization of adversarial training through the lens of adversarial Rademacher complexity. Existing analyses of adversarial Rademacher complexity are limited to two-layer neural networks. In adversarial settings, one major difficulty of generalizing these results to deep neural networks is that we cannot peel off the layers as in the classical analysis for standard training. We provide a method to overcome this issue and provide upper bounds on the adversarial Rademacher complexity of deep neural networks. Similar to the existing bounds on the standard Rademacher complexity of neural nets, our bound also includes the product of weight norms. We provide experiments showing that the adversarially trained weight norms are larger than the standard trained weight norms, thus providing an explanation for the poor generalization performance of adversarial training.
Reject
The paper made a solid theoretical contribution on the adversarial generalization bounds of multi-layer neural networks. However, the paper, in its current form, has many issues in the claim that "the product of the norms can explain the generalization gap": (1). Weight decay. The authors use the weight norm as a proxy for the generalization gap; however, it is unclear to me that "adversarially trained networks have a larger generalization gap" can be explained by the product of weight norms. To carefully verify this, the authors have to at least carefully tune the weight decay, to the largest possible extent such that the generalization error is not hurt, and compare the product of the weight norms in this scenario. Without weight decay, the neural networks might learn a lot of redundancies in the weights (especially with adversarial training), which makes the product of the norms too large. The authors do perform experiments showing that with weight decay, the generalization gap becomes smaller and the norms become smaller; however, it is totally unclear to me that the weight decay considered in the experiments is actually optimal -- it could still be the case that with proper weight decay, the product of the norms in adversarial training is actually smaller compared to that of clean training. Moreover, the authors should also clarify that **the product of the norms, according to the experiments, is simply too large and cannot be used in the theoretical result to get any meaningful generalization bounds**. (2). The product of the norms in the Rademacher complexity is tight: this claim only holds for neural networks with one neuron per layer. Once there is more than one neuron, there can simply be one neuron that learns f(x) and another that learns -f(x), and they completely cancel each other. So the product of the norms is obviously NOT TIGHT for any neural network with MORE than ONE NEURON per layer. In fact, the gap can be INFINITELY large. 
Unfortunately, while I like the paper very much and I hope it could be published, the claim that "the product of the norms can explain the generalization gap" is simply too misleading and ill-supported. I encourage the authors to completely remove this claim and submit the paper to COLT.
train
[ "b3wD0QPMEs", "pPsdzzGKpmn", "vyYb7rWGeag", "86m2LzkBH8a", "XLjHYxYEBDO", "yvQo0XhQ0CA", "W0oxXLlRXZu", "mo9mOsdFzUd", "LMks_dg6v1D", "j84L6QQvyOp", "yucZ7qyj4f9", "XBu2HuRJSj", "oENgAZQ58A", "C986e8Z0ach", "9LnLFyUo1gc" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer" ]
[ " Thank you for the positive feedback. We really appreciate it. \n\nFollowing your suggestion, we will add a comparison to the bounds for linear and 2-layers neural nets to the paper; we also attach the comparison below for your convenience. We understand that the comparison would help readers better understand the...
[ -1, -1, 8, -1, -1, -1, -1, -1, -1, 8, -1, 6, -1, -1, 8 ]
[ -1, -1, 3, -1, -1, -1, -1, -1, -1, 4, -1, 3, -1, -1, 4 ]
[ "86m2LzkBH8a", "9LnLFyUo1gc", "iclr_2022_wNsNT56zDkG", "W0oxXLlRXZu", "iclr_2022_wNsNT56zDkG", "vyYb7rWGeag", "vyYb7rWGeag", "LMks_dg6v1D", "yucZ7qyj4f9", "iclr_2022_wNsNT56zDkG", "j84L6QQvyOp", "iclr_2022_wNsNT56zDkG", "XBu2HuRJSj", "j84L6QQvyOp", "iclr_2022_wNsNT56zDkG" ]
iclr_2022_qEGBB9YB31
Where do Models go Wrong? Parameter-Space Saliency Maps for Explainability
Conventional saliency maps highlight input features to which neural network predictions are highly sensitive. We take a different approach to saliency, in which we identify and analyze the network parameters, rather than inputs, which are responsible for erroneous decisions. We first verify that identified salient parameters are indeed responsible for misclassification by showing that turning these parameters off improves predictions on the associated samples, more than pruning the same number of random or least salient parameters. We further validate the link between salient parameters and network misclassification errors by observing that fine-tuning a small number of the most salient parameters on a single sample results in error correction on other samples which were misclassified for similar reasons -- nearest neighbors in the saliency space. After validating our parameter-space saliency maps, we demonstrate that samples which cause similar parameters to malfunction are semantically similar. Further, we introduce an input-space saliency counterpart which reveals how image features cause specific network components to malfunction.
Reject
For this paper, the initial reviews were 6, 8, 5, 5. All the reviewers provided constructive and substantial feedback. The authors incorporated changes to address some of these comments, while others could not be addressed. The main criticisms of the reviewers were that Reviewer tkQp finds two clear limitations in the paper, Reviewer 3o7Z finds that the proposed idea is similar to parameter-space adversarial attacks, and Reviewer sCeW questions the generalisability of the method to other tasks. After the rebuttal, the reviewers reached the consensus that the paper may not be above the acceptance threshold (final scores: 6, 6, 5, 5). Following the reviewers' recommendations, the meta reviewer recommends rejection.
test
[ "ohHC0KJMm6K", "X3buIs1b8ii", "RChSRKbbSwY", "jIL2ge3UtY1", "T36rWuMOLb3", "fjql2msYvda", "ZF1_hvbQ-kI", "TBNXfClwHdW", "aehv-s7Hk-8", "tJtsWH448n8", "l9vik8VWEA0", "TyG0DtJ7a5", "xkfdQaNFH8R", "NU-ztIAtjEf", "H7r_mj3pB8E", "v-Hofo75rUe", "k5VSqenWf3-", "VA6NcLqM32r" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to emphasize that our work already contains large-scale evaluations of our parameter-saliency method in other sections, while Section 3.4 is intended to be a how-to guide for practitioners that read our paper and want to see an example of how to use our method. We believe that from this perspective,...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 5 ]
[ -1, -1, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "jIL2ge3UtY1", "fjql2msYvda", "iclr_2022_qEGBB9YB31", "T36rWuMOLb3", "NU-ztIAtjEf", "xkfdQaNFH8R", "l9vik8VWEA0", "tJtsWH448n8", "iclr_2022_qEGBB9YB31", "VA6NcLqM32r", "k5VSqenWf3-", "v-Hofo75rUe", "RChSRKbbSwY", "v-Hofo75rUe", "v-Hofo75rUe", "iclr_2022_qEGBB9YB31", "iclr_2022_qEGBB9...
iclr_2022_-xhk0O7iAc0
A Topological View of Rule Learning in Knowledge Graphs
Inductive relation prediction is an important learning task for knowledge graph completion. One can use the existence of rules, namely a sequence of relations, to predict the relation between two entities. Previous works view rules as paths and primarily focus on searching for paths between entities. The space of paths is huge, and one has to sacrifice either efficiency or accuracy. In this paper, we consider rules in knowledge graphs as cycles and show that the space of cycles has a unique structure based on the theory of algebraic topology. By exploring the linear structure of the cycle space, we can improve the efficiency of rule search. We propose to collect cycle bases that span the space of cycles. We build a novel GNN framework on the collected cycles to learn the representations of cycles, and to predict the existence/non-existence of a relation. Our method achieves state-of-the-art performance on benchmarks.
Reject
The paper proposes a new approach to inductive rule prediction for knowledge graph completion. Reviewers highlighted as strengths that the paper proposes an interesting approach to an important problem that is relevant for the ICLR community. However, reviewers also raised concerns regarding model design and correctness, as well as clarity of presentation (e.g., motivation, analysis, comparison to related work, evaluation). After author response and discussion, all reviewers and the AC agree that the paper is not yet ready for publication at ICLR due to the aforementioned issues.
train
[ "r9BuvjtVlO1", "zdS6qsPgjgQ", "_knfJTf-7xN", "5Y14ZNmsofd", "0twPYnpvGmC", "-OGt5LH30rS", "WGHlmw2B_1q", "VMSGGJilhvo", "4iIM4KldgT7", "PLCRh9wbZGY", "0eSz9Vodh12", "Mp6aUxno3DC", "3mdcKKZHVHN", "erbR2JS1qEf", "AFZTnEIfi6U", "ORKUiXQx6jR", "BkyCap1VE0h" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the rebuttal and the revised draft. The new experiments definitely go in the right direction. My concerns on the necessity of each proposed module and model scalability are partially addressed. While I understand it's difficult to conduct all ablation studies during the response phase, the current d...
[ -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 1 ]
[ -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "VMSGGJilhvo", "Mp6aUxno3DC", "iclr_2022_-xhk0O7iAc0", "0twPYnpvGmC", "-OGt5LH30rS", "BkyCap1VE0h", "3mdcKKZHVHN", "erbR2JS1qEf", "0eSz9Vodh12", "AFZTnEIfi6U", "iclr_2022_-xhk0O7iAc0", "_knfJTf-7xN", "BkyCap1VE0h", "ORKUiXQx6jR", "iclr_2022_-xhk0O7iAc0", "iclr_2022_-xhk0O7iAc0", "icl...
iclr_2022_AVShGWiL9z
Tractable Dendritic RNNs for Identifying Unknown Nonlinear Dynamical Systems
In many scientific disciplines, we are interested in inferring the nonlinear dynamical system underlying a set of observed time series, a challenging task in the face of chaotic behavior and noise. Previous deep learning approaches toward this goal often suffered from a lack of interpretability and tractability. In particular, the high-dimensional latent spaces often required for a faithful embedding, even when the underlying dynamics lives on a lower-dimensional manifold, can hamper theoretical analysis. Motivated by the emerging principles of dendritic computation, we augment a dynamically interpretable and mathematically tractable piecewise-linear (PL) recurrent neural network (RNN) by a linear spline basis expansion. We show that this approach retains all the theoretically appealing properties of the simple PLRNN, yet boosts its capacity for approximating arbitrary nonlinear dynamical systems in comparatively low dimensions. We introduce two frameworks for training the system, one based on fast and scalable variational inference, and another combining BPTT with teacher forcing. We show that the dendritically expanded PLRNN achieves better reconstructions with fewer parameters and dimensions on various dynamical systems benchmarks and compares favorably to other methods, while retaining a tractable and interpretable structure.
Reject
Inspired by dendritic nonlinearity, this paper extends previous work on PLRNN/PWL dynamical system modeling by Durstewitz's group. The extension replaces the ReLU nonlinearity with a linear combination of ReLUs. This preserves the theoretical properties of the PLRNN while the dimensionality of the latent dynamics remains the same, increasing the expressive power of prior PLRNNs. I (area chair) actually read this paper since not all reviewers provided high-quality reviews and one key reviewer was having a personal emergency. Though I appreciate the premise, detailed numerical evaluations, and the inference approach, the novelty is marginal and I do not buy the theoretical advantage of this class of models as presented (see below). Therefore I cannot recommend this paper to appear at ICLR at this time. Some additional weaknesses that reviewers did not point out: 1. Dendritic nonlinearity is summarized as a point nonlinearity; it lacks the interesting phenomena of dendrites, such as nonlinear summation and calcium spikes with their own internal dynamics. 2. The many analytical properties of PLRNN may sound nice on paper, but are very impractical. To search for the fixed points and cycles, the amount of required computation increases exponentially with the number of neurons and the cycle length. In addition, the boundary effects cannot always be ignored. In general, detailed analysis can quickly become non-trivial, e.g., https://arxiv.org/abs/2109.03198 3. A high-dimensional PLRNN that approximates a low-dimensional dynamical system due to model mismatch won't have the same topological stability structures. Theoretical analysis of higher-dimensional DS may be very misleading.
train
[ "FS_psdtXdqH", "ewU5x_LdWw0", "pHzrspstkTe", "iQRsBr2xUdx", "6ebXgvQIlR", "EGYeoxwZcO", "D9tUf1qjFmU", "A8FKMDz3ttQ" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposes to modify the nonlinearity of neuron couplings in PLRNN with a linear spline basis expansion. This modification is claimed to boost PLRNN's capacity of capturing arbitrary nonlinear dynamics in low-dimensions. The paper also introduces two training frameworks, variational inference and BPTT. The...
[ 6, -1, -1, -1, -1, -1, 5, 3 ]
[ 4, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2022_AVShGWiL9z", "A8FKMDz3ttQ", "FS_psdtXdqH", "6ebXgvQIlR", "D9tUf1qjFmU", "iclr_2022_AVShGWiL9z", "iclr_2022_AVShGWiL9z", "iclr_2022_AVShGWiL9z" ]
iclr_2022_YDud6vPh2V
Xi-learning: Successor Feature Transfer Learning for General Reward Functions
Transfer in Reinforcement Learning aims to improve learning performance on target tasks using knowledge from experienced source tasks. Successor features (SF) are a prominent transfer mechanism in domains where the reward function changes between tasks. They reevaluate the expected return of previously learned policies in a new target task in order to transfer their knowledge. A limiting factor of the SF framework is its assumption that rewards linearly decompose into successor features and a reward weight vector. We propose a novel SF mechanism, $\xi$-learning, based on learning the cumulative discounted probability of successor features. Crucially, $\xi$-learning allows reevaluating the expected return of policies for general reward functions. We introduce two $\xi$-learning variations, prove its convergence, and provide a guarantee on its transfer performance. Experimental evaluations based on $\xi$-learning with function approximation demonstrate the prominent advantage of $\xi$-learning over available mechanisms, not only for general reward functions, but also in the case of linearly decomposable reward functions.
Reject
The reviewers uniformly suggested rejecting the current paper. I concur, and remain somewhat unconvinced by the authors' comments on learning features. In particular, any argument based simply on (current) performance seems rather weak. There are methodological reasons one might want to keep features fixed, and there is a small subset of problems with well-defined, known useful features. But in the long term surely we should want to be able to learn the features, and efficiently and elegantly handle the case where they are learnt continually. I want to thank the authors for engaging. This work has the potential to be improved, and I would encourage the authors to carefully consider and incorporate the feedback provided by the reviewers into their work.
train
[ "FmpgqPRc2Y6", "3BtJzR5igGb", "1r76jhFKf4i", "AkZxyT3iIwp", "SAMGrn8wESD", "3eiU7SEDMr", "OQxz8myhYO", "HQCDhX1IbRg", "LVdVvenM29B", "TJwqkwM6Zx", "J76ZvP30XX7", "vwwcjGfDq21", "nhvc6P9CIp", "XUDNtC1ooP5", "goPjWS8EJB" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Further comments in regard to our final rebuttal version of the paper.\n\n> Authors should specify how the weights \\tilde{w_i} were learned for SFQL in the general case.\n\nThe procedure is explained in Section C.3 (page 20). We added now references to this section from the sections were we describe the SFQL alg...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 4 ]
[ "HQCDhX1IbRg", "LVdVvenM29B", "TJwqkwM6Zx", "iclr_2022_YDud6vPh2V", "3eiU7SEDMr", "OQxz8myhYO", "nhvc6P9CIp", "goPjWS8EJB", "XUDNtC1ooP5", "vwwcjGfDq21", "iclr_2022_YDud6vPh2V", "iclr_2022_YDud6vPh2V", "iclr_2022_YDud6vPh2V", "iclr_2022_YDud6vPh2V", "iclr_2022_YDud6vPh2V" ]
iclr_2022_M2sNIiCC6C
Self-supervised regression learning using domain knowledge: Applications to improving self-supervised image denoising
Regression that predicts continuous quantity is a central part of applications using computational imaging and computer vision technologies. Yet, studying and understanding self-supervised learning for regression tasks -- except for a particular regression task, image denoising -- have lagged behind. This paper proposes a general self-supervised regression learning (SSRL) framework that enables learning regression neural networks with only input data (but without ground-truth target data), by using a designable operator that encapsulates domain knowledge of a specific application. The paper underlines the importance of domain knowledge by showing that under some mild conditions, the better designable operator is used, the proposed SSRL loss becomes closer to ordinary supervised learning loss. Numerical experiments for natural image denoising and low-dose computational tomography denoising demonstrate that proposed SSRL significantly improves the denoising quality over several existing self-supervised denoising methods.
Reject
This paper focuses on unsupervised image denoising and proposes a method to do so. It shows that using a designed operator based on domain knowledge can help improve unsupervised image denoising. The authors also provide experimental results demonstrating that the proposed methods outperform existing unsupervised denoising methods and perform similarly to supervised methods. The reviewers liked the improvements but noted (1) limited novelty/simple extension of noise2self, (2) an unconvincing example, (3) lack of clarity in Section 2.3, and (4) a variety of other technical concerns. The authors partially addressed these concerns. However, I concur with the reviewers that the paper still requires more work and is not ready for publication in its current form.
train
[ "EYq0vJd9kbY", "aaJlh8rmjF-", "JcNgEqGjOKe", "6-V1S-TGzMe", "0uYsvHz9Dym", "4uxtbbqVKWd", "QVE_VY4q96N", "TTuj7tqmw8W", "F1HCVj4y8Mr", "pBqy7y-LZNT", "4fBAjn2gX00", "4X7PvrECs_a", "rxWa5CTSnEJ", "SQ6xqp5AKgm" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " This comment lists the changes in the revised paper, appendix, and supplement that were made to resolve the reviewer's concerns (changes are in blue):\n\n+ To resolve Comment \\#1, we refined our motivation to design good $g$ in Sections 2.3 (Theorem 2 and below), 2.4 (below Theorem 3), 3.3 and 5. We also clarifi...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 5 ]
[ "4X7PvrECs_a", "SQ6xqp5AKgm", "rxWa5CTSnEJ", "4uxtbbqVKWd", "4uxtbbqVKWd", "SQ6xqp5AKgm", "4X7PvrECs_a", "SQ6xqp5AKgm", "SQ6xqp5AKgm", "SQ6xqp5AKgm", "rxWa5CTSnEJ", "iclr_2022_M2sNIiCC6C", "iclr_2022_M2sNIiCC6C", "iclr_2022_M2sNIiCC6C" ]
iclr_2022_J8P7g_mDpno
Search Spaces for Neural Model Training
While larger neural models are pushing the boundaries of what deep learning can do, often more weights are needed to train models than to run inference for tasks. This paper seeks to understand this behavior using search spaces -- adding weights creates extra degrees of freedom that form new paths for optimization (or wider search spaces), rendering neural model training more effective. We then show how we can augment search spaces to train sparse models attaining competitive scores across dozens of deep learning workloads. They are also tolerant of structures targeting current hardware, opening avenues for training and inference acceleration. Our work encourages research to explore beyond the massive neural models being used today.
Reject
The reviewers were generally split on this paper. On the one hand, reviewers generally appreciated the clear presentation, discussion, and explanations, and the experiments. On the other hand, most reviewers commented on the lack of comparative evaluation to other works, including works that are related conceptually. While the authors have a potentially reasonable argument for omitting such comparisons, in the balance I do not believe that the reviewers were actually convinced by this. Particularly when the novelty of the contribution is not crystal clear, such comparisons are important, so I am inclined to not recommend acceptance at this point (though I acknowledge that the paper is clear borderline and could be accepted).
val
[ "d9HRKr-sTw", "Rgydut7BXFI", "d6stKCaLWOX", "EYqVxp1Hrb", "xL1Esai0QJW", "2_SgdaQ2EOn", "wliPaIr4scW", "s9Kuzvp35wY", "_FI8drZmJyi", "Yu00bkldJh", "YcdSXgAupgo", "-hxWCZMyNRx" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper seeks to explore and understand the improved performance of Neural Networks trained with additional parameters even though a majority of the weights can be pruned during inference. They analyze the magnitude, correlation, and movement of weights during training and propose a hypothesis that the reason i...
[ 5, 6, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2022_J8P7g_mDpno", "iclr_2022_J8P7g_mDpno", "YcdSXgAupgo", "-hxWCZMyNRx", "2_SgdaQ2EOn", "wliPaIr4scW", "Rgydut7BXFI", "_FI8drZmJyi", "d9HRKr-sTw", "iclr_2022_J8P7g_mDpno", "iclr_2022_J8P7g_mDpno", "iclr_2022_J8P7g_mDpno" ]
iclr_2022_hOaYDFpQk3g
Taking ROCKET on an efficiency mission: A distributed solution for fast and accurate multivariate time series classification
Nowadays, with the rising number of sensors in sectors such as healthcare and industry, the problem of multivariate time series classification (MTSC) is getting increasingly relevant and is a prime target for machine and deep learning solutions. Their expanding adoption in real-world environments is causing a shift in focus from the pursuit of ever higher prediction accuracy with complex models towards practical, deployable solutions that balance accuracy and parameters such as prediction speed. An MTSC solution that has attracted attention recently is ROCKET, based on random convolutional kernels, both because of its very fast training process and its state-of-the-art accuracy. However, the large number of features it utilizes may be detrimental to inference time. Examining its theoretical background and limitations enables us to address potential drawbacks and present LightWaveS: a distributed solution for accurate MTSC, which is fast during both training and inference. Specifically, utilizing a wavelet scattering transformation of the time series and distributed feature selection, we manage to create a solution which employs just 2.5% of the ROCKET features, while achieving accuracy comparable to recent deep learning solutions. LightWaveS also scales well with more nodes and large numbers of channels. In addition, it can give interpretability into the nature of an MTSC problem and allows for tuning based on expert opinion. We present three versions of our algorithm and their results on training time, accuracy, inference speedup and scalability. We show that we achieve a speedup ranging from 8x to 30x compared to ROCKET during inference on an edge device, on datasets with comparable accuracy.
Reject
This paper proposes an algorithm called LightWaveS to improve the ROCKET (and mini-ROCKET) algorithm for multivariate time series classification, by using wavelet scattering instead of the kernel function. More than the usual number of reviewers were invited to provide independent reviews on the paper. A concern was raised regarding the lack of hyperparameter search in the paper. The authors responded that this was intentional to avoid overfitting the solution to the tested datasets. This response is not convincing. Note that other important reasons to vary the hyperparameter values (as commonly adopted by ML researchers) are to study the sensitivity of the proposed method to hyperparameter settings and to perform a more holistic performance comparison with other methods. Other concerns on both novelty and significance have also been raised. Although 2 of the 7 reviews show weak support for acceptance, other reviewers have pointed out legitimate concerns that make this paper not ready for publication in ICLR in its current form. We appreciate the authors for clarifying some points in their responses and discussions and even including further results, but addressing all the concerns raised really needs a more substantial revision of the paper. We hope the comments and suggestions made by us can help the authors prepare a revised version that will be more ready for publication.
train
[ "VLJ5eW5gQQ", "88ngKmt7AzB", "XS_RhqMrZGh", "ncbDjxFYGJv", "hSVZdhW1K5N", "gwzUfZqNPd0", "33hwrndKBOk", "16g1LohZoEF", "cMuTA3hFbjG", "gSBV9iGfEKO", "hWXYYd6RXs", "uKLzLZUP2Lf", "u8NmfoS24-L", "Q583eLQRatp", "1XvRT1yAIlW", "AVI83Ol-Y1E", "3X4v1fIQmLr", "DiSd69-v9cJ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_rev...
[ " I believe you are focusing on a very specific scenario (train with an abundance of resources but the inference is constrained resources), which is common and that is what creates issues. There are so many ways in which you can speed up inference, from hardware acceleration to parallelism to feature engineering tr...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 6, 5, 6, 3, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 3, 4, 4, 4 ]
[ "gwzUfZqNPd0", "hWXYYd6RXs", "gSBV9iGfEKO", "iclr_2022_hOaYDFpQk3g", "DiSd69-v9cJ", "3X4v1fIQmLr", "AVI83Ol-Y1E", "1XvRT1yAIlW", "Q583eLQRatp", "u8NmfoS24-L", "uKLzLZUP2Lf", "iclr_2022_hOaYDFpQk3g", "iclr_2022_hOaYDFpQk3g", "iclr_2022_hOaYDFpQk3g", "iclr_2022_hOaYDFpQk3g", "iclr_2022_h...
iclr_2022_aOX3a9q3RVV
Divisive Feature Normalization Improves Image Recognition Performance in AlexNet
Local divisive normalization provides a phenomenological description of many nonlinear response properties of neurons across visual cortical areas. To gain insight into the utility of this operation, we studied the effects on AlexNet of a local divisive normalization between features, with learned parameters. Developing features were arranged in a line topology, with the influence between features determined by an exponential function of the distance between them. We compared an AlexNet model with no normalization or with canonical normalizations (Batch, Group, Layer) to the same models with divisive normalization added (before the canonical normalization, when those were used). The normalization was performed after the RELU in all five convolutional layers. Divisive normalization always improved performance for models with batch or group or no normalization, generally by 1-2 percentage points, on both the CIFAR-100 and ImageNet databases. Divisive followed by batch normalization showed best performance. To gain insight into mechanisms underlying the improved performance, we examined several aspects of network representations. In the early layers both canonical and divisive normalizations reduced manifold capacities and increased average dimension of the individual categorical manifolds. In later layers the capacity was higher and manifold dimension lower for models roughly in order of their performance improvement. We also use the Gini index, a measure of the inequality of a distribution, as a metric for sparsity of the distribution of activities within a given layer. Divisive normalization layers increase the Gini index (i.e. increase sparsity), whereas the other normalizations decrease the Gini index in their respective layers. Nonetheless, in the final layer, the sparseness of activity increases in the order of no normalization, divisive, combined, and canonical.
We also investigate how the receptive fields (RFs) in the first convolutional layer (where RFs are most interpretable) change with normalization. Divisive normalization enhances RF Fourier power at low wavelengths, and divisive+canonical enhances power at mid (batch, group) or low (layer) wavelengths, compared to canonical alone or no normalization. In conclusion, divisive normalization enhances image recognition performance, most strongly when combined with canonical normalization, and in doing so it reduces manifold capacity and sparsity in early layers while increasing them in final layers, and increases low- or mid-wavelength power in the first-layer receptive fields.
Accept (Poster)
This paper explores the addition of a version of divisive normalization to AlexNets and compares the performance and other measures of these networks to those with more commonly used normalization schemes (batch, group, and layer norm). Various tests are performed to explore the effect of their divisive normalization. Scores were initially mixed, but after clarifications of design and experiment decisions, and experiments run in response to reviewer comments, the paper improved significantly. While reviewers still had several suggestions for further improvements, after the authors' revisions reviewers were in favor of acceptance, which I support.
train
[ "9J8hm0MgtS", "buFJZ07H1es", "SMw6xhV-QiH", "BsKySFV684H", "FtRXYdr-_mB", "CRULY7oRF-2", "sAQfIbpACK", "eDpyjLn1CLx", "o3VjKoKFnzX", "KzkngyMWtR", "t9DBd8RiGpy", "bPnOZ--_Ev8", "s0JeE096nD", "YggirAkdhDj", "IRNYVeXSf-", "KqrMych_i5S", "pTITTEWK2U", "Jw286YUGaRz", "zo4D608sAGm", ...
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_r...
[ "The authors study the role of the biologically realistic divisive normalization computation in the context of deep learning models trained to perform image classification tasks. The authors compare divisive normalization (scaling neuronal response by exponentially weighted sum of its neighbors) with those normaliz...
[ 6, -1, -1, 8, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ 4, -1, -1, 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2022_aOX3a9q3RVV", "s0JeE096nD", "KoTkJb-Qj3f", "iclr_2022_aOX3a9q3RVV", "sAQfIbpACK", "iclr_2022_aOX3a9q3RVV", "YggirAkdhDj", "pTITTEWK2U", "BsKySFV684H", "bPnOZ--_Ev8", "Jw286YUGaRz", "rAuiDV1VD2d", "eDpyjLn1CLx", "IRNYVeXSf-", "KqrMych_i5S", "CRULY7oRF-2", "9J8hm0MgtS", "z...
iclr_2022_F2r3wYar3Py
Learning from One and Only One Shot
Humans can generalize from one or a few examples, and even from very little pre-training on similar tasks. Machine learning (ML) algorithms, however, typically require large data to either learn or pre-learn to transfer. Inspired by nativism, we directly model very basic human innate priors in abstract visual tasks like character or doodle recognition. The result is a white-box model that learns transformation-based topological similarity akin to how a human would naturally and unconsciously ``distort'' an object when first seeing it. Using the simple Nearest-Neighbor classifier in this similarity space, our model approaches human-level character recognition using only one to ten examples per class and nothing else (no pre-training). This is in contrast to one-shot and few-shot settings that require significant pre-training. On standard benchmarks including MNIST, EMNIST-letters, and the harder Omniglot challenge, our model outperforms both neural-network-based and classical ML methods in the ``tiny-data'' regime, including few-shot learning models that use an extra background set to perform transfer learning. Moreover, mimicking simple clustering methods like $k$-means but in a non-Euclidean space, our model can adapt to an unsupervised setting and generate human-interpretable archetypes of a class.
Reject
Meta Review of Learning from One and Only One Shot The motivation of this work is to address the problem of learning from very few samples, which is of high relevance for many machine learning problems. The paper proposes an (interpretable) approach for one- or few-shot learning, which tries to simulate the human ability to recognize "distorted" objects. To achieve few-shot learning, they first model the topological distance with training data points while minimizing the distortions, to find neighbors that are conceptually similar to the input image. Their experimental results show that this simple method can achieve good performance when only very few samples are available and no pre-training is allowed. All reviewers, including myself, agree that this paper is well motivated and nicely written, appreciated how the authors connected ideas from neuro-psychology to their ML model, and recognized its novelty. But there are issues with the paper, raised by reviewers, that prevent it from meeting the bar for me to recommend it for acceptance at ICLR 2022. The main issue raised by all reviewers is that the proposed method is only experimentally verified on simple datasets such as MNIST, EMNIST and Omniglot (and to some extent, Quick, Draw!). The authors (to their credit) noted in the rebuttal that the narrative of the paper focuses on abstract images, and that the purpose is more a scientific investigation (rather than proposing an algorithm that is immediately useful for ML practitioners), and this is a fair point. However, I do believe the issue here goes beyond the simple criticism of "it works on MNIST, how about ImageNet?", as I think some reviewers genuinely think there are fundamental aspects of the approach that might prevent it from scaling (as planned in future work, even by the authors in the last section). For instance, as gUCx noted: 1) The proposed approach is based on topological similarity.
It seems that it is only suitable for images with a simple topological structure, such as character images. It can hardly be used to classify complex natural images, since natural image classification requires more information than topological structure alone. 2) The authors did not provide an experimental comparison with enough training data, such as the whole training set of MNIST. The reviewer wonders about the upper bound on the performance of this approach with enough data. I tend to agree with these points. Non-topological similarity can be displayed in abstract images / datasets. Even for "abstract" images, the paper should describe the limitations of the approach, and whether it breaks down (for example, in the "abstract" Quick, Draw! dataset, there are different types of distinct "yoga" poses in the yoga class; likewise, in the cat or pig class, there are animals with only the head, and animals with the head and the full body). Conveniently, Quick, Draw! had not been used in any of their classification experiments [1], only for a simple clustering example. As for the other points, reporting the terminal MNIST performance would be useful, even if it doesn't look good, so the readers have an idea of the limitations of the approach: where it is good, where it is not, and what needs to be improved. I would love to see improvements (either in the writing or in the experiments) in future work where the paper can effectively convince the readers that the direction has the promise of being able to scale to "real" or "complex" images. (Perhaps even applying the approach to the output of a self-supervised autoencoder pre-trained on ImageNet, as a method to get "abstract" versions of real photos, as a parallel of the giraffe experiment, though this may distract from the narrative of no pre-training.) All in all, I don't want to discourage the authors, as we are all excited about the direction of this work.
I hope to see an updated version of this work published in a future venue, good luck! [1] https://www.kaggle.com/c/quickdraw-doodle-recognition
train
[ "C_46y38heU3", "pfMNjXc5tyd", "KAIkOptvaQu", "Ut7sZ_Y6ev", "DPoowDAtEG1", "00lJ8Fk3ExV", "6CoK54yx5p", "6Db_y4RdZBU", "9iQp8yBFUJ4", "aVn1lz2UGEl", "kHhalDTc-SN", "v6l65_ty88", "8SPTRC6ufRS", "UhmUoLx-ChV", "AxtvJw71nEg", "8scDNRBrpYw", "i0bnNuYAy0K", "-DXNGxtOoA", "aqCN9n9lbo-",...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", ...
[ " We are very grateful for these discussions with the Reviewer. All suggestions and advice therein have been inspiring and greatly helped us improve the experiments and better present the effectiveness of our mathematical model. We are finalizing another substantial pass of revision of the paper based on insights f...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 5, 5 ]
[ "pfMNjXc5tyd", "KAIkOptvaQu", "Ut7sZ_Y6ev", "00lJ8Fk3ExV", "6CoK54yx5p", "6Db_y4RdZBU", "v6l65_ty88", "AxtvJw71nEg", "iclr_2022_F2r3wYar3Py", "iclr_2022_F2r3wYar3Py", "NkBaio_qxt8", "NkBaio_qxt8", "qapSJV3_hAr", "aqCN9n9lbo-", "aqCN9n9lbo-", "-DXNGxtOoA", "-DXNGxtOoA", "iclr_2022_F...
iclr_2022_RjMtFbmETG
Resmax: An Alternative Soft-Greedy Operator for Reinforcement Learning
Soft-greedy operators, namely $\varepsilon$-greedy and softmax, remain a common choice to induce a basic level of exploration for action-value methods in reinforcement learning. These operators, however, have a few critical limitations. In this work, we investigate a simple soft-greedy operator, which we call resmax, that takes actions proportionally to their suboptimality gap: the residual to the estimated maximal value. It is simple to use and ensures coverage of the state-space like $\varepsilon$-greedy, but focuses exploration more on potentially promising actions like softmax. Further, it does not concentrate probability as quickly as softmax, and so better avoids overemphasizing sub-optimal actions that appear high-valued during learning. Additionally, we prove it is a non-expansion for any fixed exploration hyperparameter, unlike the softmax policy which requires a state-action specific temperature to obtain a non-expansion (called mellowmax). We empirically validate that resmax is comparable to or outperforms $\varepsilon$-greedy and softmax across a variety of environments in tabular and deep RL.
Reject
This paper proposes a new softmax-like operator, to be used instead of eps-greedy or softmax in Q-learning algorithms. There has been some previous work in this direction, most notably Mellowmax, but the proposed operator is more computationally efficient, and there is some experimental evidence that it improves DQN performance. The reviews were mixed, with two mildly positive reviewers (6), who found the work interesting, and two negative reviewers (3, 5), who raised issues about the impact of the work when taken as part of a larger RL algorithm, and about the generality of the work w.r.t. other RL algorithms like policy gradients. During the discussion, the reviewers did not reach an agreement. My decision to reject the paper is based on the following: while the idea is novel, and the contraction analysis is appropriate, the main interest to the community in such an idea is either experimental - can it be used to push the state of the art of RL algorithms? - or theoretical - can we glean new theoretical insights using this method? In its current presentation, there is not enough evidence in the paper to support either of these. I encourage the authors to either dig deeper into the experimental evaluation and produce more convincing results, or dig deeper into the theory and show some theoretical benefit of Resmax.
train
[ "DUULLKFPvCQ", "n_ss2Ub7H_n", "3uLolM5g4DP", "NUoMFNi2Ks", "q8iBZ0BOXis", "yJubRw0T3S", "6XL65V0v2J-", "5aaNBoTdEa4", "t4Mr4jx9T0", "_qijYKDYLY_" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper introduces a new soft operator, resmax, for mapping Q-values to action probabilities. This operator is designed to replace softmax in Boltzmann-style policies while having the non-expansion property which enables the convergence of Q learning. The paper provides theory demonstrating the coverage and no...
[ 5, -1, -1, -1, -1, -1, -1, 6, 3, 6 ]
[ 3, -1, -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "iclr_2022_RjMtFbmETG", "3uLolM5g4DP", "6XL65V0v2J-", "t4Mr4jx9T0", "_qijYKDYLY_", "DUULLKFPvCQ", "5aaNBoTdEa4", "iclr_2022_RjMtFbmETG", "iclr_2022_RjMtFbmETG", "iclr_2022_RjMtFbmETG" ]
iclr_2022_YYHXJOawkPb
The Evolution of Out-of-Distribution Robustness Throughout Fine-Tuning
Although machine learning models typically experience a drop in performance on out-of-distribution data, accuracies on in- versus out-of-distribution data are widely observed to follow a single linear trend when evaluated across a testbed of models. Models that are more accurate on the out-of-distribution data relative to this baseline exhibit “effective robustness” and are exceedingly rare. Identifying such models, and understanding their properties, is key to improving out-of-distribution performance. We conduct a thorough empirical investigation of effective robustness during fine-tuning and surprisingly find that models pre-trained on larger datasets exhibit effective robustness during training that vanishes at convergence. We study how properties of the data influence effective robustness, and we show that it increases with the larger size, more diversity, and higher example difficulty of the dataset. We also find that models that display effective robustness are able to correctly classify 10% of the examples that no other current testbed model gets correct. Finally, we discuss several strategies for scaling effective robustness to the high-accuracy regime to improve the out-of-distribution accuracy of state-of-the-art models.
Reject
Thank you for your submission to ICLR. The reviewers are quite split on this paper, but some remain substantially negative even after discussion. I'm a bit more optimistic about the paper: the observed increase then decrease in ER during fine-tuning _does_ strike me as a fundamentally interesting phenomenon, and I believe that papers that present such phenomena can be valuable contributions even without more fundamental "explanations" of the observations. My recommendation, therefore, ultimately rests largely on my view (honestly evidenced by the reviews to a large degree) that the presentation and contextualization of these results can be substantially improved in a future revision of the paper. Specifically, the fact that several reviewers found the results obvious and/or not sufficiently substantiated suggests that the basic premises here are still failing to land. I would strongly suggest revisions that clarify these points in a resubmission.
train
[ "AJfZMLnWW0t", "kqHAA0FV4bI", "ifpIjM6pki3", "iTwhMw28rWC" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In this paper, the authors conduct a thorough empirical investigation of effective robustness during fine-tuning and have several observations: 1. models pre-trained on larger datasets in the middle of fine-tuning, as well as zero-shot pre-trained models, exhibit high amounts of effective robustness, but the effec...
[ 3, 8, 3, 3 ]
[ 4, 3, 3, 4 ]
[ "iclr_2022_YYHXJOawkPb", "iclr_2022_YYHXJOawkPb", "iclr_2022_YYHXJOawkPb", "iclr_2022_YYHXJOawkPb" ]
iclr_2022_bgAS1ZvveZ
Faster Reinforcement Learning with Value Target Lower Bounding
We show that an arbitrary lower bound of the optimal value function can be used to improve the Bellman value target during value learning. In the tabular case, value learning under the lower-bounded Bellman operator converges to the same optimal value as under the original Bellman operator, at a potentially faster speed. In practice, the discounted episodic return from the training experience or the discounted goal return from hindsight relabeling can serve as the value lower bound when the environment is deterministic. This is because the empirical episodic return from any state can always be repeated through the same action sequence in a deterministic environment, and is thus a lower bound of the optimal value from that state. We experiment on Atari games, FetchEnv tasks and a challenging physically simulated car push and reach task. We show that in most cases, simply lower bounding with the discounted episodic return performs at least as well as common baselines such as TD3, SAC and Hindsight Experience Replay (HER). It learns much faster than TD3 or HER on some of the harder continuous control tasks, requiring minimal or no parameter tuning.
Reject
In the end, this paper essentially proposes a minor variation on an idea that 1) has been published before, 2) is not used extensively at all, and 3) seems applicable (in its current form) only to deterministic environments. This, without additional insights or analyses, seems too marginal a contribution for acceptance. The paper is not poorly executed, and the authors engaged well during discussion, for which I would like to thank them. I would like to encourage the authors to consider the reviewers' comments, and in particular perhaps answer more clearly and directly what they are adding to the literature. It could be that there is something particularly insightful in the detailed differences with past work, but this has not become sufficiently clear to me during this discussion phase.
train
[ "Z4w2FSshVsZ", "uYz4s4_T2kq", "owV2cEdnej", "y67-6NjDAuM", "cF4yQMdu3Ja", "JWtjsbKVgdi", "FT2VhHkYjM", "ngwpD79wkEf", "uioCPDtIq4Q", "fJ46SB5xJbB", "YKWNmyFUxa4", "I9ROzmvPDP", "PrWSKdq5enx", "z-cg3Ac66IB", "bSlx4qactYE", "5mY54SLhb7j", "6zhfsEbEweL", "JwJhakINA1", "iUxu1xI8WvX",...
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "a...
[ " Thank you for the response, and I apologize for not being more involved during the end of the discussion period. I appreciate the number of changes by the authors. I have looked through the adjustments made by the author to the paper as well as the other reviews. I stand by my original review. \n\nMy suggestion f...
[ -1, -1, 6, -1, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ -1, -1, 4, -1, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ "5mY54SLhb7j", "y67-6NjDAuM", "iclr_2022_bgAS1ZvveZ", "6zhfsEbEweL", "iclr_2022_bgAS1ZvveZ", "iclr_2022_bgAS1ZvveZ", "YKWNmyFUxa4", "uioCPDtIq4Q", "fJ46SB5xJbB", "bSlx4qactYE", "I9ROzmvPDP", "PrWSKdq5enx", "z-cg3Ac66IB", "JwJhakINA1", "cF4yQMdu3Ja", "qgQANIQyBd2", "owV2cEdnej", "JW...
iclr_2022_XhMa8XPHxpw
Low-Precision Stochastic Gradient Langevin Dynamics
Low-precision optimization is widely used to accelerate large-scale deep learning. Despite providing better uncertainty estimation and generalization, sampling methods remain mostly unexplored in this space. In this paper, we provide the first study of low-precision Stochastic Gradient Langevin Dynamics (SGLD), arguing that it is particularly suited to low-bit arithmetic due to its intrinsic ability to handle system noise. We prove the convergence of low-precision SGLD on strongly log-concave distributions, showing that with full-precision gradient accumulators, SGLD is more robust to quantization error than SGD; however, with low-precision gradient accumulators, SGLD can diverge arbitrarily far from the target distribution with small stepsizes. To remedy this issue, we develop a new quantization function that preserves the correct variance in each update step. We demonstrate that the resulting low-precision SGLD algorithm is comparable to full-precision SGLD and outperforms low-precision SGD on deep learning tasks.
Reject
The paper investigates the performance of low-precision Stochastic Gradient Langevin Dynamics (SGLD). While similar low-precision techniques have been widely used in optimization, much less is known for Markov Chain Monte Carlo (MCMC) methods. The paper develops a new quantization function to make SGLD suitable for low-precision setups and argues for its use in deep learning. The main concerns among the reviewers were related to the paper presentation (separation and comparison between optimization and sampling), comparison to Dalalyan-Karagulyan'19 and overview of this work, technical depth, and numerical experiments. The authors have adequately responded to the reviewers' comments and addressed them to the extent possible. However, there was ultimately not enough support to lead this paper to acceptance. I find low-precision sampling a worthy topic of study and the contributions of the paper are interesting. The authors are encouraged to revise the paper based on the reviewers' comments, more clearly highlight the contributions, and resubmit.
train
[ "Aqi7vUlXvH", "39GB87-aYj", "mQB---_SlWE", "dCtn5-uNTJc", "Cq2VNHoE4H4", "8ufPLqdGho", "N3YoCsMCVSQ", "9ObRIKCBmY0", "iYsfnM1vrWH" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank all reviewers for their constructive review and have revised the paper accordingly. The main changes are the following:\n\n- Section 4. We revised the mathematical statements and notations to avoid confusion.\n\n- Appendix A. We added more background information of [Dalalyan & Karagulyan, 2019].\n\n- App...
[ -1, 6, -1, -1, -1, -1, 3, 3, 3 ]
[ -1, 4, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2022_XhMa8XPHxpw", "iclr_2022_XhMa8XPHxpw", "iYsfnM1vrWH", "9ObRIKCBmY0", "39GB87-aYj", "N3YoCsMCVSQ", "iclr_2022_XhMa8XPHxpw", "iclr_2022_XhMa8XPHxpw", "iclr_2022_XhMa8XPHxpw" ]
iclr_2022_6kruvdT0yfY
C+1 Loss: Learn to Classify C Classes of Interest and the Background Class Differentially
A common problem throughout the classification area is that of classifying C+1 classes of samples: C semantically deterministic classes, which we call classes of interest, and the (C+1)-th semantically undeterministic class, which we call the background class. Although most classification algorithms use a softmax-based cross-entropy loss to supervise the classifier training process without differentiating the background class from the classes of interest, this is unreasonable, as each of the classes of interest has its own inherent characteristics, but the background class doesn't. We argue that the background class should be treated differently from the classes of interest during training. Motivated by this, we first define the C+1 classification problem. Then, we propose three properties that a good C+1 classifier should have: basic discriminability, compactness, and background margin. Based on these, we define a uniform general C+1 loss, composed of three parts, driving the C+1 classifier to satisfy those properties. Finally, we instantiate a C+1 loss and evaluate it on semantic segmentation, human parsing and object detection tasks. The proposed approach shows its superiority over the traditional cross-entropy loss.
Reject
This paper presents work on classification with a background class. The reviewers appreciated the important, standard problem the paper considers. However, concerns were raised regarding presentation, empirical evaluation, clarity, novelty, and significance of the work. The reviewers considered the authors' response in their subsequent discussions but felt the concerns were not adequately addressed. Based on this feedback, the paper is not yet ready for publication in ICLR.
train
[ "RHN_C1lCpCb", "C6VUZSVWgI1", "MUa8s0j5v-w", "w7njNrWWAo-", "u5bHrl5yANR", "H_3L0-SCfqq", "G3mB1OgVsQ3", "qZvCHpK16VH", "9h01HZfT8O_", "S6xHOVCu2QL" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I appreciate the authors' response. However, as agreed by the majority of reviewers, the paper lacks novelty and the quality needs significant improvement. The response did not add extra evidence to address these concerns. Thus, I would like to keep my original rating. \nI mention Open-Set-Recognition(OSR) becaus...
[ -1, 6, -1, -1, -1, -1, -1, 3, 1, 3 ]
[ -1, 4, -1, -1, -1, -1, -1, 5, 5, 4 ]
[ "MUa8s0j5v-w", "iclr_2022_6kruvdT0yfY", "S6xHOVCu2QL", "iclr_2022_6kruvdT0yfY", "C6VUZSVWgI1", "9h01HZfT8O_", "qZvCHpK16VH", "iclr_2022_6kruvdT0yfY", "iclr_2022_6kruvdT0yfY", "iclr_2022_6kruvdT0yfY" ]
iclr_2022_dHJtoaE3yRP
NAFS: A Simple yet Tough-to-Beat Baseline for Graph Representation Learning
Recently, graph neural networks (GNNs) have shown prominent performance in graph representation learning by leveraging knowledge from both graph structure and node features. However, most of them have two major limitations. First, GNNs can learn higher-order structural information by stacking more layers, but cannot handle large depths due to the over-smoothing issue. Second, it is not easy to apply these methods to large graphs due to the high computation cost and memory usage. In this paper, we present node-adaptive feature smoothing (NAFS), a simple non-parametric method that constructs node representations without parameter learning. NAFS first extracts the features of each node with its neighbors of different hops by feature smoothing, and then adaptively combines the smoothed features. Besides, the constructed node representation can further be enhanced by an ensemble of smoothed features extracted via different smoothing strategies. We conduct experiments on four benchmark datasets in two different application scenarios: node clustering and link prediction. Remarkably, NAFS with feature ensemble outperforms state-of-the-art GNNs on these tasks and mitigates the aforementioned two limitations of most learning-based GNN counterparts.
Reject
The paper proposes NAFS (Node-Adaptive Feature Smoothing), which constructs node representations by only using smoothing without parameter learning. The authors first provide a formulation for the smoothing operator. They then define over-smoothing distance to assess how much a node is close to the stationary state. Finally, they use the over-smoothing distance to calculate a smoothing weight for each node. Experiments are conducted to verify the efficacy. Strength * The paper tackles the problem of over-smoothing, which is a well-known issue in GNN. * The solution appears to be effective. * The paper is generally clearly written. Weakness * The novelty and significance of the work might not be enough. Aspects of the contributions exist in prior work. --- Additional experiments have been conducted during the rebuttal. The reviewers appreciate the efforts. After rebuttal: Reviewer SHxg increased the score accordingly. Reviewer w2Qg says “Given the concerns proposed by the other reviewers, I adjusted my score.” Reviewer YM4P says “I read the rebuttal and slightly increased my score.”
val
[ "n57s8fBSnXt", "KNwhe5gRywR", "QkBl6Emo9fX", "vcqcSXBIOjR", "_djLa1sfIBS", "j6Ft2DGVzCO", "PVpxuVjr2O9", "tZHz1pgNNgy", "-Cbe1lWgS9H", "m0YF7XJ6J3", "SnKjV80XzsK", "Bzp9LJNIQFk", "F3It1jaK1fN", "0x2ZZLu7z8A", "-W-f_56Jdu5" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ " Thanks for your helpful and valuable reviews! We particularly appreciate the advice of\n1) Adding more theoretical analysis; \n2) Evaluation on large OGB datasets;\n3) Scalability analysis on real-world datasets;\n4) Discussions about Linear GNNs & Scalable GNN;\n5) Discussions about the liftability to other GNNs...
[ -1, -1, 6, -1, -1, -1, -1, 6, 5, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, 4, -1, -1, -1, -1, 4, 4, -1, -1, -1, -1, -1, -1 ]
[ "j6Ft2DGVzCO", "PVpxuVjr2O9", "iclr_2022_dHJtoaE3yRP", "QkBl6Emo9fX", "QkBl6Emo9fX", "m0YF7XJ6J3", "tZHz1pgNNgy", "iclr_2022_dHJtoaE3yRP", "iclr_2022_dHJtoaE3yRP", "F3It1jaK1fN", "QkBl6Emo9fX", "vcqcSXBIOjR", "-W-f_56Jdu5", "Bzp9LJNIQFk", "-Cbe1lWgS9H" ]
iclr_2022_Cm08egNmrl3
BLOOD: Bi-level Learning Framework for Out-of-distribution Generalization
Empirical risk minimization (ERM) based machine learning algorithms have suffered from weak generalization performance on the out-of-distribution (OOD) data when the training data are collected from separate environments with unknown spurious correlations. To address this problem, previous works either exploit prior human knowledge for biases in the dataset or apply the two-stage process, which re-weights spuriously correlated samples after they were identified by the biased classifier. However, most of them fail to remove multiple types of spurious correlations that exist in training data. In this paper, we propose a novel bi-level learning framework for OOD generalization, which can effectively remove multiple unknown types of biases without any prior bias information or separate re-training steps of a model. In our bi-level learning framework, we uncover spurious correlations in the inner-loop with shallow model-based predictions and dynamically re-group the data to leverage the group distributionally robust optimization method in the outer-loop, minimizing the worst-case risk across all batches. Our main idea applies the unknown bias discovering process to the group construction method of the group DRO algorithm in a bi-level optimization setting and provides a unified de-biasing framework that can handle multiple types of biases in data. In empirical evaluations on both synthetic and real-world datasets, our framework shows superior OOD performance compared to all other state-of-the-art OOD methods by a large margin. Furthermore, it successfully removes multiple types of biases in the training data groups that most other OOD models fail.
Reject
The paper studies the problem of OOD classification: the test and training data distributions can have different spurious feature-class dependencies. The reviewers stated that the proposed procedure is a natural choice, with a simple implementation. Another positive point is that it could easily be incorporated into many off-the-shelf machine learning training algorithms. Yet, the technical novelty was considered limited. The bi-level optimization point of view and the connection with min-max optimization problems raised some concerns, as the vocabulary used could be misleading. It was also noted that the paper lacks theoretical support: no formal analysis, most explanations are ad hoc, etc.
test
[ "aZx3H9vUWIp", "LTXW8a4MmMc", "Kg3x_sdLef_", "mDRhCNoZkAS", "qnte0DUlkDs", "3QeCEePAy4R", "eqa29GZOA4L", "jtF3l8a8bak", "tn2ktkJNFEd", "RucIcDnDLh1", "RPhmx7pt_fe", "iCpf5SdNtC1" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your quick comment.\n\nSince the Colored MNIST task is not computationally complex, we did grid search over inner learning step $\\alpha$, outer learning step $\\beta$, and group weights $\\gamma$.\nWe considered $\\alpha$, $\\beta$ $\\in$ {0.1, 0.01, 0.001, 0.0001} and $\\gamma$ $\\in$ { 0.1, 0.01}...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "LTXW8a4MmMc", "jtF3l8a8bak", "RucIcDnDLh1", "iCpf5SdNtC1", "RPhmx7pt_fe", "iCpf5SdNtC1", "RucIcDnDLh1", "tn2ktkJNFEd", "iclr_2022_Cm08egNmrl3", "iclr_2022_Cm08egNmrl3", "iclr_2022_Cm08egNmrl3", "iclr_2022_Cm08egNmrl3" ]
iclr_2022_THMafOyRVpE
Fully Online Meta-Learning Without Task Boundaries
While deep networks can learn complex classifiers and models, many applications require models that continually adapt to changing input distributions, changing tasks, and changing environmental conditions. Indeed, this ability to continuously accrue knowledge and use past experience to learn new tasks quickly in continual settings is one of the key properties of an intelligent system. For complex and high-dimensional problems, simply updating the model continually with standard learning algorithms such as gradient descent may result in slow adaptation. Meta-learning can provide a powerful tool to accelerate adaptation but is conventionally studied in batch settings. In this paper, we study how meta-learning can be applied to tackle online problems of this nature, simultaneously adapting online to changing tasks and input distributions and meta-training the model in order to adapt more quickly in the future. Extending meta-learning into the online setting presents its own challenges, and although several prior methods have studied related problems, they generally require a discrete notion of tasks, with known ground-truth task boundaries. Such methods typically adapt to each task in sequence, resetting the model between tasks, rather than adapting continuously across tasks. In many real-world settings, such discrete boundaries are unavailable, and may not even exist. To address these settings, we propose a Fully Online Meta-Learning (FOML) algorithm, which does not require any ground-truth knowledge about the task boundaries and stays fully online without resetting back to pre-trained weights. Our experiments show that FOML is able to learn new tasks faster than state-of-the-art online learning methods on the Rainbow-MNIST and CIFAR100 datasets.
Reject
The paper proposes a Fully Online Meta-Learning (FOML) method which extends MAML for continual learning in a fully online setting without requiring knowledge of the task boundaries. Experiments show that FOML is able to learn new tasks faster than several existing online learning methods on the Rainbow-MNIST and CIFAR100 datasets. There are a few major concerns from reviewers. One concern is the lack of clarity in the problem statement: the authors cast the problem as meta-learning that must be done in a fully online setting, but it requires keeping a buffer storing all the training data seen so far, which contradicts the principle of "online learning". Another major weakness is the poorly written literature survey, which fails to cite a large body of related work in continual learning and online meta-learning (such as online continual learning, task-free continual learning, continual learning without task boundaries, etc.). These should at least be discussed carefully, if not fully compared against in the empirical studies. The experiments are also quite weak in terms of settings, datasets, and rather out-of-date baselines. Finally, the paper also lacks theoretical justification or analysis. Therefore, the paper is not recommended for acceptance in its current form. I hope the authors find the review comments informative and can improve their paper by addressing them carefully in future submissions.
val
[ "cCYN_OG3CHG", "X7lWyu7t0C", "LjEuM7w-1eu", "8wRvJ64zi3a", "6yPWyT28Er", "5j0Meropsu8", "XoS_c6QCMu", "dl0mbvK7vd3", "EriNxFhua0q", "SuXnoifadT", "sT_6Z-qOViy" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ " Dear Reviewer,\n\nWe will definitely add comparisons to MOCA on the other tasks in the final -- there is nothing preventing us from doing this, but unfortunately during the rebuttal period, we simply did not have the time to run these experiments to completion, since the CVPR deadline was in the middle of the reb...
[ -1, 5, -1, -1, 6, 6, -1, -1, -1, -1, 6 ]
[ -1, 3, -1, -1, 5, 4, -1, -1, -1, -1, 3 ]
[ "LjEuM7w-1eu", "iclr_2022_THMafOyRVpE", "XoS_c6QCMu", "dl0mbvK7vd3", "iclr_2022_THMafOyRVpE", "iclr_2022_THMafOyRVpE", "X7lWyu7t0C", "6yPWyT28Er", "5j0Meropsu8", "sT_6Z-qOViy", "iclr_2022_THMafOyRVpE" ]
iclr_2022_G7PfyLimZBp
Understanding the Generalization of Adam in Learning Neural Networks with Proper Regularization
Adaptive gradient methods such as Adam have gained increasing popularity in deep learning optimization. However, it has been observed that in many deep learning applications, such as image classification, Adam can converge to a different solution with a worse test error compared to (stochastic) gradient descent, even with fine-tuned regularization. In this paper, we provide a theoretical explanation for this phenomenon: we show that in the nonconvex setting of learning over-parameterized two-layer convolutional neural networks starting from the same random initialization, for a class of data distributions (inspired by image data), Adam and gradient descent (GD) can converge to different global solutions of the training objective with provably different generalization errors, even with weight decay regularization. In contrast, we show that if the training objective is convex and weight decay regularization is employed, any optimization algorithm, including Adam and GD, will converge to the same solution if the training is successful. This suggests that the generalization gap between Adam and SGD is fundamentally tied to the nonconvex landscape of deep learning optimization, which cannot be covered by the recent neural tangent kernel (NTK) based analysis.
Reject
The paper considers the difference between GD and ADAM in terms of implicit bias. It considers a specific distribution and architecture where the two algorithms converge to different solutions while perfectly fitting the training data. The authors highlight the fact that this happens while adding regularization, which does not happen in the linear case. The reviewers found some of the insights and analysis interesting. However, they also had reservations about the impact of the results given that it is known that GD and ADAM have different implicit biases, and that the distribution appears specifically crafted towards showing this effect for the architecture studied. In future versions, the authors are encouraged to better motivate the chosen distribution, use more standard neural architecture (e.g., standard relu), and provide more explanation about the role of regularization in their result.
val
[ "xKT1BsYxNuT", "GxhJpvVlxi9", "298MKT0Hk4X", "AX1kCBnHYAd", "jhrCgU6NCgk", "KcdVMKAo9cO", "r92KO_PsnmP", "MxhJs17okU", "ZEG1dRiygm3", "OIY-GhBI5WS", "VnwXpsDMLnj", "KE3LjLLY2os", "Lk2boI5v8kx", "qCKyy1Hb1b9", "oCPklNiF3PJ", "PpDzfz5QrlY", "jDCYInPzmB", "-QRejmaVj5P", "c7LiMRSoZKb...
[ "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", ...
[ " Thank you for your reply. We answer your additional comments and questions as follows.\n \n**Q:** I understand the difference between feature noise and random noise. What I don't understand is why the variance of random noise has to decrease as the number of samples (n) increases.\n\n**A:** The scaling of the va...
[ -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 3 ]
[ -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "298MKT0Hk4X", "jhrCgU6NCgk", "ZEG1dRiygm3", "r92KO_PsnmP", "qCKyy1Hb1b9", "iclr_2022_G7PfyLimZBp", "MxhJs17okU", "OIY-GhBI5WS", "T4XKWWHcVyR", "KE3LjLLY2os", "iclr_2022_G7PfyLimZBp", "KcdVMKAo9cO", "KcdVMKAo9cO", "c7LiMRSoZKb", "-QRejmaVj5P", "jDCYInPzmB", "iclr_2022_G7PfyLimZBp", ...
iclr_2022_Uy6YEI9-6v
Object-Centric Neural Scene Rendering
We present a method for composing photorealistic scenes from captured images of objects. Traditional computer graphics methods are unable to model objects from observations only; instead, they rely on underlying computer graphics models. Our work builds upon neural radiance fields (NeRFs), which implicitly model the volumetric density and directionally-emitted radiance of a scene. While NeRFs synthesize realistic pictures, they only model static scenes and are closely tied to specific imaging conditions. This property makes NeRFs hard to generalize to new scenarios, including new lighting or new arrangements of objects. Instead of learning a scene radiance field as a NeRF does, we propose to learn object-centric neural scattering functions (OSFs), a representation that models per-object light transport implicitly using a lighting- and view-dependent neural network. This enables rendering scenes even when objects or lights move, without retraining. Combined with a volumetric path tracing procedure, our framework is capable of rendering light transport effects including occlusions, specularities, shadows, and indirect illumination, both within individual objects and between different objects. We evaluate our approach on synthetic and real world datasets and generalize to novel scene configurations, producing photorealistic, physically accurate renderings of multi-object scenes.
Reject
All three reviewers recommend borderline rejection based on limited novelty, missing comparisons with other methods, and runtime inefficiency. The authors’ response helped clarify other questions but did not eliminate the main concerns about the paper. The AC agrees with the reviewers that, in its current form, the paper does not pass the acceptance bar of ICLR. The reviews have detailed comments and suggestions that should help the authors to improve the work for another conference.
test
[ "0iUh1Na18tn", "qP6lkshn9wU", "n0QKNAaSXXW", "uEs18qQe4DA", "9hRKtXfV3r0", "8sjZWL1ecWJ", "TmN4f8-IP48", "yPqXpheRwId", "hfbUkqiPRyf", "vDz-N-_in3", "68fiyJrrY9A" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a new neural implicit representation called object-centric scattering function for scene compositing application. The major extension is to add the lighting direction as an input so that the new representation can be used for relighting. To train this representation, the authors propose minimiz...
[ 5, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "iclr_2022_Uy6YEI9-6v", "yPqXpheRwId", "9hRKtXfV3r0", "8sjZWL1ecWJ", "68fiyJrrY9A", "vDz-N-_in3", "hfbUkqiPRyf", "0iUh1Na18tn", "iclr_2022_Uy6YEI9-6v", "iclr_2022_Uy6YEI9-6v", "iclr_2022_Uy6YEI9-6v" ]
iclr_2022_r88Isj2alz
NODEAttack: Adversarial Attack on the Energy Consumption of Neural ODEs
Recently, Neural ODE (Ordinary Differential Equation) models have been proposed, which use ordinary differential equation solving to predict the output of a neural network. Due to their low memory usage, Neural ODE models can be considered an alternative that can be deployed on resource-constrained devices (e.g., IoT devices, mobile devices). However, deploying a Deep Learning model on resource-constrained devices requires low inference energy cost as well as low memory cost. Unlike the memory cost, the energy consumption of Neural ODEs during inference can be adaptive because of the adaptive nature of the ODE solvers. Attackers can leverage this adaptive behaviour to attack the energy consumption of Neural ODEs. However, energy-based attack scenarios have not been explored against Neural ODEs. To show the vulnerability of Neural ODEs to adversarial energy-based attacks, we propose NODEAttack. The objective of NODEAttack is to generate adversarial inputs that require more ODE solver computations, thereby increasing Neural ODEs' inference-time energy consumption. Our extensive evaluation on two datasets and two popular ODE solvers shows that the samples generated through NODEAttack can consume up to 168% more energy during inference than the average energy consumption of benign test data. Our evaluation also shows that attack transferability is feasible across solvers and architectures. Also, we perform a case study showing the impact of the generated adversarial examples, which shows that NODEAttack-generated adversarial examples can decrease the efficiency of an object-recognition-based mobile application by 50%.
Reject
The paper proposes an energy consumption attack on neural ODE models. There are two complaints from the reviewers: - Although this is a new application of energy consumption attacks, most of the attack techniques are simple extensions of previous attack papers, so the novelty is questioned by some of the reviewers. - The paper is poorly written. We therefore decide to reject the paper and encourage the authors to address the concerns in their next submission. Reviewers also think a careful discussion of defense or detection mechanisms against the proposed attack would be a good addition.
test
[ "q1wm7rYcQk", "IOwjVeiCF0G", "maw_DVDxj2M", "R6pioqzB4fA", "oBmhsbHmOM", "39OqI1G_jXP", "UI8BOj5pq3L", "lZG7gFQVQSr", "BcbY6ZHLla", "lN4PHa4FTbu" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the reviewer for doing the requested comparison and adding it to the appendix.", " Thank you for your feedback on our work. Here are our comments for the asked queries.\nFirst we apologize for the confusion caused by our writing in some places. While we have updated the paper to avoid these misundersta...
[ -1, -1, -1, -1, -1, -1, 3, 6, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "R6pioqzB4fA", "lN4PHa4FTbu", "BcbY6ZHLla", "lZG7gFQVQSr", "UI8BOj5pq3L", "iclr_2022_r88Isj2alz", "iclr_2022_r88Isj2alz", "iclr_2022_r88Isj2alz", "iclr_2022_r88Isj2alz", "iclr_2022_r88Isj2alz" ]
iclr_2022_uB12zutkXJR
GRAPHIX: A Pre-trained Graph Edit Model for Automated Program Repair
We present GRAPHIX, a pre-trained graph edit model for automatically detecting and fixing bugs and code quality issues in Java programs. Unlike sequence-to-sequence models, GRAPHIX leverages the abstract syntax structure of code and represents the code using a multi-head graph encoder. Along with an autoregressive tree decoder, the model learns to perform graph edit actions for automated program repair. We devise a novel pre-training strategy for GRAPHIX, namely deleted sub-tree reconstruction, to enrich the model with implicit knowledge of program structures from unlabeled source code. The pre-training objective is made consistent with the bug fixing task to facilitate the downstream learning. We evaluate GRAPHIX on the Patches in The Wild Java benchmark, using both abstract and concrete code. Experimental results show that GRAPHIX significantly outperforms a wide range of baselines including CodeBERT and BART and is as competitive as other state-of-the-art pre-trained Transformer models despite using one order of magnitude fewer parameters. Further analysis demonstrates strong inductive biases of GRAPHIX in learning meaningful structural and semantic code patterns, both in abstract and concrete source code.
Reject
This paper presents an approach for machine learning to fix programming errors via edits to abstract syntax trees. The main contributions are a pretraining scheme based on masking out subtrees and some minor architectural modifications compared to previous work. Reviewers found the paper to contain a significant amount of work, but there are some questions about significance relative to previous work that framed the problem similarly, and about experimental methodology. Authors did a great deal of work in the rebuttal to address many of the experimental methodology questions, but this also introduced substantial unreviewed changes to the model, the pretraining approach, and the experiments. In total, the remaining concerns about significance and the substantial changes lead us to recommend that this paper be revised and resubmitted to the next conference.
train
[ "p7F4dC0ZT-O", "7EqFzmK7NT1", "SpO2SANjmQI", "nFbMJvYc0Yp", "3-oeEUw5MN", "h2l55ITYl1n", "BhW7XLEGTD", "CeuYUk6crOw", "bCmh5OfP1L-", "4IZf-9FETO", "4BMMcG7CXqp", "5i0QMUgu3t", "63rwv9BXF84", "4zYW0y-PnE" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you to the authors for their detailed response -- and apologies for the delay in response.\n\nAs other reviewers have noted, the changes suggested by other reviewers -- and made by the authors -- are substantial, likely enough to require re-review of a revised paper.\n\nFurther, I actually would maintain my...
[ -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, 5, 5, 6 ]
[ -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4 ]
[ "4IZf-9FETO", "4BMMcG7CXqp", "3-oeEUw5MN", "iclr_2022_uB12zutkXJR", "CeuYUk6crOw", "nFbMJvYc0Yp", "nFbMJvYc0Yp", "nFbMJvYc0Yp", "63rwv9BXF84", "5i0QMUgu3t", "4zYW0y-PnE", "iclr_2022_uB12zutkXJR", "iclr_2022_uB12zutkXJR", "iclr_2022_uB12zutkXJR" ]
iclr_2022_E9z2A1-O7e
HyperTransformer: Attention-Based CNN Model Generation from Few Samples
In this work we propose HyperTransformer, a transformer-based model that generates all weights of a CNN model directly from the support samples. This approach allows using a high-capacity model for encoding task-dependent variations in the weights of a smaller model. We show on multiple few-shot benchmarks with different architectures and datasets that our method beats or matches traditional learning methods in the few-shot regime. Specifically, we show that for very small target models, our method can generate significantly better-performing models than traditional few-shot learning methods. For larger models we discover that applying generation to the last layer only allows producing competitive or better results while remaining end-to-end differentiable. Finally, we extend our approach to the semi-supervised regime, utilizing unlabeled samples in the support set and further improving few-shot performance in the presence of unlabeled data.
Reject
The paper uses a transformer model to generate CNN models and applies it to few-shot learning. Although the reviewers appreciate the ideas and the good benchmarking results presented in the paper, they find the paper somewhat incremental compared to previous work in the hypernetwork literature, despite the authors' thorough rebuttal with additional results. This shows that the authors could have done a better job of presenting their work. Rejection is therefore recommended, with a strong encouragement to rework the paper to preempt similar reservations from future reviewers.
train
[ "MVEXimJd1dK", "_PIgXAJOy4R", "IRpRVxQrtEu", "IQvwMmtE-Fg", "eIUnGBruCL9", "cLWxtl-BR_e", "4HY3YTCmgq", "q25lFazV-3", "rkzheQhPwxg", "Qze2W2W-Kvw", "ozyCYG4Annx" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to thank the reviewer for the time and effort it took to read the rebuttal and all the reviews. While we agree with an overall description of our method, we would like to point out that the advantage of using HyperTransformer for small models is very significant: in many cases the accuracy increases...
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 8, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "_PIgXAJOy4R", "cLWxtl-BR_e", "ozyCYG4Annx", "eIUnGBruCL9", "q25lFazV-3", "ozyCYG4Annx", "Qze2W2W-Kvw", "rkzheQhPwxg", "iclr_2022_E9z2A1-O7e", "iclr_2022_E9z2A1-O7e", "iclr_2022_E9z2A1-O7e" ]
iclr_2022_vruwp11pWnO
Improving and Assessing Anomaly Detectors for Large-Scale Settings
Detecting out-of-distribution examples is important for safety-critical machine learning applications such as detecting novel biological phenomena and self-driving cars. However, existing research mainly focuses on simple small-scale settings. To set the stage for more realistic out-of-distribution detection, we depart from small-scale settings and explore large-scale multiclass and multi-label settings with high-resolution images and thousands of classes. To make future work in real-world settings possible, we create new benchmarks for three large-scale settings. First, to test ImageNet multiclass anomaly detectors, we introduce a new dataset of anomalous species. Second, we leverage ImageNet-22K to evaluate PASCAL VOC and COCO multilabel anomaly detectors. Third, we introduce a new segmentation benchmark with road anomalies for anomaly segmentation. We conduct extensive experiments in these more realistic settings for out-of-distribution detection and find that a surprisingly simple detector based on the maximum logit outperforms prior methods in all the large-scale multi-class, multi-label, and segmentation tasks, establishing a simple new baseline for future work.
Reject
The authors focus on large-scale out-of-distribution (OOD) detection, for which they propose three benchmarks with multiclass and multi-label high-resolution images. In these settings, they find that a simple extension, using maximum logits (MaxLogit), of a common baseline, maximum softmax probability (MSP), is surprisingly competitive with prior methods. Five knowledgeable reviewers found the idea of having these novel benchmarks potentially interesting, but highlighted some issues that need to be addressed before the paper can be published. First, reviewers highlighted how the presentation could be better organized (more structure around, and stronger overall motivation for, the three different contributions) so as to present the three ideas in a more cohesive way, and more formal in introducing methods (e.g., MaxLogit and MSP) so as to clearly highlight the technical contributions and the differences from other models. Second, more baselines need to be introduced and the experiments extended accordingly. For example, a comparison with another detector, LogSumExpLogit, a relaxation of MaxLogit already used for (small-scale) OOD detection in the context of generative models. The authors provided some preliminary experimental results in the rebuttal but promised more (for a camera-ready) that could not be evaluated by the reviewers. Third, the scope of the proposed benchmarks raised concerns from some reviewers. If not in the motivation behind the task of treating whole objects as anomalies, then additional care should be put into the provided annotations: as one reviewer highlighted, a certain percentage of images are mis-annotated. While this percentage is somewhat low (3.9%) and should not change the empirical conclusions drawn in the paper, it suggests that the core contributions of the paper may have been rushed.
train
[ "-qpIKnCYEp", "3_YAyiLcKer", "0YXSOyx_rHs", "lwsTWNMG-4j", "mZ8eiFv149Q", "UJyBDCeyK5c", "nYrHIx3M-RL", "Nk4nlsdI9Y", "mAgmEAht7B4", "HlTE6a6hzNi", "SXZbewTBDW6", "BSPgJ9D1y-L", "Yceey13mGzT", "kETdsW2pUyJ", "89X3VggTpc", "7DMokPIMIRi", "g5kcDjPY7Xt", "NGbpo4sQtWK", "MQpqNhxXdgH"...
[ "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_re...
[ " Thank you for considering our rebuttal. Here are a few remaining points that we hope are informative.\n\n**Near or Far OOD**\n\nWe introduce a total of 4 datasets to help foster new work in anomaly detection for large-scale settings. As large-scale settings are already challenging, we do not specifically design o...
[ -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 3, 5 ]
[ -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5, 3 ]
[ "nYrHIx3M-RL", "0YXSOyx_rHs", "mZ8eiFv149Q", "HlTE6a6hzNi", "mAgmEAht7B4", "Nk4nlsdI9Y", "iclr_2022_vruwp11pWnO", "Yceey13mGzT", "Yceey13mGzT", "Yceey13mGzT", "nYrHIx3M-RL", "NGbpo4sQtWK", "g5kcDjPY7Xt", "MQpqNhxXdgH", "7DMokPIMIRi", "iclr_2022_vruwp11pWnO", "iclr_2022_vruwp11pWnO", ...
iclr_2022_GdPZJxjk46V
Dataset transformations trade-offs to adapt machine learning methods across domains
Machine learning-based methods have proved quite successful in different domains. However, applying the same techniques across disciplines is not a trivial task, with both benefits and drawbacks. In the literature, the most common approach is to convert a dataset into the same format as the original domain in order to employ the same architecture that was successful there. Although this approach is fast and convenient, we argue it is suboptimal due to the lack of tailoring to the specific problem at hand. To prove our point, we examine dataset transformations used in the literature to adapt machine learning-based methods across domains and show that these dataset transformations are not always beneficial in terms of performance. In addition, we show that these data transformations open the door to unforeseen vulnerabilities in the new application domain. To quantify how different the original dataset is from the transformed one, we compute the dataset distances via Optimal Transport. Also, we present simulations with the original and transformed data to show that the data conversion is not always needed and exposes the new domain to unforeseen threats.
Reject
Unfortunately, reviewers unanimously agreed that this paper does not meet the ICLR acceptance standards, citing generally unpolished experiments. I would recommend substantially expanding the experimental results in the future.
train
[ "y-kuAlzILgo", "oHHftYxrBqb", "7Ee_5N0l7NQ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper attempts to study the effect of different transformations of input datasets, and the performance of the resulting models thereof. In particular, authors consider time-series data which is then represented in (1) time-series format, (2) vectorized form, (3) tensorized form. For each of the representation...
[ 1, 3, 3 ]
[ 4, 4, 5 ]
[ "iclr_2022_GdPZJxjk46V", "iclr_2022_GdPZJxjk46V", "iclr_2022_GdPZJxjk46V" ]
iclr_2022_gmxgG6_BL_N
Variational Component Decoder for Source Extraction from Nonlinear Mixture
In many practical scenarios of signal extraction from a nonlinear mixture, only one (signal) source is intended to be extracted. However, modern methods involving Blind Source Separation are inefficient for this task since they are designed to recover all sources in the mixture. In this paper, we propose the supervised Variational Component Decoder (sVCD) as a method dedicated to extracting a single source from a nonlinear mixture. sVCD leverages the sequence-to-sequence (Seq2Seq) translation ability of a specially designed neural network to approximate a nonlinear inverse of the mixture process, assisted by priors on the source of interest. In order to maintain robustness in the face of real-life samples, sVCD combines Seq2Seq with variational inference to form a deep generative model, and it is trained by optimizing a variant of the variational bound on the data likelihood concerning only the source of interest. We demonstrate that sVCD has superior performance on nonlinear source extraction over a state-of-the-art method on diverse datasets, including artificially generated sequences, radio frequency (RF) sensing data, and electroencephalogram (EEG) recordings.
Reject
This work has generated a lot of discussion between authors and reviewers and among reviewers. Overall, it is reported that the results on EEG are neither conclusive nor directly relevant for this field. Moreover, the theoretical contribution is not reported as a strong point of the work, and the comparison with alternative baseline methods is judged too limited. For all these reasons, the paper cannot be endorsed for publication at ICLR this year.
val
[ "WmZIBkZ0BT", "i1NKWTn4Pda", "PiqAIzr6FMY", "nlslf0BHcMR", "sMmgxIsdWU1", "GQvh72MmTaD", "sQMT5RFN8nk", "Xqt5c75nmyA", "ruWZqnCKC60", "dNDpU4NLVAY", "MLkXBrJ72DQ", "cRZMsx6hV18", "h2nEk5IKg1x", "PZrSdSpmO3E", "QN12nuQAkZ5", "BvmpsdsLNt1", "lzs2CtoaTU5", "no9JESUJDWX", "ML7wedKCQW...
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", ...
[ "The paper proposes an approach for supervised non-linear regression for multivariate time-series using a sequence-to-sequence approach with self-attention and generative prior on the latent codes. The paper poses this as a source extraction from a nonlinear mixture.\n\nThe paper shows how this can be applied to a ...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8, 3 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 4 ]
[ "iclr_2022_gmxgG6_BL_N", "sQMT5RFN8nk", "GQvh72MmTaD", "sMmgxIsdWU1", "Xqt5c75nmyA", "ruWZqnCKC60", "dNDpU4NLVAY", "cRZMsx6hV18", "cRZMsx6hV18", "MLkXBrJ72DQ", "PZrSdSpmO3E", "QN12nuQAkZ5", "WmZIBkZ0BT", "JCa59TFzg0q", "WmZIBkZ0BT", "ML7wedKCQWY", "no9JESUJDWX", "iclr_2022_gmxgG6_B...
iclr_2022_D1TYemnoRN
Short optimization paths lead to good generalization
Optimization and generalization are two essential aspects of machine learning. In this paper, we propose a framework to connect optimization with generalization by analyzing the generalization error based on the length of optimization trajectory under the gradient flow algorithm after convergence. Through our approach, we show that, with a proper initialization, gradient flow converges following a short path with an explicit length estimate. Such an estimate induces a length-based generalization bound, showing that short optimization paths after convergence indicate good generalization. Our framework can be applied to broad settings. For example, we use it to obtain generalization estimates on three distinct machine learning models: underdetermined $\ell_p$ linear regression, kernel regression, and overparameterized two-layer ReLU neural networks.
Reject
The paper presents several interesting generalization results for Uniform-LGI loss functions (a generalization of PL functions). Some of these bounds seem useful, but the overall connection with the optimization path length remains unclear. This concern and other points of criticism remain after the rebuttal phase. Other minor concerns seem fixable, but on a longer timeline than the camera-ready one allows. The paper should be revised for a future venue.
train
[ "2eC-f-pbHOa", "3f8gDS-tNEa", "8H3FC5P3pWh", "c9q05DSLUEw", "_aA7iIZrAfC", "0wSz5U6p1N7", "UAVhsh997n2", "lzRqOjNBhIF", "dbFr8xgelCx", "_2ui05hdf9m", "WdZgKeJBSya", "54quulh2VwS", "E8DxyFQ3y9", "wRVDi8zyBxA" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a framework to analyze both optimization and generalization properties under the Uniform-LGI condition (Def 1). From my understanding, the main results consist of two parts:\n\n* Optimization: define the Uniform-LGI as an extension of the PL condition, prove the corresponding convergence result...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "iclr_2022_D1TYemnoRN", "UAVhsh997n2", "_2ui05hdf9m", "_aA7iIZrAfC", "0wSz5U6p1N7", "E8DxyFQ3y9", "wRVDi8zyBxA", "dbFr8xgelCx", "2eC-f-pbHOa", "54quulh2VwS", "iclr_2022_D1TYemnoRN", "iclr_2022_D1TYemnoRN", "iclr_2022_D1TYemnoRN", "iclr_2022_D1TYemnoRN" ]
iclr_2022_JZrETJlgyq
Exploring Non-Contrastive Representation Learning for Deep Clustering
Existing deep clustering methods rely on contrastive learning for representation learning, which requires negative examples to form an embedding space in which all instances are well separated. However, the negative examples inevitably give rise to the class collision issue, compromising the representation learning for clustering. In this paper, we explore non-contrastive representation learning for deep clustering, termed NCC, which is based on BYOL, a representative method without negative examples. First, we propose a positive sampling strategy to align one augmented view of an instance with the neighbors of another view, so that we can avoid the class collision issue caused by negative examples and hence improve within-cluster compactness. Second, we propose a novel prototypical contrastive loss, ProtoCL, which can encourage prototypical alignment between two augmented views as well as prototypical uniformity, hence maximizing the inter-cluster distance. Moreover, we formulate NCC in an Expectation-Maximization (EM) framework, in which the E-step utilizes spherical k-means to estimate the pseudo-labels of instances and the distribution of prototypes from the target network, and the M-step leverages the proposed losses to optimize the online network. As a result, NCC is able to form an embedding space where all clusters are well separated and within-cluster examples are compact. Experimental results on several clustering benchmark datasets as well as ImageNet-1K demonstrate that the proposed NCC outperforms state-of-the-art methods by a significant margin.
Reject
This paper received initial scores with large variance. During the intensive discussion (the number of forum replies reached 60), opinions converged to a consensus. I have read all the materials of this paper, including the manuscript, appendix, comments, and responses. Based on the information collected from all reviewers and my personal judgement, I can make the recommendation on this paper: *rejection*. Here are the comments that I summarized, which include my opinion and evidence. **Research Problem and Motivation** (1) It seems that the authors aimed to address the question "are negative examples necessary for deep clustering?" This research problem has been proposed and addressed in BYOL and SimSiam (if I remember correctly, one reviewer pointed this out). What the authors actually did is add two more components, a positive sampling strategy and a prototypical contrastive loss, on top of BYOL. In my eyes, it is like putting two patches on BYOL, where one of them does not work (I will explain later). (2) Moreover, the authors failed to clearly illustrate the drawback of BYOL. In the last sentence of the third paragraph of the Introduction, the authors mentioned that "BYOL only optimize the alignment term, leading to unstable training and suffering from the representation collapse." This sentence is too general and lacks strong motivation. Therefore, the research problem addressed here is an incremental problem over BYOL, rather than one bringing new insights to the contrastive learning community. **Philosophy** Without a clear motivation, it is difficult to grasp the philosophy of this paper, i.e., how the proposed components tackle BYOL's drawbacks. Moreover, the relationship between the two components is also unclear. **Novelty** I believe Reviewer Na7g has a thorough analysis of the novelty of this paper, so I will not go into details here. Difference does not imply novelty. **Technique** The positive sampling strategy does not work.
If we take a closer look at Table 4, at the rows of BYOL and NCC with PS, there is no significant performance gain. The p-values of a t-test on the ACC results on CIFAR-10 and CIFAR-20 are 0.92 and 0.32, respectively. Actually, the prototypical contrastive loss is the key element boosting performance over BYOL. **Misleading Title** Based on the above point, the title is misleading. Although no negative sample pairs are used in training, contrastiveness at the cluster level should also fall within the scope of contrastive learning. **Experiments** (1) In the Introduction, the authors mentioned that SimSiam is in the same non-contrastive category as this paper. However, it is not included in the comparison. (2) The competing methods in Tables 2 and 3 are not consistent. The authors did not even report the performance of BYOL on ImageNet-1K. (3) The positive sampling strategy does not work; see the Technique point above. (4) The authors only reported the running time on CIFAR-10 and CIFAR-20. Therefore, the experimental results are not very convincing or solid to me. **Presentation** I believe the presentation also needs much effort to smooth the logic. For example: "Even though Grill et al. (2020); Richemond et al. (2020) have proposed to use some tricks such as SyncBN (Ioffe & Szegedy, 2015) and weight normalization (Qiao et al., 2019) to alleviate this issue, the additional computation cost is significant." Actually, Grill et al. (2020) is BYOL, on which the authors build their components; the computational cost of the proposed method should be heavier than BYOL's. Based on the above points, this paper suffers from several severe issues, which make it unable to stand on its own.
train
[ "z097g_s_g6", "fMl9RlQMN04", "OZTSzRM7SE0", "O3h2iFDst3I", "zPAExacmq3D", "9OKJ-FqSBqZ", "PUUTiwWpp3H", "nh5DGDOQx3a", "SmADm8IU0cG", "P4OW1XTEu8p", "Nmn7UiISpH", "ma01lFy8ze", "XIf_nOOBTLM", "HLYoKc-MaWd", "uDClw-f3wv", "qatEmw-OZC9", "NcR0gT5wZ2q", "fzx4a6ddAl8", "W95-Npp8M4j",...
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", ...
[ "For deep clustering, this paper explores the non-contrastive representation learning based on BYOL to handle the issue of the class collision caused by inaccurate negative samples. ProtoCL is proposed to encourage prototypical alignment between two augmented views and prototypical uniformity, hence maximizing the ...
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2022_JZrETJlgyq", "iclr_2022_JZrETJlgyq", "iclr_2022_JZrETJlgyq", "I8hXFDCq4v", "P4OW1XTEu8p", "PUUTiwWpp3H", "nh5DGDOQx3a", "XIf_nOOBTLM", "ma01lFy8ze", "qsh4TYo-bAo", "iclr_2022_JZrETJlgyq", "sZYIJLyT0Vu", "I8hXFDCq4v", "qatEmw-OZC9", "iclr_2022_JZrETJlgyq", "uDClw-f3wv", "fz...
iclr_2022_R-piejobttn
Mixture Representation Learning with Coupled Autoencoders
Latent representations help unravel complex phenomena. While continuous latent variables can be efficiently inferred, fitting mixed discrete-continuous models remains challenging despite recent progress, especially when the discrete factor dimensionality is large. A pressing application for such mixture representations is the analysis of single-cell omic datasets to understand neuronal diversity and its molecular underpinnings. Here, we propose an unsupervised variational framework using multiple interacting networks called cpl-mixVAE that significantly outperforms state-of-the-art in high-dimensional discrete settings. cpl-mixVAE introduces a consensus constraint on discrete factors of variability across the networks, which regularizes the mixture representations at the time of training. We justify the use of this framework with theoretical results and validate it with experiments on benchmark datasets. We demonstrate that our approach discovers interpretable discrete and continuous variables describing neuronal identity in two single-cell RNA sequencing datasets, each profiling over a hundred cortical neuron types.
Reject
This paper proposes cpl-mixVAE, a method for fitting discrete-continuous latent variable models based on mixture representations and a novel consensus clustering constraint. After extensive discussion, no one was willing to argue in favor of acceptance, and a majority of the reviewers felt another round of revision is needed. Ultimately, I concur that while the ideas are novel and potentially interesting, more effort is needed to convincingly demonstrate the efficacy of the method. Valid concerns were also raised regarding the claimed "unsupervised" nature of the proposed method, a claim which at the very least requires some additional context. At this point, these outstanding issues require an additional round of revision.
train
[ "jbzJOlu4TUz", "oiZOfkgDQG5", "5jjuPFD-FKL", "ejQL_r65a1e", "UHjP08P8BeS", "HAcaQt9Ujuv", "OF1ufUzDspr", "dqxk8eByuUp", "CNQlK254kPj", "DhP1N0ZEoP", "1ZSqk4EXnic", "a0qBVfLW2c", "3ToH7w65GH3", "FyXO05Gdp1f", "KKj8JZKgQh", "HTJXLGaGE_n", "H4K_eASEYGz", "FaJ0jKDQM9h", "YVFGthW4bOV"...
[ "author", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", ...
[ " We just noticed the reviewer’s post-rebuttal comment was posted as a modification in the main review section of the initial comment. Here we would like to address the reviewer’s concerns.\n\n-**“unclear whether the model is applicable/generalizable for the real-world datasets”:** Single-cell datasets are real-wor...
[ -1, -1, -1, -1, 5, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 5, 5 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "UHjP08P8BeS", "FyXO05Gdp1f", "H4K_eASEYGz", "OF1ufUzDspr", "iclr_2022_R-piejobttn", "iclr_2022_R-piejobttn", "dqxk8eByuUp", "1ZSqk4EXnic", "iclr_2022_R-piejobttn", "a0qBVfLW2c", "a0qBVfLW2c", "YVFGthW4bOV", "kV7IQUwJIML", "wd-oRJio0sG", "UHjP08P8BeS", "eut8MyMYgU", "eut8MyMYgU", "...
iclr_2022_Xa8sKVPnDJq
Composing Features: Compositional Model Augmentation for Steerability of Music Transformers
Music is a combinatorial art. Given a starting sequence, many continuations are possible, yet often only one is written down. With generative models, we can explore many. However, finding a continuation with specific combinations of features (such as rising pitches, with block chords played in syncopated rhythm) can take many trials. To tackle the combinatorial nature of composing features, we propose a compositional approach to steering music transformers, building on lightweight fine-tuning methods such as prefix tuning and bias tuning. We introduce a novel contrastive loss function that enables us to steer compositional models over logical features using supervised learning. We examine the difficulty in steering based on whether features musically follow a prime or not, using existing music as a proxy. We show that with a relatively small number of extra parameters, our method allows bias tuning to perform successful fine-tuning in both the single-feature and compositional setting.
Reject
This paper proposed a compositional approach to (conditionally) steer pre-trained music transformers to the direction intended by the user. Overall the scores are mostly negative. The reviewers pointed out some interesting aspects of the paper (e.g., using hard binary constraints as opposed to the soft ones, the contrastive approach). However, one common issue shared by all the reviewers is the clarity of the presentation, which led to many reviewers being confused about various aspects of the paper especially the empirical evaluation. The authors did provide a detailed response to address some of the concerns, but to fully address all the points I anticipate it would require quite substantial change to the paper. A couple reviewers also raised the concerns regarding the limited contribution of the paper. Finally, there appears to be some disagreement between the authors and reviewers regarding how to interpret the listening test results. I hope the authors can take the comments into consideration to further improve this paper for the next submission.
train
[ "IXEXD1DMjJp", "VyJh-OqDkAp", "GjxrVIYEwx", "TgSBFNsatI", "1Vw7e8NfLmV", "Lvh5yTnBLRD", "U0t1ypf0Krv", "UXebL9ecoI", "r3TX7CWmTS", "GEShAzi6OGq", "MBmTl5iR3Bo" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " # Overall response\n\nWe would like to thank all the reviewers for the careful reading of the paper and their thoughtful comments and suggestions. While we are responding to each reviewer separately, we wanted to highlight our response to a few common concerns raised by multiple reviewers. When making changes to ...
[ -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 3 ]
[ "iclr_2022_Xa8sKVPnDJq", "MBmTl5iR3Bo", "GEShAzi6OGq", "r3TX7CWmTS", "r3TX7CWmTS", "r3TX7CWmTS", "UXebL9ecoI", "iclr_2022_Xa8sKVPnDJq", "iclr_2022_Xa8sKVPnDJq", "iclr_2022_Xa8sKVPnDJq", "iclr_2022_Xa8sKVPnDJq" ]
iclr_2022_FS0XKbpkdOu
Sphere2Vec: Self-Supervised Location Representation Learning on Spherical Surfaces
Location encoding is valuable for a multitude of tasks where both the absolute positions and local contexts (image, text, and other types of metadata) of spatial objects are needed for accurate predictions. However, most existing approaches do not leverage unlabeled data, which is crucial for use cases with limited labels. Furthermore, the availability of large-scale real-world GPS coordinate data demands representation and prediction at global scales. However, existing location encoding models assume that the input coordinates are in Euclidean space, which can lead to modeling errors due to distortions introduced when mapping coordinates from other manifolds (e.g., spherical surfaces) to Euclidean space. We introduce Sphere2Vec, a location encoder which can directly encode spherical coordinates while preserving spherical distances. Sphere2Vec is trained with a self-supervised learning framework which pre-trains deep location representations from unlabeled geo-tagged images with contrastive losses, and then fine-tunes to perform supervised geographic object classification tasks. Sphere2Vec achieves state-of-the-art results on various image classification tasks ranging from species and Point of Interest (POI) facade to remote sensing. The self-supervised pre-training significantly improves the performance of Sphere2Vec, especially when the labeled data is limited.
Reject
In spite of some slightly mixed scores (with one borderline positive review), scores are ultimately lukewarm and tend toward negative (and furthermore, reviews are broadly in agreement as to the issues they raise). Main issues center around low significance of the results, and issues with the presentation that need to be addressed.
train
[ "-WWgioKCtCe", "kf5rmuXS87", "-xRL7IwbMl", "Tx3mKDsEE1J", "Sa1gxJDTZAi", "KAZjTAyL-5D", "hYI1rtPWRQb", "M1rrTWceb0k", "gyjUFFDCg7", "a67YN4VDwy", "WuQ8ibJP5vT", "mOR7cxdl9N6", "DlMYHzPTLIn", "pWmBfNQFyc", "6M1XXcKpAD_", "mvHRm7ZAmdY", "9tGUcZJqIVa", "DGAN2QxSStZ" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer 6cCW, please let us know if our response addressed your concerns or if you have any further questions.\n\n", " Dear Reviewer 6Te4, please let us know if our response addressed your concerns or if you have any further questions.\n\n", " Dear Reviewer toM9, please let us know if our response addre...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 4, 4 ]
[ "Sa1gxJDTZAi", "6M1XXcKpAD_", "DGAN2QxSStZ", "Sa1gxJDTZAi", "WuQ8ibJP5vT", "pWmBfNQFyc", "gyjUFFDCg7", "gyjUFFDCg7", "a67YN4VDwy", "iclr_2022_FS0XKbpkdOu", "mvHRm7ZAmdY", "6M1XXcKpAD_", "DGAN2QxSStZ", "9tGUcZJqIVa", "iclr_2022_FS0XKbpkdOu", "iclr_2022_FS0XKbpkdOu", "iclr_2022_FS0XKbp...
iclr_2022_HHpWuWayMo
Evaluating Robustness of Cooperative MARL
In recent years, a proliferation of methods were developed for multi-agent reinforcement learning (MARL). In this paper, we focus on evaluating the robustness of MARL agents in continuous control tasks. In particular, we propose the first model-based approach to perform adversarial attacks for cooperative MARL. We design effective attacks to degrade the MARL agent's performance by adversarially perturbing the states of agent(s) and solving an optimization problem. In addition, we also developed several strategies to select the most vulnerable agents that help to further decrease the team reward of MARL. Extensive numerical experiments on multi-agent Mujoco tasks verify the effectiveness of our proposed approach.
Reject
Although the problem studied in the paper is interesting, all the reviewers believe that the current draft has limited technical contributions. Moreover, there are serious issues with the writing and presentation of the work. Also the experiments are rather limited and their results are not significant. I strongly recommend the authors to take the reviewers' comments into account and improve different aspects of their work for future conferences.
train
[ "IAs8aRs8I_o", "-3a_Kb3MvcE", "SURj1R57Tz7", "adxnROTLio", "_eKLX88Wwjj", "vh9JNl71bVx", "9iqp3jQ654W", "DWNO4lBB3_Q", "ypF4gZ1Qq-x", "WKsd7N2vS1c", "QvY4ulKBXr" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper presented a novel method for cooperative MARL to evaluate the robustness under the adversarial states. Specifically, the authors leveraged a model-based approach to perform adversarial attacks on states for MARL with continuous action and state spaces. To represent well the environment, they used a deep...
[ 5, -1, -1, -1, -1, -1, -1, -1, 3, 3, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "iclr_2022_HHpWuWayMo", "IAs8aRs8I_o", "IAs8aRs8I_o", "QvY4ulKBXr", "QvY4ulKBXr", "ypF4gZ1Qq-x", "WKsd7N2vS1c", "WKsd7N2vS1c", "iclr_2022_HHpWuWayMo", "iclr_2022_HHpWuWayMo", "iclr_2022_HHpWuWayMo" ]
iclr_2022_dtYnHcmQKeM
Physics-Informed Neural Operator for Learning Partial Differential Equations
Machine learning methods have recently shown promise in solving partial differential equations (PDEs). They can be classified into two broad categories: solution function approximation, and operator learning. The Physics-Informed Neural Network (PINN) is an example of the former while the Fourier neural operator (FNO) is an example of the latter. Both these approaches have shortcomings. The optimization in PINN is challenging and prone to failure, especially on multi-scale dynamic systems. FNO does not suffer from this optimization issue since it carries out supervised learning on a given dataset, but obtaining such data may be too expensive or infeasible. In this work, we propose the physics-informed neural operator (PINO), where we combine the operator-learning and function-optimization frameworks, and this improves convergence rates and accuracy over both PINN and FNO models. In the operator-learning phase, PINO learns the solution operator over multiple instances of the parametric PDE family. In the test-time optimization phase, PINO optimizes the pre-trained operator ansatz for the querying instance of the PDE. Experiments show PINO outperforms previous ML methods on many popular PDE families while retaining the extraordinary speed-up of FNO compared to solvers. In particular, PINO accurately solves long temporal transient flows and chaotic Kolmogorov flows, while PINN and other methods fail to converge to a reasonable accuracy.
Reject
The paper proposes the physics-informed neural operator. It combines the operator-learning and function-optimization frameworks, which improves convergence rates and accuracy over traditional methods. While the paper was well written, several reviewers raised their concerns on the novelty of the paper, especially regarding the difference from PINN-DeepONet (Wang et. al.). Following this, there was a long discussion between the authors and the reviewers, as well as among the reviewers. As a consequence, we think the authors somewhat overclaimed their contributions on combining PINN and operator learning, and there are some important references missing and baselines not compared empirically. With this, the conclusion is that we cannot accept this paper in its current form, and we hope that the authors can take all the review feedback into consideration and better position the novelty and impact of their work in future submissions.
train
[ "HflM8A2c1ya", "ucPyEeZfuh", "bDLdQE6huIz", "EnlcWJghjmV", "p3yIz1iUN1X", "VYwu-1R98uC", "qxvfg-djsQY", "sb_P9inrDbq", "5O4b5i0JfJA", "H8yUzxiUz3b", "ZcPL9OZAyV8", "cafHRZSbFMN", "KlIJc8shQyL", "zVYHqbftU_", "OcQL2HetJob", "xehsaFgz0S_", "Ugm-ZbHFyBN", "AWHhO2yFYCM", "m9RdAtHDlke...
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", ...
[ " Dear reviewer NBhE, we have seen you update your comment. Could you clarify which two papers should be added? If you mean PINN-DeepONet (Wang et. al.) and Physics-constrained DL modeling (Zhu et. al.), please see our response in the first paragraph. If you mean LAAF-PINN and SA-PINN, please see the second paragra...
[ -1, -1, 5, 6, -1, -1, 5, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, 3, 3, -1, -1, 2, -1, -1, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "bDLdQE6huIz", "p3yIz1iUN1X", "iclr_2022_dtYnHcmQKeM", "iclr_2022_dtYnHcmQKeM", "Ugm-ZbHFyBN", "sb_P9inrDbq", "iclr_2022_dtYnHcmQKeM", "xehsaFgz0S_", "m9RdAtHDlke", "iclr_2022_dtYnHcmQKeM", "EnlcWJghjmV", "OcQL2HetJob", "H8yUzxiUz3b", "7su9WhGViU", "KlIJc8shQyL", "qxvfg-djsQY", "ZcPL...
iclr_2022_x4tkHYGpTdq
DSEE: Dually Sparsity-embedded Efficient Tuning of Pre-trained Language Models
Gigantic pre-trained models have become central to natural language processing (NLP), serving as the starting point for fine-tuning towards a range of downstream tasks. However, two pain points persist for this paradigm: (a) as the pre-trained models grow bigger (e.g., $175$B parameters for GPT-3), even the fine-tuning process can be time-consuming and computationally expensive; (b) the fine-tuned model has the same size as its starting point by default, which is neither sensible due to its more specialized functionality, nor practical since many fine-tuned models will be deployed in resource-constrained environments. To address these pain points, we propose a framework for resource- and parameter-efficient fine-tuning by leveraging the sparsity prior in both weight updates and the final model weights. Our proposed framework, dubbed $\textbf{D}$ually $\textbf{S}$parsity-$\textbf{E}$mbedded $\textbf{E}$fficient Tuning (DSEE), aims to achieve two key objectives: (i) $\textit{parameter efficient fine-tuning}$ - by enforcing sparsity-aware weight updates on top of the pre-trained weights; and (ii) $\textit{resource-efficient inference}$ - by encouraging a sparse weight structure towards the final fine-tuned model. We leverage sparsity in these two directions by exploiting both unstructured and structural sparse patterns in pre-trained language models via magnitude-based pruning and $\ell_1$ sparse regularization. Extensive experiments and in-depth investigations, with diverse network backbones (i.e., BERT, GPT-2, and DeBERTa) on dozens of datasets, consistently demonstrate highly impressive parameter-/training-/inference-efficiency, while maintaining competitive downstream transfer performance. For instance, our DSEE-BERT obtains about $35\%$ inference FLOPs savings with $<0.1\%$ trainable parameters and comparable performance to conventional fine-tuning.
Reject
The paper integrates several dimensionality reduction and sparsity methods for improving the efficiency of large pre-trained models. Overall, the paper is interesting and discusses an important topic. However, it seems that it is not ready to be published at the current stage. I would encourage the authors to take the reviewers' comments into account and further improve the paper. The pros and cons of the paper are summarized in the following: Pros: + Improving the efficiency of large pre-trained models is an essential research issue. + The idea is interesting although the technical novelty is a bit limited. Cons: - The key concern is that the technical and practical benefit of the proposed approach is not clear based on the results demonstrated in the experiments. - The writing of the paper can be further improved in general to make the motivation more clear.
train
[ "BtkHYM6OZ2Z", "Ta2VPVgeJ99", "Qkt3xvCpwUG", "10D2hrM9kzw", "0OMFz3tR_D", "KBRBPHU4Y0", "0JzKe2m-p_9", "5yvYU1PNfY", "1XRgM3IyYFj", "mM9WV5R6oU3", "DstCs_CHddm", "prYhA0E_mzF", "VhieWNZcoI4", "LEP5CqQTRDm", "F16ks68RSjq", "cgg7cEMhbFZ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper tries to solve both efficient fine-tuning and model size compression simultaneously. It adopts the previous framework to model W and \\delta W together and use different existing approaches to learn 2 parts in order. Strength:\n\n1. The paper is presented in an easy to read manner.\n\n2. I believe it's...
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 5, 5 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "iclr_2022_x4tkHYGpTdq", "0JzKe2m-p_9", "DstCs_CHddm", "KBRBPHU4Y0", "1XRgM3IyYFj", "cgg7cEMhbFZ", "F16ks68RSjq", "BtkHYM6OZ2Z", "BtkHYM6OZ2Z", "LEP5CqQTRDm", "VhieWNZcoI4", "VhieWNZcoI4", "iclr_2022_x4tkHYGpTdq", "iclr_2022_x4tkHYGpTdq", "iclr_2022_x4tkHYGpTdq", "iclr_2022_x4tkHYGpTdq...
iclr_2022_Mlwe37htstv
Efficient Wasserstein and Sinkhorn Policy Optimization
Trust-region methods based on Kullback-Leibler divergence are pervasively used to stabilize policy optimization in reinforcement learning. In this paper, we examine two natural extensions of policy optimization with Wasserstein and Sinkhorn trust regions, namely Wasserstein policy optimization (WPO) and Sinkhorn policy optimization (SPO). Instead of restricting the policy to a parametric distribution class, we directly optimize the policy distribution and derive their closed-form policy updates based on the Lagrangian duality. Theoretically, we show that WPO guarantees a monotonic performance improvement, and SPO provably converges to WPO as the entropic regularizer diminishes. Experiments across tabular domains and robotic locomotion tasks further demonstrate the performance improvement of both approaches, more robustness of WPO to sample insufficiency, and faster convergence of SPO, over state-of-the-art policy gradient methods.
Reject
This paper proposes two extensions of the TRPO algorithm in which the trust region is defined using the Wasserstein distance and the Sinkhorn divergence. The proposed methods do not restrict the policy to belong to a parametric distribution class and the authors provide closed-form policy updates and a performance improvement bound for the Wasserstein policy optimization. The authors provide an empirical evaluation of their approaches on tabular domains and some discrete locomotion tasks, comparing the performance with some state-of-the-art policy optimization approaches. After reading the authors' feedback and interacting with the authors, the reviewers did not reach a consensus: one of the reviewers voted for rejection, while the other three reviewers are slightly positive. In particular, the reviewer that voted for rejection raised a number of concerns that have been discussed at length with the authors, who were able to clarify some of the issues, but some of the answers did not satisfy the reviewer. I went through the paper and I found the paper solid from a technical point of view, but I share some of the reviewers' concerns and I think that the authors should better position their contribution with respect to the state of the art. Overall, this paper is borderline and I feel it still needs some work to deserve clear acceptance (which I think it soon will).
train
[ "3OCf1PQo583", "SPBolS6Kq6F", "VKtbW6qb_op", "NypiVRMc8t", "E8ZjdPP22Q0", "HV3oy6stLHb", "L-wvqRnCMCx", "K8lNxinyM8G", "I5eUb0ygPyg", "h9GwyM53O-t", "7RsyR1GOwjL", "h-Yt40stir", "RUztyFjvaJD", "obdpm4DlXHk", "3vUujdH6t1c", "i_DZyCJq65l", "hMnkmKLfD4u", "EpJyH9FfZoZ", "JT2xCrdKpd2...
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_rev...
[ " I have modified my point 1(2). My other comments remain unchanged.", " To avoid further confusion, let me highlight my major concerns about the claimed contribution.\n\n0. *(From abstract) Instead of restricting the policy to a parametric distribution class, we directly optimize the policy distribution...(From ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 4 ]
[ "VKtbW6qb_op", "NXZhzF17sws", "SPBolS6Kq6F", "iclr_2022_Mlwe37htstv", "HV3oy6stLHb", "L-wvqRnCMCx", "K8lNxinyM8G", "SPBolS6Kq6F", "d-ceaCIxFMt", "h-Yt40stir", "obdpm4DlXHk", "JT2xCrdKpd2", "3vUujdH6t1c", "i_DZyCJq65l", "hMnkmKLfD4u", "hMnkmKLfD4u", "EpJyH9FfZoZ", "iWl7KCFZ8G8", "...
iclr_2022_SuKTLF9stD
Data-Efficient Augmentation for Training Neural Networks
Data augmentation is essential to achieve state-of-the-art performance in many deep learning applications. However, modern data augmentation techniques become computationally prohibitive for large datasets. To address this, we propose a rigorous technique to select subsets of data points that when augmented, closely capture the training dynamics of full data augmentation. We first show that data augmentation, modeled as additive perturbations, speeds up learning by enlarging the smaller singular values of the network Jacobian. Then, we propose a framework to iteratively extract small subsets of training data that when augmented, closely capture the alignment of the fully augmented Jacobian with the label/residual vector. We prove that stochastic gradient descent applied to augmented subsets found by our approach has similar training dynamics to that of fully augmented data. Our experiments demonstrate that our method outperforms the state-of-the-art max-loss strategy by 7.7% on CIFAR10 while achieving 6.3x speedup, and by 4.7% on SVHN while achieving 2.2x speedup, using 10% and 30% subsets, respectively.
Reject
The work presents a theoretical analysis of data augmentation, presenting evidence that data augmentation enlarges the smaller singular values of the network Jacobian. Based on this theory the authors present a method for selecting a subset of training data to use with augmentation that decently approximates the performance of training with augmentation on the full dataset. Reviewers overall agreed that the theoretical analysis was interesting, and did not find any flaws (though it is worth noting that the theory is restricted to additive perturbations). However, multiple reviewers found the presented experiments unconvincing, and questioned the stated motivation. The AC agrees with reviewers that most simple augmentations are not prohibitive in training speed. Certainly training on less data with a fixed epoch budget would require less compute time, but this has nothing to do with augmentation and instead is a result of fewer steps taken in training. In the rebuttal, the authors argued that training on ImageNet is prohibitive with a single GPU (taking 2 weeks to do full training). However, given the authors claim their method speeds up training by a factor of 6.3x, then reducing ImageNet training from 2 weeks to 2 days would be a more convincing application of their method and would strengthen the work.
train
[ "niOUk0PS70f", "ldQ1vY5eV4", "MDjDTvHZQHn", "mLYEA9ypjT", "tXC8H20QB-p", "RKuHIZM7chA", "FUEQirveGui", "f-66XIT_qLs", "x_Wv5dKcx1", "ZiO7_PLhy9", "oqaoyy-Me37", "_Fy0z6wR1BO", "fvNJoaoeqKk" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your responses. I appreciate the explanations but my main two concerns remain.\n\nRegarding the first point, the limitations of the additive perturbations model, I still believe this is in an important limitation of the theoretical analysis that is not discussed in sufficient depth in the paper and ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 3 ]
[ "f-66XIT_qLs", "FUEQirveGui", "x_Wv5dKcx1", "RKuHIZM7chA", "iclr_2022_SuKTLF9stD", "fvNJoaoeqKk", "_Fy0z6wR1BO", "oqaoyy-Me37", "ZiO7_PLhy9", "iclr_2022_SuKTLF9stD", "iclr_2022_SuKTLF9stD", "iclr_2022_SuKTLF9stD", "iclr_2022_SuKTLF9stD" ]
iclr_2022_n6Bc3YElODq
Model-Based Opponent Modeling
When one agent interacts with a multi-agent environment, it is challenging to deal with various opponents unseen before. Modeling the behaviors, goals, or beliefs of opponents could help the agent adjust its policy to adapt to different opponents. In addition, it is also important to consider opponents who are learning simultaneously or capable of reasoning. However, existing work usually tackles only one of the aforementioned types of opponents. In this paper, we propose model-based opponent modeling (MBOM), which employs the environment model to adapt to all kinds of opponents. MBOM simulates the recursive reasoning process in the environment model and imagines a set of improving opponent policies. To effectively and accurately represent the opponent policy, MBOM further mixes the imagined opponent policies according to the similarity with the real behaviors of opponents. Empirically, we show that MBOM achieves more effective adaptation than existing methods in competitive and cooperative environments, respectively with different types of opponents, i.e., fixed policy, naive learner, and reasoning learner.
Reject
This paper tackles the challenging problem of learning against an opponent that may or may not be simultaneously learning as well. The key contribution of this paper is a learning algorithm that accounts for how the opponents may update their policies from past interactions. The proposed algorithm, MBOM, relies on the environment model to model a hierarchy of opponents using different depths of recursive reasoning (from non-learning agents to deep recursive agents). It is agreed that this paper studies an important problem and shows promise. However, the current results aren't convincing enough. In particular, since there is no theoretical analysis, more empirical validation of the method is expected. The current experiments only consider a single opponent, and it is unclear how well the method works given accumulated errors through the recursion. Future submissions would benefit from additional empirical analysis (e.g., ablations) to help understand when and why MBOM works.
train
[ "109ufeISlE3", "wDB5E72JjrS", "Bym4YeTNDXO", "Qyj1aB4aAB", "5OQtOn8ejq_", "driqD2AAar9", "Iae2psYMDT", "H_gaJLquv-a", "7ruDEi65VJM", "rSG_JigEExu", "O6Ochnle3FV", "_uSST710u16" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your answers and clarifications. Thanks for sharing the computation times against the fixed policy opponent. Depending on the problem, the computation times might be too high, and, given the increase with k, they will be significantly higher with multiple opponents. The approach proposed is interestin...
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "H_gaJLquv-a", "5OQtOn8ejq_", "_uSST710u16", "_uSST710u16", "_uSST710u16", "O6Ochnle3FV", "rSG_JigEExu", "7ruDEi65VJM", "iclr_2022_n6Bc3YElODq", "iclr_2022_n6Bc3YElODq", "iclr_2022_n6Bc3YElODq", "iclr_2022_n6Bc3YElODq" ]
iclr_2022_-h5rboREox7
Double Descent in Adversarial Training: An Implicit Label Noise Perspective
Here, we show that robust overfitting should be viewed as the early part of an epoch-wise double descent --- the robust test error will start to decrease again after training the model for a considerable number of epochs. Inspired by our observations, we further advance the analyses of double descent to understand robust overfitting better. In standard training, double descent has been shown to be a result of label flipping noise. However, this reasoning is not applicable in our setting, since adversarial perturbations are believed not to change the label. Going beyond label flipping noise, we propose to measure the mismatch between the assigned and (unknown) true label distributions, denoted as \emph{implicit label noise}. We show that the traditional labeling of adversarial examples inherited from their clean counterparts will lead to implicit label noise. Towards better labeling, we show that the predicted distribution from a classifier, after scaling and interpolation, can provably reduce the implicit label noise under mild assumptions. In light of our analyses, we tailored the training objective accordingly to effectively mitigate the double descent and verified its effectiveness on three benchmark datasets.
Reject
The paper suggests that robust overfitting could be viewed as the early part of a double descent phenomenon for adversarial training. The authors identify implicit label noise, i.e. the label distribution mismatch between the true example and the generated adversarial example, as a possible explanation for this phenomenon in adversarial training. This claim is empirically supported by experiments using static adversarial examples. The authors propose a method using temperature scaling and interpolation to mitigate the effects caused by implicit label noise for robust overfitting. This method is evaluated on CIFAR 10/100 and tiny-ImageNet. Concerns have been raised in the reviews about sufficient justification for the claim that implicit label noise leads to adversarial overfitting. The rebuttal answers this question to some extent. Concerns have also been raised about the writing and whether sufficient details of the experimental setup are present in the main paper. While I acknowledge the difficulty of fitting all details within page limits, I would think that these details are crucial given that the primary support for the claims made is from empirical observations.
train
[ "o88MrgGNCgS", "mpMnltM4GdD", "Lb_-EWdjS6J", "2D3yNeuWmG-", "9QKRi3491IH", "rpzFqlcm62J", "4OBEmSp4tpn", "H9wwUzpMutn", "uKvqka66N4F", "mLJdpeijo_p", "HdYs_vk1i-5", "rNtRAmNIUpW", "wA-AhQJWNv6", "PNEjHgNmXM", "oX7iJH_vF9q", "_L2ADe-5VQn" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I appreciate the clarification about the relationship between implicit label noise and double descent in adversarial training. However, it is still not clear how the results in Figure 3 support the claims. The authors may need to give a detailed explanation for this concern.", " We thank the reviewer for the ad...
[ -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 5, 3, 4 ]
[ "wA-AhQJWNv6", "9QKRi3491IH", "4OBEmSp4tpn", "mpMnltM4GdD", "H9wwUzpMutn", "iclr_2022_-h5rboREox7", "HdYs_vk1i-5", "_L2ADe-5VQn", "H9wwUzpMutn", "rpzFqlcm62J", "mLJdpeijo_p", "oX7iJH_vF9q", "PNEjHgNmXM", "iclr_2022_-h5rboREox7", "iclr_2022_-h5rboREox7", "iclr_2022_-h5rboREox7" ]
iclr_2022_bxiDvWZm6zU
Influence-Based Reinforcement Learning for Intrinsically-Motivated Agents
Discovering successful coordinated behaviors is a central challenge in Multi-Agent Reinforcement Learning (MARL) since it requires exploring a joint action space that grows exponentially with the number of agents. In this paper, we propose a mechanism for achieving sufficient exploration and coordination in a team of agents. Specifically, agents are rewarded for contributing to a more diversified team behavior by employing proper intrinsic motivation functions. To learn meaningful coordination protocols, we structure agents’ interactions by introducing a novel framework, where at each timestep, an agent simulates counterfactual rollouts of its policy and, through a sequence of computations, assesses the gap between other agents’ current behaviors and their targets. Actions that minimize the gap are considered highly influential and are rewarded. We evaluate our approach on a set of challenging tasks with sparse rewards and partial observability that require learning complex cooperative strategies under a proper exploration scheme, such as the StarCraft Multi-Agent Challenge. Our method shows significantly improved performance over different baselines across all tasks.
Reject
At this time, this work is not yet ready for publication. The core idea--influence functions--was poorly explained in the initial submission, and although major changes to the paper were made to rectify this, at least some reviewers remain unconvinced, and it is unclear that the paper has been fully evaluated with this confusion resolved. There are a sufficient number of other concerns with the paper; having rectified these more fully, outside the tight time constraints of the rebuttal period, I hope for an interesting resubmission in the future.
test
[ "1g_KJ5iR1SL", "VmiMwNw2O9_", "8zsnJkYGno2", "PRi1V3umEd", "0D-LoriECVT", "MpiTeFTU7L", "FnKnadIxXp", "X3bt7u5t084", "S0jsh6mVhQT", "RWaAxAN1UHV", "9wga2yCHMig", "paAJ8mo7m6b" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper introduces a new exploration method for cooperative multi-agent reinforcement learning (MARL), which utilizes influence-based regularization and curiosity-driven incentives to encourage coordinated and diverse exploration. This paper formulates the dissimilarity between other agents' behaviors and their...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 3 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2022_bxiDvWZm6zU", "X3bt7u5t084", "0D-LoriECVT", "MpiTeFTU7L", "paAJ8mo7m6b", "9wga2yCHMig", "RWaAxAN1UHV", "1g_KJ5iR1SL", "iclr_2022_bxiDvWZm6zU", "iclr_2022_bxiDvWZm6zU", "iclr_2022_bxiDvWZm6zU", "iclr_2022_bxiDvWZm6zU" ]