paper_id (string, 19–21 chars) | paper_title (string, 8–170 chars) | paper_abstract (string, 8–5.01k chars) | paper_acceptance (string, 18 classes) | meta_review (string, 29–10k chars) | label (string, 3 classes) | review_ids (list) | review_writers (list) | review_contents (list) | review_ratings (list) | review_confidences (list) | review_reply_tos (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|
iclr_2021_xppLmXCbOw1 | Self-supervised Visual Reinforcement Learning with Object-centric Representations | Autonomous agents need large repertoires of skills to act reasonably on new tasks that they have not seen before. However, acquiring these skills using only a stream of high-dimensional, unstructured, and unlabeled observations is a tricky challenge for any autonomous agent. Previous methods have used variational autoencoders to encode a scene into a low-dimensional vector that can be used as a goal for an agent to discover new skills. Nevertheless, in compositional/multi-object environments it is difficult to disentangle all the factors of variation into such a fixed-length representation of the whole scene. We propose to use object-centric representations as a modular and structured observation space, which is learned with a compositional generative world model.
We show that the structure in the representations in combination with goal-conditioned attention policies helps the autonomous agent to discover and learn useful skills. These skills can be further combined to address compositional tasks like the manipulation of several different objects. | spotlight-presentations | This paper proposes a self-supervised learning algorithm to compute object-centric representations for efficient RL in the context of robot manipulation tasks.
The key idea is to learn an object-centric representation (using prior work on SCALOR) and use this to intrinsically generate goals for a SAC policy to achieve. The policy is a goal-conditioned attention policy. The evaluation metric is a set of tasks to manipulate objects for a visual rearrangement task.
${\bf Pros}: $
1. The baselines are reasonable and consist of other unsupervised RL algorithms in recent literature.
2. Object-oriented RL is a growing area of interest and this paper proposes a reasonably novel and validated set of ideas in this domain. I believe it will be of significant interest and potentially make an impact on research in robotics and deep reinforcement learning.
3. The goal-conditioned attention policy can handle realistic scenarios, namely -- multi-object manipulation tasks
4. The attention mechanism also provides a reasonable solution to mitigate combinatorial hardness in multi-object environments
${\bf Cons}$:
1. Some of the reviewers felt that the experimental results from pixel inputs could have been pushed further. However, since the setup and algorithm are relatively novel, there are already many moving parts, and this paper seems like a step in that direction.
2. Experiments with a larger set of objects would have been interesting to investigate and report.
| train | [
"NG9YZfmgdh",
"Y-k9S-NiB5J",
"BdHSLGJnp3-",
"LmH31ZH8-OD",
"FG1tgmJK3z",
"-RZY6NOEA9D",
"PgXrL96qlba",
"pyCK24SAygT",
"k7tx7t28rje",
"wU8z9qQ8Si",
"Yxid9OkGhtX",
"2KrV0RH7yY",
"AJF2Pptj7Ou",
"1KyMSB9iEek"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"Thank you for pointing out those references. We included them into our related work section.",
"Dear reviewers, we uploaded a final revision where we cleaned up some minor details and added a few more references. \nThank you for engaging with us during the review period!",
"Summary:\n\nThe paper combines an ex... | [
-1,
-1,
8,
7,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
9
] | [
-1,
-1,
3,
5,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
"FG1tgmJK3z",
"iclr_2021_xppLmXCbOw1",
"iclr_2021_xppLmXCbOw1",
"iclr_2021_xppLmXCbOw1",
"-RZY6NOEA9D",
"LmH31ZH8-OD",
"iclr_2021_xppLmXCbOw1",
"2KrV0RH7yY",
"iclr_2021_xppLmXCbOw1",
"BdHSLGJnp3-",
"1KyMSB9iEek",
"AJF2Pptj7Ou",
"PgXrL96qlba",
"iclr_2021_xppLmXCbOw1"
] |
iclr_2021__XYzwxPIQu6 | Identifying nonlinear dynamical systems with multiple time scales and long-range dependencies | A main theoretical interest in biology and physics is to identify the nonlinear dynamical system (DS) that generated observed time series. Recurrent Neural Networks (RNN) are, in principle, powerful enough to approximate any underlying DS, but in their vanilla form suffer from the exploding vs. vanishing gradients problem. Previous attempts to alleviate this problem resulted either in more complicated, mathematically less tractable RNN architectures, or strongly limited the dynamical expressiveness of the RNN.
Here we address this issue by suggesting a simple regularization scheme for vanilla RNN with ReLU activation which enables them to solve long-range dependency problems and express slow time scales, while retaining a simple mathematical structure which makes their DS properties partly analytically accessible. We prove two theorems that establish a tight connection between the regularized RNN dynamics and their gradients, illustrate on DS benchmarks that our regularization approach strongly eases the reconstruction of DS which harbor widely differing time scales, and show that our method is also on par with other long-range architectures like LSTMs on several tasks. | spotlight-presentations | This paper describes a clever new class of piecewise-linear RNNs that contains a long-time scale memory subsystem. The reviewers found the paper interesting and valuable, and I agree. The four submitted reviews were unanimous in their vote to accept. The theoretical insights and empirical results are impactful and would be suitable for spotlight presentation. | train | [
"F6thXffhEj",
"uj5izQy6-D",
"gbLVhpSbGuO",
"lyxMHEiwV-",
"zL_CGqs57HE",
"HRLYCrDJFb",
"ZP5YG1v-F1n",
"4g6MywRcC3M",
"z7hYojHb6za",
"4EzKb0DNJ7O",
"ifrVpptmvpI",
"xOpmHg2uF0p",
"nliRA1lTkcE",
"BVQgVSSJN5d",
"Gddl40eQTz"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a regularization scheme for training vanilla Relu RNN to tackle the exploding and vanishing gradients issue. The work eases the analysis of RNN in the dynamical system point of view and connects the RNN dynamics and gradient theoretically. The experiments show the competitive performance compar... | [
7,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8
] | [
4,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2021__XYzwxPIQu6",
"lyxMHEiwV-",
"HRLYCrDJFb",
"ZP5YG1v-F1n",
"iclr_2021__XYzwxPIQu6",
"xOpmHg2uF0p",
"4EzKb0DNJ7O",
"z7hYojHb6za",
"iclr_2021__XYzwxPIQu6",
"F6thXffhEj",
"BVQgVSSJN5d",
"zL_CGqs57HE",
"Gddl40eQTz",
"iclr_2021__XYzwxPIQu6",
"iclr_2021__XYzwxPIQu6"
] |
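The abstract above only says that a "simple regularization scheme" lets ReLU RNNs express slow time scales; it does not spell out the regularizer. As a purely illustrative Python/PyTorch sketch, one way to encode such a prior is to penalize a subset of recurrent weights for deviating from the identity and the corresponding biases for deviating from zero; the function name, the choice of which units to regularize, and the strength `tau` are assumptions, not necessarily the paper's exact formulation.

```python
import torch

def slow_unit_regularizer(W, b, n_reg, tau=1.0):
    """Illustrative penalty pushing the first `n_reg` units of a ReLU RNN
    toward identity-like (slow) dynamics.

    W : (n, n) recurrent weight matrix
    b : (n,) bias vector
    """
    I = torch.eye(W.shape[0], device=W.device)
    w_pen = ((W[:n_reg] - I[:n_reg]) ** 2).sum()  # rows close to the identity map
    b_pen = (b[:n_reg] ** 2).sum()                # biases close to zero
    return tau * (w_pen + b_pen)

# Illustrative use inside a training loop:
# loss = task_loss + slow_unit_regularizer(rnn.weight_hh, rnn.bias_hh, n_reg=10)
```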
iclr_2021_Oos98K9Lv-k | Neural Topic Model via Optimal Transport | Recently, Neural Topic Models (NTMs) inspired by variational autoencoders have obtained increasing research interest due to their promising results on text analysis. However, it is usually hard for existing NTMs to achieve good document representation and coherent/diverse topics at the same time. Moreover, they often degrade their performance severely on short documents. The requirement of reparameterisation could also compromise their training quality and model flexibility. To address these shortcomings, we present a new neural topic model via the theory of optimal transport (OT). Specifically, we propose to learn the topic distribution of a document by directly minimising its OT distance to the document's word distributions. Importantly, the cost matrix of the OT distance models the weights between topics and words, which is constructed by the distances between topics and words in an embedding space. Our proposed model can be trained efficiently with a differentiable loss. Extensive experiments show that our framework significantly outperforms the state-of-the-art NTMs on discovering more coherent and diverse topics and deriving better document representations for both regular and short texts. | spotlight-presentations | The reviewers unanimously agreed that this is an interesting paper that belongs at ICLR. The use of optimal transport in neural topic models is novel and the paper is well-written.
A common theme among the reviewers was that they would like to see more intuition and justification. I suggest you bear this in mind while editing the final version of the paper. I also believe that R3 brings up valid points about evaluating perplexity -- I don't think the lack of perplexity results is a reason to reject the paper, but I believe they can be calculated here (see, e.g., the reference R3 provided) and they would give a clearer view of the model's performance.
| train | [
"bdTINidyy5g",
"hm5yZhRcIRL",
"6KWNmcCvExx",
"R-ToaUWH0Mx"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Summary: The paper proposes a neural topic model which log-likelihood is regularized by Sinkhorn distance, instead of following Variational AutoEncoder (VAE) approach. The proposed model is hence cannot be interpreted as a probabilistic generative model. Still, with respect to metrics such as Topic Coherence and T... | [
7,
6,
7,
8
] | [
3,
4,
3,
4
] | [
"iclr_2021_Oos98K9Lv-k",
"iclr_2021_Oos98K9Lv-k",
"iclr_2021_Oos98K9Lv-k",
"iclr_2021_Oos98K9Lv-k"
] |
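The abstract above describes the training signal concretely: the topic distribution of a document is learned by minimizing an OT distance to the document's word distribution, with a cost matrix built from topic-word distances in an embedding space. Below is a minimal NumPy sketch of such an entropic-regularized (Sinkhorn) OT loss; the regularization strength, iteration count, and all variable names are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def sinkhorn_distance(p, q, C, eps=0.1, n_iter=100):
    """Entropic-regularized OT distance between a topic distribution p and a
    word distribution q under a topic-word cost matrix C."""
    K = np.exp(-C / eps)              # Gibbs kernel derived from the cost matrix
    u = np.ones_like(p)
    for _ in range(n_iter):           # Sinkhorn fixed-point updates
        v = q / (K.T @ u)
        u = p / (K @ v)
    T = np.diag(u) @ K @ np.diag(v)   # transport plan between topics and words
    return float((T * C).sum())

# Illustrative use: cost = embedding distances between topics and vocabulary words.
topic_emb, word_emb = np.random.randn(5, 8), np.random.randn(50, 8)
C = np.linalg.norm(topic_emb[:, None] - word_emb[None, :], axis=-1)
p = np.full(5, 1 / 5)                 # document's topic distribution
q = np.full(50, 1 / 50)               # document's (empirical) word distribution
loss = sinkhorn_distance(p, q, C)
```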
iclr_2021_bnY0jm4l59 | Memory Optimization for Deep Networks | Deep learning is slowly, but steadily, hitting a memory bottleneck. While the tensor computation in top-of-the-line GPUs increased by 32× over the last five years, the total available memory only grew by 2.5×. This prevents researchers from exploring larger architectures, as training large networks requires more memory for storing intermediate outputs. In this paper, we present MONeT, an automatic framework that minimizes both the memory footprint and computational overhead of deep networks. MONeT jointly optimizes the checkpointing schedule and the implementation of various operators. MONeT is able to outperform all prior hand-tuned operations as well as automated checkpointing. MONeT reduces the overall memory requirement by 3× for various PyTorch models, with a 9-16% overhead in computation. For the same computation cost, MONeT requires 1.2-1.8× less memory than current state-of-the-art automated checkpointing frameworks. Our code will be made publicly available upon acceptance. | spotlight-presentations | The reviewers all agree that Monet proposed in the paper which optimizes for both local and global memory saving in Deep learning models is theoretically sound and experimentally convincing.
Accept! | train | [
"wcvP6mRf6f1",
"D67vDjJpfy",
"e6d-Sb-ykF",
"Q-We0kSL7MK",
"aiT1CG_txr-",
"zgoGYctB5q5",
"T6n8h4FcX1f",
"lehSWIkOa4d",
"wOb533kplA4"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for the insightful questions and analysis.\nWe provide our responses below:\n\n### Solution times and scalability\n\n**[Solution times for the ILP formulation]**\nWe evaluate schedules obtained using solution times set to a maximum of 24 hours.\nWe have added Table 3 and Table 4 in Appendix H... | [
-1,
-1,
-1,
-1,
-1,
7,
7,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
3
] | [
"zgoGYctB5q5",
"T6n8h4FcX1f",
"lehSWIkOa4d",
"wOb533kplA4",
"iclr_2021_bnY0jm4l59",
"iclr_2021_bnY0jm4l59",
"iclr_2021_bnY0jm4l59",
"iclr_2021_bnY0jm4l59",
"iclr_2021_bnY0jm4l59"
] |
iclr_2021_QfTXQiGYudJ | Stabilized Medical Image Attacks | Convolutional Neural Networks (CNNs) have advanced existing medical systems for automatic disease diagnosis. However, a threat to these systems arises that adversarial attacks make CNNs vulnerable. Inaccurate diagnosis results make a negative influence on human healthcare. There is a need to investigate potential adversarial attacks to robustify deep medical diagnosis systems. On the other side, there are several modalities of medical images (e.g., CT, fundus, and endoscopic image) of which each type is significantly different from others. It is more challenging to generate adversarial perturbations for different types of medical images. In this paper, we propose an image-based medical adversarial attack method to consistently produce adversarial perturbations on medical images. The objective function of our method consists of a loss deviation term and a loss stabilization term. The loss deviation term increases the divergence between the CNN prediction of an adversarial example and its ground truth label. Meanwhile, the loss stabilization term ensures similar CNN predictions of this example and its smoothed input. From the perspective of the whole iterations for perturbation generation, the proposed loss stabilization term exhaustively searches the perturbation space to smooth the single spot for local optimum escape. We further analyze the KL-divergence of the proposed loss function and find that the loss stabilization term makes the perturbations updated towards a fixed objective spot while deviating from the ground truth. This stabilization ensures the proposed medical attack effective for different types of medical images while producing perturbations in small variance. Experiments on several medical image analysis benchmarks including the recent COVID-19 dataset show the stability of the proposed method. | spotlight-presentations | The paper proposes to use a regularization term for stabilizing the perturbation trajectories in generating adversarial examples for medical image tasks. The authors tested the effectiveness of their proposal on different medical image datasets obtained by different modalities, and the experimental results are generally encouraging.
All the reviewers see the value of the paper and give positive comments. At the same time, they also point out some aspects for further improvement, including
1) The datasets used are relatively small
2) The title was a little misleading since the paper only tackles image attacks (while the original title was "stabilized medical attacks").
3) Case studies and visualization are needed to help people better understand the paper
The authors have done a good job in their rebuttal and paper revision, by adding experiments on larger datasets, changing the title to “stabilized medical image attacks”, and adding some geometric figures for better illustration. These have largely addressed the concerns of the reviewers, and we see no problem with accepting the paper.
| train | [
"ZCZ3TQekh3y",
"du1EufUuAIp",
"38PL-rUgfC2"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes to use a regularization term for stabilizing the perturbation trajectories in generating adversarial examples for medical image tasks. More specifically, they introduce a loss stabilization term which forces perturbed inputs to be close to smoothed perturbed inputs in the CNN output space. The a... | [
7,
8,
7
] | [
3,
5,
4
] | [
"iclr_2021_QfTXQiGYudJ",
"iclr_2021_QfTXQiGYudJ",
"iclr_2021_QfTXQiGYudJ"
] |
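The abstract above specifies the attack objective as a loss deviation term (push the CNN's prediction on the adversarial example away from the ground truth) plus a loss stabilization term (keep the prediction on the example close to that of its smoothed version). A minimal PGD-style PyTorch sketch of one iteration with such an objective follows; the average-pooling smoother, the weight `lam`, and the step sizes are illustrative assumptions, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def attack_step(model, x_adv, x_clean, y, lam=1.0, eps=8 / 255, alpha=2 / 255):
    """One ascent step on: deviation(adv, label) - lam * KL(adv || smoothed adv)."""
    x_adv = x_adv.clone().detach().requires_grad_(True)
    x_smooth = F.avg_pool2d(x_adv, kernel_size=3, stride=1, padding=1)  # simple smoother
    logits, logits_smooth = model(x_adv), model(x_smooth)
    deviation = F.cross_entropy(logits, y)               # diverge from the ground truth
    stabilization = F.kl_div(F.log_softmax(logits, dim=1),
                             F.softmax(logits_smooth, dim=1),
                             reduction="batchmean")      # stay close to smoothed prediction
    (deviation - lam * stabilization).backward()
    with torch.no_grad():
        x_next = x_adv + alpha * x_adv.grad.sign()
        x_next = x_clean + (x_next - x_clean).clamp(-eps, eps)  # stay in the eps-ball
        return x_next.clamp(0, 1)
```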
iclr_2021_LwEQnp6CYev | Quantifying Differences in Reward Functions | For many tasks, the reward function is inaccessible to introspection or too complex to be specified procedurally, and must instead be learned from user data. Prior work has evaluated learned reward functions by evaluating policies optimized for the learned reward. However, this method cannot distinguish between the learned reward function failing to reflect user preferences and the policy optimization process failing to optimize the learned reward. Moreover, this method can only tell us about behavior in the evaluation environment, but the reward may incentivize very different behavior in even a slightly different deployment environment. To address these problems, we introduce the Equivalent-Policy Invariant Comparison (EPIC) distance to quantify the difference between two reward functions directly, without a policy optimization step. We prove EPIC is invariant on an equivalence class of reward functions that always induce the same optimal policy. Furthermore, we find EPIC can be efficiently approximated and is more robust than baselines to the choice of coverage distribution. Finally, we show that EPIC distance bounds the regret of optimal policies even under different transition dynamics, and we confirm empirically that it predicts policy training success. Our source code is available at https://github.com/HumanCompatibleAI/evaluating-rewards. | spotlight-presentations | The proposed approach for evaluating reward functions is theoretically grounded while having several properties appealing to practical RL tasks. This novel approach fills a gap in the literature. All reviewers agree that this paper has a place at ICLR. | train | [
"tgUGeaVRIK6",
"Z8Q70Q-0b7_",
"2R8ewttwI7U",
"FxPF0f--DAM",
"EC2XIwCeIC",
"PE4HODkbf-_",
"ySn6_8762D",
"A_Pidpgz8e2",
"yVgkzXwqxRa",
"fYTArCLuF6J",
"sNmMgqEgAWy",
"-l5z2ZjAni",
"eIO1HMU68Mn",
"Vxx6EjGuwmh",
"ejl23_XNNRU",
"GGh4HSsluPE"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"- This paper proposes one main method EPIC to measure the differences between the reward functions of MDPs, and two weaker baselines methods NPEC, ERC. The methods are useful in directly comparing two different reward functions on a common MDPs, without running RL algorithms and comparing the resulted performance.... | [
6,
8,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
2,
4,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2021_LwEQnp6CYev",
"iclr_2021_LwEQnp6CYev",
"iclr_2021_LwEQnp6CYev",
"A_Pidpgz8e2",
"Z8Q70Q-0b7_",
"eIO1HMU68Mn",
"Vxx6EjGuwmh",
"iclr_2021_LwEQnp6CYev",
"fYTArCLuF6J",
"sNmMgqEgAWy",
"ejl23_XNNRU",
"tgUGeaVRIK6",
"GGh4HSsluPE",
"A_Pidpgz8e2",
"Z8Q70Q-0b7_",
"iclr_2021_LwEQnp6CYe... |
iclr_2021_kHSu4ebxFXY | MARS: Markov Molecular Sampling for Multi-objective Drug Discovery | Searching for novel molecules with desired chemical properties is crucial in drug discovery. Existing work focuses on developing neural models to generate either molecular sequences or chemical graphs. However, it remains a big challenge to find novel and diverse compounds satisfying several properties. In this paper, we propose MARS, a method for multi-objective drug molecule discovery. MARS is based on the idea of generating the chemical candidates by iteratively editing fragments of molecular graphs. To search for high-quality candidates, it employs Markov chain Monte Carlo sampling (MCMC) on molecules with an annealing scheme and an adaptive proposal. To further improve sample efficiency, MARS uses a graph neural network (GNN) to represent and select candidate edits, where the GNN is trained on-the-fly with samples from MCMC. Experiments show that MARS achieves state-of-the-art performance in various multi-objective settings where molecular bio-activity, drug-likeness, and synthesizability are considered. Remarkably, in the most challenging setting where all four objectives are simultaneously optimized, our approach outperforms previous methods significantly in comprehensive evaluations. The code is available at https://github.com/yutxie/mars. | spotlight-presentations | This work proposes a method for generating candidate molecules using a novel fragment-based MCMC proposal mechanism.
Pros:
* Well-written paper
* Novel idea for an important application
* Very good empirical performance compared to the state-of-the-art in multi-objective molecule generation
* Careful ablation studies
Cons:
* Some details were missing (runtime, experimental details) and have been added to the revised version.
The authors engaged in an extensive discussion with the reviewers and modified their paper to address the reviewer concerns.
After discussions three reviewers recommend accepting the work and consider it a novel and useful contribution to the field.
One reviewer (Reviewer 3) is not satisfied by the authors' comments and has concerns about the work regarding: asymptotic correctness of the sampling; fairness of the experimental comparison; and computational complexity. The authors provide detailed justifications for their choices. After looking at the discussion, there are two factors:
1. technical arguments regarding the correctness of the sampling method; the authors justify the correctness by known results for adaptive MCMC methods, and the argument is sound, and the area chair fully accepts the authors' arguments as correct and applicable.
2. extent of the experimental evaluation and suitable baseline methods; this is partially subjective. The authors provide extensive experiments in their work and justify the exclusion of certain methods in that they do not easily apply to the multi-objective setting. In addition, Reviewer 3 demands a comparison of generated molecules per unit time, which is plausibly useful; however, none of the prior works have used such a metric in a consistent manner, and it is clearly challenging to do so fairly, as such a metric would depend on specifics of the implementation and computer. The authors have updated their paper and added runtime information for their method. The area chair fully accepts the authors' arguments and justification for the current experimental scope.
In summary the area chair considers the remaining concerns by Reviewer 3 as invalid; in particular, the authors have made extensive efforts to engage and educate the reviewer. | val | [
"rT8gO9kBP0r",
"l8dOU5F9bU",
"grsCEbvRYXq",
"1gXXA_KFd71",
"3QrwTrFJln9",
"v34rOezRVtw",
"8rf7SW5uAB9",
"v2E2moUJOdL",
"fk-bIQuU4N",
"Ra7lWdmXT-Z",
"h3_RuHWemhp",
"TpeD1jXlM9C",
"D1sbHygVoZh",
"X-qRLmC4mSc",
"0nBFEm3AhNS",
"NNf_T7ylDnY",
"02NDzIaiWQF",
"WKsWGhJ06Lk"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"> **Q: Complexity**: 1) In MARS, the sampling procedure can not be shared by different molecules. That means for each molecule, we need to sample independently, which costs lots of time and maybe too expensive; 2) When you calculate the running time you also need to take train time of MPNN into account instead of ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4,
5
] | [
"3QrwTrFJln9",
"3QrwTrFJln9",
"3QrwTrFJln9",
"iclr_2021_kHSu4ebxFXY",
"v2E2moUJOdL",
"v2E2moUJOdL",
"v2E2moUJOdL",
"0nBFEm3AhNS",
"0nBFEm3AhNS",
"0nBFEm3AhNS",
"0nBFEm3AhNS",
"NNf_T7ylDnY",
"02NDzIaiWQF",
"WKsWGhJ06Lk",
"iclr_2021_kHSu4ebxFXY",
"iclr_2021_kHSu4ebxFXY",
"iclr_2021_kHS... |
iclr_2021_Jnspzp-oIZE | Gauge Equivariant Mesh CNNs: Anisotropic convolutions on geometric graphs | A common approach to define convolutions on meshes is to interpret them as a graph and apply graph convolutional networks (GCNs). Such GCNs utilize isotropic kernels and are therefore insensitive to the relative orientation of vertices and thus to the geometry of the mesh as a whole. We propose Gauge Equivariant Mesh CNNs which generalize GCNs to apply anisotropic gauge equivariant kernels. Since the resulting features carry orientation information, we introduce a geometric message passing scheme defined by parallel transporting features over mesh edges. Our experiments validate the significantly improved expressivity of the proposed model over conventional GCNs and other methods. | spotlight-presentations |
This paper addresses a crucial problem with graph convolutions on meshes.
The authors identify the issues related to existing networks and devise a sensible approach.
The work presents a novel message passing GNN operator for meshes that is equivariant under gauge transformations.
The reviewers unanimously agree on the both the importance of the problem and the impact the proposed work could have.
Suggestions for next version:
- The paper is unreadable without the appendix, and it would be better to make it self-contained.
- Additional references as suggested in the reviews.
- Expanded experiments, as suggested by R4, will also improve the reader's confidence in the method.
I would recommend acceptance. I would request the authors to release a sufficiently documented and easy-to-use implementation. This not only allows readers to build on this work but also increases the overall impact of this method.
"TX1mhDCFU5s",
"57XOAZ_6brf",
"l_FhaPC7DGO",
"h1AX_mzKP1n",
"Atl1VdNItzr",
"CATHR98LlW",
"lzT0xjOltKf",
"GT1sp3jRCe4",
"_KAJRuislCw",
"H5eJcfzcAX",
"0jy5G9arOAe",
"eiiu_hjzzku",
"wX9Gpq5wm0M",
"jU-Hv5TKPRU",
"vBngZDFGLIn"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"Although a mesh embedded in 3D space may be treated as a graph, a graph convolution network uses the same weights for each neighbor and is thus permutation invariant, which is the incorrect inductive bias for a mesh: the neighbors of a node are spatially related and may not be arbitrarily permuted. CNNs, GCNs, an... | [
7,
7,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
9
] | [
4,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2021_Jnspzp-oIZE",
"iclr_2021_Jnspzp-oIZE",
"iclr_2021_Jnspzp-oIZE",
"TX1mhDCFU5s",
"57XOAZ_6brf",
"l_FhaPC7DGO",
"H5eJcfzcAX",
"_KAJRuislCw",
"eiiu_hjzzku",
"iclr_2021_Jnspzp-oIZE",
"vBngZDFGLIn",
"TX1mhDCFU5s",
"57XOAZ_6brf",
"l_FhaPC7DGO",
"iclr_2021_Jnspzp-oIZE"
] |
iclr_2021_3UDSdyIcBDA | RMSprop converges with proper hyper-parameter | Despite the existence of divergence examples, RMSprop remains
one of the most popular algorithms in machine learning. Towards closing the gap between theory and practice, we prove that RMSprop converges with proper choice of hyper-parameters under certain conditions. More specifically, we prove that when the hyper-parameter β2 is close enough to 1, RMSprop and its random shuffling version converge to a bounded region in general, and to critical points in the interpolation regime. It is worth mentioning that our results do not depend on the "bounded gradient" assumption, which is often the key assumption utilized by existing theoretical work for Adam-type adaptive gradient methods. Removing this assumption allows us to establish a phase transition from divergence to non-divergence for RMSprop.
Finally, based on our theory, we conjecture that in practice there is a critical threshold β2*, such that RMSprop generates reasonably good results only if 1 > β2 ≥ β2*. We provide empirical evidence for such a phase transition in our numerical experiments. | spotlight-presentations | The paper shows convergence results for RMSprop in certain regimes. The reviews are uniformly positive about this paper and I recommend acceptance. | train | [
"N3S5c7KUmA",
"LK0CPBimjq",
"yg0Z0BPJhnE",
"hkya5mHmDvE",
"dpuRDjWPy3D",
"cjfcGuf98pp"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for the detailed and constructive comments! Below we provide the responses to specific comments. \n\n1. RMSProp and SGD\n\nThanks for the suggestion. Our convergence rate in high $\\beta_2$ regime is $O\\left(\\frac{\\log T}{\\sqrt{T}}\\right)$, and under the nonconvex setting, the best proved rate of SG... | [
-1,
-1,
-1,
6,
8,
8
] | [
-1,
-1,
-1,
3,
3,
3
] | [
"dpuRDjWPy3D",
"hkya5mHmDvE",
"cjfcGuf98pp",
"iclr_2021_3UDSdyIcBDA",
"iclr_2021_3UDSdyIcBDA",
"iclr_2021_3UDSdyIcBDA"
] |
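For reference, a minimal NumPy sketch of the standard RMSprop update, showing where the hyper-parameter β2 from the abstract enters; the paper's claim is that convergence holds when β2 is close enough to 1, while small β2 admits divergence examples. Variable names and default values here are illustrative.

```python
import numpy as np

def rmsprop_step(theta, grad, v, lr=1e-3, beta2=0.999, eps=1e-8):
    """One RMSprop update; beta2 controls the running second-moment estimate."""
    v = beta2 * v + (1 - beta2) * grad ** 2
    theta = theta - lr * grad / (np.sqrt(v) + eps)
    return theta, v

# Illustrative use:
theta, v = np.zeros(3), np.zeros(3)
theta, v = rmsprop_step(theta, np.array([0.1, -0.2, 0.3]), v)
```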
iclr_2021_YwpZmcAehZ | Revisiting Dynamic Convolution via Matrix Decomposition | Recent research in dynamic convolution shows substantial performance boost for efficient CNNs, due to the adaptive aggregation of K static convolution kernels. It has two limitations: (a) it increases the number of convolutional weights by K-times, and (b) the joint optimization of dynamic attention and static convolution kernels is challenging. In this paper, we revisit it from a new perspective of matrix decomposition and reveal the key issue is that dynamic convolution applies dynamic attention over channel groups after projecting into a higher dimensional latent space. To address this issue, we propose dynamic channel fusion to replace dynamic attention over channel groups. Dynamic channel fusion not only enables significant dimension reduction of the latent space, but also mitigates the joint optimization difficulty. As a result, our method is easier to train and requires significantly fewer parameters without sacrificing accuracy. Source code is at https://github.com/liyunsheng13/dcd. | poster-presentations | This paper improves the dynamic convolution operation by replacing the dynamic attention over channel groups with channel fusion in a low-dimensional space. It includes extensive experiments with reasonable baselines. Dynamic convolutions are a fruitful method for making convnets more efficient, and this paper further improves their efficiency and efficacy with a novel technique. Reviewers all agreed that the paper was clearly written (though some parts were improved after rebuttal). | train | [
"eH_VlYPx46t",
"QjDC1XnOlO4",
"xR7fekP-3Ww",
"YUA0T_3X5dg",
"IvIRfKnDz6",
"CEUlfAwcx5e",
"wPFR1gSmn3",
"TZGARPzSKS",
"E549Wy8QkJ",
"yrzLX8kqhm8",
"0IKJS7vBS3U",
"oyIAwqAYN4Q",
"-W9gdadJTPw"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"# Post-rebuttal updates\nI've tentatively updated my review score from 6 to 7 for the following reasons:\n* The authors clarified the relationship of their proposed method with Squeeze-and-Excite, and promised to add an explanation to their paper.\n* The authors clarified the relationship between their method and ... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
3
] | [
"iclr_2021_YwpZmcAehZ",
"oyIAwqAYN4Q",
"-W9gdadJTPw",
"IvIRfKnDz6",
"TZGARPzSKS",
"eH_VlYPx46t",
"eH_VlYPx46t",
"eH_VlYPx46t",
"oyIAwqAYN4Q",
"0IKJS7vBS3U",
"iclr_2021_YwpZmcAehZ",
"iclr_2021_YwpZmcAehZ",
"iclr_2021_YwpZmcAehZ"
] |
iclr_2021_A5VV3UyIQz | Explainable Deep One-Class Classification | Deep one-class classification variants for anomaly detection learn a mapping that concentrates nominal samples in feature space causing anomalies to be mapped away. Because this transformation is highly non-linear, finding interpretations poses a significant challenge. In this paper we present an explainable deep one-class classification method, Fully Convolutional Data Description (FCDD), where the mapped samples are themselves also an explanation heatmap. FCDD yields competitive detection performance and provides reasonable explanations on common anomaly detection benchmarks with CIFAR-10 and ImageNet. On MVTec-AD, a recent manufacturing dataset offering ground-truth anomaly maps, FCDD sets a new state of the art in the unsupervised setting. Our method can incorporate ground-truth anomaly maps during training and using even a few of these (~5) improves performance significantly. Finally, using FCDD's explanations we demonstrate the vulnerability of deep one-class classification models to spurious image features such as image watermarks. | poster-presentations | The paper touches upon explainable anomaly detection. To that extent, it modifies the hypersphere classifier towards fully convolutional data description (FCDD). This is, as also pointed out by two of the reviewers, a direct application of a fully convolutional network within the hyperspherical classifier. However, the paper also shows how to then upsample the receptive field using a strided transposed convolution with a fixed Gaussian kernel. Both, together with tackling explainable anomaly detection, are important. Moreover, the empirical evaluation is quite exhaustive and shows several benefits compared to state-of-the-art. So, yes, incremental, but incremental for a very interesting and important case. | train | [
"1xfBFmGXaC2",
"4uFVLZcsHS4",
"MO3U0FgT0Qv",
"IOkTzBO22Mm",
"KL-9fT-IhEE",
"qtMgzBrYPUB",
"ngPoMXYW4mP"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank all the reviewers for their helpful comments and are pleased that our work has been well received overall:\n\n**R1: \"The approach is well-motivated and compares well to the state of the art AD-methods.\"** \\\n**R1: \"The paper provides sophisticated theoretical as well as empirical insights.\"** \\\n**R... | [
-1,
-1,
-1,
-1,
7,
8,
4
] | [
-1,
-1,
-1,
-1,
4,
4,
1
] | [
"iclr_2021_A5VV3UyIQz",
"KL-9fT-IhEE",
"qtMgzBrYPUB",
"ngPoMXYW4mP",
"iclr_2021_A5VV3UyIQz",
"iclr_2021_A5VV3UyIQz",
"iclr_2021_A5VV3UyIQz"
] |
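The abstract and meta-review above describe the explanation mechanism concretely enough to sketch: the fully convolutional network's output map itself acts as a low-resolution anomaly heatmap, which is upsampled with a strided transposed convolution using a fixed Gaussian kernel. In the PyTorch sketch below, the kernel size, stride, and the use of the feature-map norm as the per-location score are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(size=8, sigma=2.0):
    """Fixed (non-learned) 2D Gaussian kernel."""
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    k = torch.outer(g, g)
    return (k / k.sum()).view(1, 1, size, size)

def fcdd_heatmap(fcn_output, stride=4):
    """Upsample a low-resolution FCN output map into a full-resolution heatmap."""
    low_res = fcn_output.norm(dim=1, keepdim=True)          # per-location anomaly score
    kernel = gaussian_kernel(size=2 * stride, sigma=stride / 2).to(low_res.device)
    return F.conv_transpose2d(low_res, kernel, stride=stride, padding=stride // 2)

heatmap = fcdd_heatmap(torch.randn(1, 64, 28, 28))           # -> (1, 1, 112, 112)
```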
iclr_2021_lU5Rs_wCweN | Taking Notes on the Fly Helps Language Pre-Training | How to make unsupervised language pre-training more efficient and less resource-intensive is an important research direction in NLP. In this paper, we focus on improving the efficiency of language pre-training methods through providing better data utilization. It is well-known that in language data corpus, words follow a heavy-tail distribution. A large proportion of words appear only very few times and the embeddings of rare words are usually poorly optimized. We argue that such embeddings carry inadequate semantic signals, which could make the data utilization inefficient and slow down the pre-training of the entire model. To mitigate this problem, we propose Taking Notes on the Fly (TNF), which takes notes for rare words on the fly during pre-training to help the model understand them when they occur next time. Specifically, TNF maintains a note dictionary and saves a rare word's contextual information in it as notes when the rare word occurs in a sentence. When the same rare word occurs again during training, the note information saved beforehand can be employed to enhance the semantics of the current sentence. By doing so, TNF provides a better data utilization since cross-sentence information is employed to cover the inadequate semantics caused by rare words in the sentences. We implement TNF on both BERT and ELECTRA to check its efficiency and effectiveness. Experimental results show that TNF's training time is 60% less than its backbone pre-training models when reaching the same performance. When trained with same number of iterations, TNF outperforms its backbone methods on most of downstream tasks and the average GLUE score. Code is attached in the supplementary material. | poster-presentations | The authors propose an approach for pre-training that involves "taking notes on the fly" for rare words. The paper stirred a lively discussion on the reasons for the reported results, which the authors followed-up with new experiments and findings that convinced the reviewers that indeed their approach is valid and interesting. Thus, I am recommending acceptance. | train | [
"Qc0DA7fj0Dn",
"LD6h1-8EGhA",
"eLEUZpJlpGm",
"hwxn__QtOXB",
"UjRfDZyugv3",
"bwdCMiYR3IM",
"Fn1sq_QyNiH",
"A8bhnN49sbV",
"7Zi5Fw4IEwZ",
"X5Vd0tD6yDk",
"qaon3QH0jIP",
"6VSXyrFpymN",
"6-Po8MUj1tg",
"MR7CS_Vyu47",
"ZWmZFGPFit",
"H_-2owwtcd",
"c74UxImYUn",
"QYOYB5Tajks",
"SRZWkYs76l9"... | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_r... | [
"Your comments have indeed made us curious about what can actually happen if we optimize the notes via backprop. We will run this ablation and take a look at the gradients.",
"We thank AC for handling this paper and thank all reviewers for their kind help and useful suggestions. The comments have enlightened us t... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"UjRfDZyugv3",
"iclr_2021_lU5Rs_wCweN",
"bwdCMiYR3IM",
"iclr_2021_lU5Rs_wCweN",
"A8bhnN49sbV",
"Fn1sq_QyNiH",
"6VSXyrFpymN",
"X5Vd0tD6yDk",
"X5Vd0tD6yDk",
"QYOYB5Tajks",
"iclr_2021_lU5Rs_wCweN",
"H_-2owwtcd",
"SRZWkYs76l9",
"cdFOrFGz6wJ",
"Lud0JlJRD9h",
"qaon3QH0jIP",
"H_-2owwtcd",
... |
iclr_2021_l-LGlk4Yl6G | Mixed-Features Vectors and Subspace Splitting | Motivated by metagenomics, recommender systems, dictionary learning, and related problems, this paper introduces subspace splitting (SS): the task of clustering the entries of what we call a mixed-features vector, that is, a vector whose subsets of coordinates agree with a collection of subspaces. We derive precise identifiability conditions under which SS is well-posed, thus providing the first fundamental theory for this problem. We also propose the first three practical SS algorithms, each with advantages and disadvantages: a random sampling method, a projection-based greedy heuristic, and an alternating Lloyd-type algorithm; all allow noise, outliers, and missing data. Our extensive experiments outline the performance of our algorithms, and in lack of other SS algorithms, for reference we compare against methods for tightly related problems, like robust matched subspace detection and maximum feasible subsystem, which are special simpler cases of SS. | poster-presentations | The paper considers a new linear-algebraic problem motivated by applications such as metagenomics which requires the algorithm to partition the coordinates of a long noisy vector according to a few known subspaces. A number of theoretical questions were asked (e.g., identifiability; efficient algorithms and their error bounds; etc).
The reviewers generally liked the paper for what it does. Specific suggestions were raised by the reviewers, including that the paper discusses the motivating applications at length but does not end up evaluating the proposed algorithms on any of them, and that the main theoretical results were neither technically challenging nor surprising (although the authors provided a fair justification in their rebuttal).
The AC finds the paper an outlier in terms of topic among papers typically received by ICLR, but likes the paper precisely because it is different. The authors are encouraged to discuss the connections of the specific problem to representation learning and machine learning in general.
Overall, I believe the paper is a solid borderline accept.
| train | [
"vO6_xW7RiTi",
"KqwqNqVBIlD",
"yKwNtHYIvJ",
"OlpRGs8Frw9",
"k_j47-8k-BD",
"U21z4qScv2-",
"jpcy1bdF9Sn"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for their comments.\n\nThis paper introduces the mixed features vectors model, and the subspace splitting problem. As such, our main focus was to explore its feasibility, motivate its applicability, and start developing fundamental theory. Now that we have established these points, our future... | [
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"U21z4qScv2-",
"jpcy1bdF9Sn",
"k_j47-8k-BD",
"iclr_2021_l-LGlk4Yl6G",
"iclr_2021_l-LGlk4Yl6G",
"iclr_2021_l-LGlk4Yl6G",
"iclr_2021_l-LGlk4Yl6G"
] |
iclr_2021_o966_Is_nPA | Neural Pruning via Growing Regularization | Regularization has long been utilized to learn sparsity in deep neural network pruning. However, its role is mainly explored in the small penalty strength regime. In this work, we extend its application to a new scenario where the regularization grows large gradually to tackle two central problems of pruning: pruning schedule and weight importance scoring. (1) The former topic is newly brought up in this work, which we find critical to the pruning performance while receives little research attention. Specifically, we propose an L2 regularization variant with rising penalty factors and show it can bring significant accuracy gains compared with its one-shot counterpart, even when the same weights are removed. (2) The growing penalty scheme also brings us an approach to exploit the Hessian information for more accurate pruning without knowing their specific values, thus not bothered by the common Hessian approximation problems. Empirically, the proposed algorithms are easy to implement and scalable to large datasets and networks in both structured and unstructured pruning. Their effectiveness is demonstrated with modern deep neural networks on the CIFAR and ImageNet datasets, achieving competitive results compared to many state-of-the-art algorithms. Our code and trained models are publicly available at https://github.com/mingsun-tse/regularization-pruning. | poster-presentations | This paper introduces a novel pruning algorithm for neural networks, gently regularizing the weights away (through weight decay) and using Hessian information instead of simple magnitude. All in all an idea that is simple and effective, and could be of interest to a large audience.
AC | test | [
"twxK8DYsy4y",
"6rRos5i0yO",
"jHvpZGpspHa",
"Hw5QiV4GJDM",
"7itGm67v_s",
"ofSpiXmWcJC",
"VPS3_K4s55y",
"OjyU0AHIJPL",
"PHjmCFBg75Y"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"Summary:\nThe authors propose regularization-based pruning methods with the penalty factors uniformly increased over the training session. The first algorithm (GReg-1) sorts the filters by L1-norm and only applies the increasing regularization to the “unimportant” filters; the second one (GReg-2) applies the incre... | [
8,
7,
6,
-1,
-1,
-1,
-1,
-1,
7
] | [
5,
4,
5,
-1,
-1,
-1,
-1,
-1,
5
] | [
"iclr_2021_o966_Is_nPA",
"iclr_2021_o966_Is_nPA",
"iclr_2021_o966_Is_nPA",
"jHvpZGpspHa",
"twxK8DYsy4y",
"PHjmCFBg75Y",
"jHvpZGpspHa",
"6rRos5i0yO",
"iclr_2021_o966_Is_nPA"
] |
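The review summary quoted above describes the mechanism concretely: an L2 penalty whose factor grows uniformly during training, applied only to the filters judged unimportant (e.g., by L1 norm). A minimal PyTorch-style sketch of that growing penalty is below; the growth schedule, pruning ratio, and function name are illustrative assumptions.

```python
import torch

def growing_l2_penalty(conv_weight, step, prune_ratio=0.5, delta=1e-4, interval=10):
    """L2 penalty with a factor that grows over training, applied only to the
    filters with the smallest L1 norms (the 'unimportant' ones)."""
    l1 = conv_weight.abs().flatten(1).sum(dim=1)          # per-filter L1 norm
    n_prune = int(prune_ratio * conv_weight.shape[0])
    unimportant = torch.argsort(l1)[:n_prune]             # smallest-norm filters
    penalty_factor = delta * (step // interval)           # grows as training proceeds
    return penalty_factor * (conv_weight[unimportant] ** 2).sum()

# Illustrative use in a training loop:
# loss = task_loss + growing_l2_penalty(model.conv1.weight, step=global_step)
```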
iclr_2021_6k7VdojAIK | Practical Massively Parallel Monte-Carlo Tree Search Applied to Molecular Design | It is common practice to use large computational resources to train neural networks, known from many examples, such as reinforcement learning applications. However, while massively parallel computing is often used for training models, it is rarely used to search solutions for combinatorial optimization problems. This paper proposes a novel massively parallel Monte-Carlo Tree Search (MP-MCTS) algorithm that works efficiently for a 1,000 worker scale on a distributed memory environment using multiple compute nodes and applies it to molecular design. This paper is the first work that applies distributed MCTS to a real-world and non-game problem. Existing works on large-scale parallel MCTS show efficient scalability in terms of the number of rollouts up to 100 workers. Still, they suffer from the degradation in the quality of the solutions. MP-MCTS maintains the search quality at a larger scale. By running MP-MCTS on 256 CPU cores for only 10 minutes, we obtained candidate molecules with similar scores to non-parallel MCTS running for 42 hours. Moreover, our results based on parallel MCTS (combined with a simple RNN model) significantly outperform existing state-of-the-art work. Our method is generic and is expected to speed up other applications of MCTS. | poster-presentations | I think this is a very solid and good work in the topic of "Practical Massive Parallel MCTS." I think it will be good to open up perspectives among ICLR's audience going beyond just Deep Learning and Machine Learning. I also noted a lot of positive comments during the evaluation and discussion period.
Still, it was a borderline case and not an easy decision (primarily because of the concerns raised by R3 towards the end of the discussion period). In the end the program committee decided that the paper does meet the bar. We think that the work is interesting and original, though not without weaknesses.
| train | [
"KdCxzJHMald",
"IVVK7z1EoPY",
"r3lRasKWoO",
"kerb5v8nrD",
"uGCA_ruZOC9",
"lAeZfFgbi7U",
"5DMCTJQWeAc",
"WGXhaZnVtd",
"IczQVN3F62s",
"eXU3zLpANq",
"-6c5YD53M-s"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Summary: The paper proposes a new algorithm to scale up parallel MCTS. The proposed method, MP-MCTS is a modified version of previous efforts to parallelize MCTS (TDS-UCT, TDS-df-UCT), all using virtual loss and modern MCTS enhancements (NN-guided selection learned offline). In exchange for small additional memory... | [
8,
-1,
-1,
-1,
-1,
-1,
-1,
3,
7,
5,
7
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
2,
4
] | [
"iclr_2021_6k7VdojAIK",
"5DMCTJQWeAc",
"WGXhaZnVtd",
"IczQVN3F62s",
"KdCxzJHMald",
"eXU3zLpANq",
"-6c5YD53M-s",
"iclr_2021_6k7VdojAIK",
"iclr_2021_6k7VdojAIK",
"iclr_2021_6k7VdojAIK",
"iclr_2021_6k7VdojAIK"
] |
iclr_2021_5jRVa89sZk | Empirical Analysis of Unlabeled Entity Problem in Named Entity Recognition | In many scenarios, named entity recognition (NER) models severely suffer from unlabeled entity problem, where the entities of a sentence may not be fully annotated. Through empirical studies performed on synthetic datasets, we find two causes of performance degradation. One is the reduction of annotated entities and the other is treating unlabeled entities as negative instances. The first cause has less impact than the second one and can be mitigated by adopting pretraining language models. The second cause seriously misguides a model in training and greatly affects its performances. Based on the above observations, we propose a general approach, which can almost eliminate the misguidance brought by unlabeled entities. The key idea is to use negative sampling that, to a large extent, avoids training NER models with unlabeled entities. Experiments on synthetic datasets and real-world datasets show that our model is robust to unlabeled entity problem and surpasses prior baselines. On well-annotated datasets, our model is competitive with the state-of-the-art method. | poster-presentations | This paper studies the unlabeled entity problem in NER. Specifically, performance degradation in training of NER models due to unlabeled entities. It analyzes the reason through evaluation on synthetic datasets and finds that it is due to the fact that all the unlabeled entities are treated as negative examples. To cope with the problem, it proposes a negative sampling method which considers the use of only a small subset of unlabeled entities. Experimental results show that the proposed method achieves better performances than the baselines on real-world datasets and achieves competitive performances compared with the state-of-the-art methods on well-annotated datasets.
Pros
• The paper is clearly written.
• The proposed method appears to be technically sound.
• Experimental results support the main claims.
• The findings in the paper are useful for the field.
Cons
• The novelty of the work might not be sufficient.
The authors have addressed some clarity and reference issues pointed out by the reviewers in the rebuttal, and discussions were held among the reviewers.
| val | [
"3E-03klkV0",
"7arLl5w8LMO",
"m5fDH_kmtZj",
"8dzm3IQqHKi",
"3JCcwPG4ho6",
"5bXkHm8JYQT",
"IbfieAbz2Yp",
"JZDh9tCbsgS",
"wM0Cz3H5hlb",
"dN5WewHl75m"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper focuses on the unlabeled entity problem in NER, where the entities of a sentence are incomplete annotated. Since some entities may not be annotated, the performance of models can be degraded. This paper analyzes the performance degradation by evaluating synthetic datasets and finds that all the unlabele... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
8
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"iclr_2021_5jRVa89sZk",
"3E-03klkV0",
"7arLl5w8LMO",
"JZDh9tCbsgS",
"wM0Cz3H5hlb",
"3JCcwPG4ho6",
"dN5WewHl75m",
"iclr_2021_5jRVa89sZk",
"iclr_2021_5jRVa89sZk",
"iclr_2021_5jRVa89sZk"
] |
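The key idea in the abstract above is to stop treating every unlabeled span as a negative instance and instead sample only a small subset of the non-annotated spans as negatives during training. A minimal Python sketch of that sampling step follows; the span enumeration, maximum span length, and sampling rate are illustrative assumptions, not the paper's exact procedure.

```python
import random

def sample_negative_spans(sentence_len, positive_spans, rate=0.3, max_len=10):
    """Sample a subset of unlabeled spans to serve as negative instances.

    positive_spans : set of (start, end) token spans annotated as entities
    rate           : fraction of candidate negatives to keep
    """
    candidates = [
        (i, j)
        for i in range(sentence_len)
        for j in range(i, min(i + max_len, sentence_len))
        if (i, j) not in positive_spans
    ]
    if not candidates:
        return []
    k = max(1, int(rate * len(candidates)))
    return random.sample(candidates, k)

negatives = sample_negative_spans(12, {(2, 3), (7, 7)})
```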
iclr_2021_O-6Pm_d_Q- | Deep Networks and the Multiple Manifold Problem | We study the multiple manifold problem, a binary classification task modeled on applications in machine vision, in which a deep fully-connected neural network is trained to separate two low-dimensional submanifolds of the unit sphere. We provide an analysis of the one-dimensional case, proving for a simple manifold configuration that when the network depth L is large relative to certain geometric and statistical properties of the data, the network width n grows as a sufficiently large polynomial in L, and the number of i.i.d. samples from the manifolds is polynomial in L, randomly-initialized gradient descent rapidly learns to classify the two manifolds perfectly with high probability. Our analysis demonstrates concrete benefits of depth and width in the context of a practically-motivated model problem: the depth acts as a fitting resource, with larger depths corresponding to smoother networks that can more readily separate the class manifolds, and the width acts as a statistical resource, enabling concentration of the randomly-initialized network and its gradients. The argument centers around the "neural tangent kernel" of Jacot et al. and its role in the nonasymptotic analysis of training overparameterized neural networks; to this literature, we contribute essentially optimal rates of concentration for the neural tangent kernel of deep fully-connected ReLU networks, requiring width n≥Lpoly(d0) to achieve uniform concentration of the initial kernel over a d0-dimensional submanifold of the unit sphere Sn0−1, and a nonasymptotic framework for establishing generalization of networks trained in the "NTK regime" with structured data. The proof makes heavy use of martingale concentration to optimally treat statistical dependencies across layers of the initial random network. This approach should be of use in establishing similar results for other network architectures. | poster-presentations | This paper introduces the multiple manifold problem - in a simple setting there are two data manifolds representing the positive and negative samples, and the goal is to train a neural network (or any predictor) that separates these two manifolds. The paper showed that this is possible to do with a deep neural network under certain assumptions - notably on the shape of the manifold and also on the ability of the neural network to represent certain functions (which is harder to verify, and only verified for a 1-d case in the paper). The optimization of neural network falls in the NTK regime but requires new techniques. Overall the question seems very natural and the results are reasonable first steps. There are some concerns about clarity that the authors should address in the paper. | train | [
"6XjRdhzEYB",
"VgYTPqZ6TIq",
"XhH-Q98GXG",
"Fxu39uOgSm",
"ZogqSg3GSbz",
"A9m8L-7mYWU",
"UpdplrwWQ6",
"lPCutTimFAL",
"_qhIZwuRi5B",
"ta16eFzAadg"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors consider a binary classification task. As a model the authors use a deep fully-connected neural network and train it to separate the submanifolds, representing different classes. They assume that sub-manifolds belong to the unit sphere. Also, the authors restrict their analysis to a one-dimensional cas... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
8
] | [
2,
-1,
-1,
-1,
-1,
-1,
-1,
4,
1,
2
] | [
"iclr_2021_O-6Pm_d_Q-",
"iclr_2021_O-6Pm_d_Q-",
"6XjRdhzEYB",
"lPCutTimFAL",
"_qhIZwuRi5B",
"ta16eFzAadg",
"iclr_2021_O-6Pm_d_Q-",
"iclr_2021_O-6Pm_d_Q-",
"iclr_2021_O-6Pm_d_Q-",
"iclr_2021_O-6Pm_d_Q-"
] |
iclr_2021_ZzwDy_wiWv | Knowledge distillation via softmax regression representation learning | This paper addresses the problem of model compression via knowledge distillation. We advocate for a method that optimizes the output feature of the penultimate layer of the student network and hence is directly related to representation learning. Previous distillation methods which typically impose direct feature matching between the student and the teacher do not take into account the classification problem at hand. On the contrary, our distillation method decouples representation learning and classification and utilizes the teacher's pre-trained classifier to train the student's penultimate layer feature. In particular, for the same input image, we wish the teacher's and student's feature to produce the same output when passed through the teacher's classifier which is achieved with a simple L2 loss. Our method is extremely simple to implement and straightforward to train and is shown to consistently outperform previous state-of-the-art methods over a large set of experimental settings including different (a) network architectures, (b) teacher-student capacities, (c) datasets, and (d) domains. The code will be available at \url{https://github.com/jingyang2017/KD_SRRL}. | poster-presentations | This paper proposes a new idea for performing knowledge distillation by leveraging teacher’s classifier to train student’s penultimate layer feature via proposing suitable loss functions. Reviewers appreciate the simultaneous simplicity and effectiveness of the method. A comprehensive set of studies are performed to empirically show the effectiveness of the method. Specifically, the proposed distillation method is shown to outperform state-of-the-art across various network architectures, teacher-student capacities, datasets, and domains. The paper is well-written and is easy to follow. All reviewers rate the paper on the accept side (after the rebuttal) and believe the new perspective this work provides on distillation and its simplicity to implement can lead it to gain high impact. I concur with the reviewers and find this submission a convincing empirical work, and thus recommend for accept.
| val | [
"hcGNUdl-0Jj",
"Hb67yJYl50",
"DLwuQOq2TJ",
"PeQ2q3OUpnd",
"NByt26kmWDb",
"7Mcx8qURV_1",
"oFa1fQdowp5",
"-x0bdLSSSZ",
"dmw6KydW9KI"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"#####################################################################################\nSummary:\n\nThis paper proposes a new formulation of knowledge distillation (KD) for model compression. Different from the classic formulation that matches the logits between student and teacher models, this paper suggests to ma... | [
6,
7,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
"iclr_2021_ZzwDy_wiWv",
"iclr_2021_ZzwDy_wiWv",
"dmw6KydW9KI",
"-x0bdLSSSZ",
"hcGNUdl-0Jj",
"hcGNUdl-0Jj",
"Hb67yJYl50",
"iclr_2021_ZzwDy_wiWv",
"iclr_2021_ZzwDy_wiWv"
] |
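The abstract above states the core loss explicitly: for the same input, the student's penultimate-layer feature and the teacher's feature should produce the same output when passed through the teacher's pre-trained classifier, enforced with a simple L2 loss. A minimal PyTorch sketch of that term is below; the optional projector for matching feature dimensions is an assumption, since the abstract does not say how dimension mismatches are handled.

```python
import torch
import torch.nn.functional as F

def srr_loss(student_feat, teacher_feat, teacher_classifier, projector=None):
    """L2 loss between the teacher classifier's outputs on teacher and student features."""
    if projector is not None:                  # map student dim to teacher dim if needed
        student_feat = projector(student_feat)
    with torch.no_grad():
        target = teacher_classifier(teacher_feat)   # teacher feature through teacher head
    pred = teacher_classifier(student_feat)         # student feature through teacher head
    return F.mse_loss(pred, target)

# Illustrative use: total_loss = ce_loss + alpha * srr_loss(f_student, f_teacher, teacher_fc)
```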
iclr_2021_7wCBOfJ8hJM | Nearest Neighbor Machine Translation | We introduce k-nearest-neighbor machine translation (kNN-MT), which predicts tokens with a nearest-neighbor classifier over a large datastore of cached examples, using representations from a neural translation model for similarity search. This approach requires no additional training and scales to give the decoder direct access to billions of examples at test time, resulting in a highly expressive model that consistently improves performance across many settings. Simply adding nearest-neighbor search improves a state-of-the-art German-English translation model by 1.5 BLEU. kNN-MT allows a single model to be adapted to diverse domains by using a domain-specific datastore, improving results by an average of 9.2 BLEU over zero-shot transfer, and achieving new state-of-the-art results---without training on these domains. A massively multilingual model can also be specialized for particular language pairs, with improvements of 3 BLEU for translating from English into German and Chinese. Qualitatively, kNN-MT is easily interpretable; it combines source and target context to retrieve highly relevant examples. | poster-presentations | This paper extends past work on kNN-augmentation for language modeling to the task of machine translation: a classic parametric NMT model is augmented with kNN retrieval from an external datastore. Decoder-internal token-level representations are used to index and retrieve relevant contexts (source + target prefix) that weigh-in during the final probability calculation for the next target word. Results are extremely positive across a range of MT setups including both in-domain evaluation and domain transfer. Reviews are thorough, but quite divergent. There is general agreement that the proposed approach is reasonable, well-motivated, and clearly described -- and further, that experimental results are both solid and relatively extensive. However, the strongest criticism concerns the paper's relationship with past work. In terms of ML novelty, everyone agrees (including the paper itself) that the proposed methodology is a relatively simple extension of past work on non-conditional language modeling. However, two of the four reviewers strongly feel that, in light of the potentially prohibitive decoding costs, the positive experimental results are not sufficient to make this paper relevant to an ICLR audience given the lack of ML novelty. In contrast, another reviewer strongly takes an opposite stand-point: rather, that the results will be extremely impactful to the MT subcommunity at ICLR since they are unexpected (i.e. that a non-parametric model might compete with highly-tuned NMT systems) and very positive across a range of domains and settings (i.e. in-domain, out-of-domain, multilingual) -- further, that the approach has substantial novelty in the context of MT where parametric models are the norm and that it might inspire substantial future work (e.g. on efficient decoding techniques and further non-parametric techniques) given that it so drastically breaks the current MT mold. The final reviewer shares the concern of the former two about novelty, but is swayed by the experimental results and potential uses for the model (given kNN augmentation is possible without further training) and therefore votes for a marginal accept. After thorough, well-reasoned, and well-intentioned discussion between all four reviewers, the reviews land just barely in favor of acceptance, but with substantial divide. 
After considering the paper, reviews, rebuttal, and discussion I am swayed by the argument that (a) these experimental results are largely unexpected, (b) they are both extremely positive and offer a new trade-off between test and train compute in MT, and (c) that the paper may therefore inspire substantial discussion and follow-up work in the community. Thus I lean in favor of acceptance overall. | train | [
"0fHOaZiowT9",
"4ax9gDZt8gA",
"fqSp7xjTe4N",
"OVoYl0_53bA",
"LZX4QZwQXC",
"50fOVDXq9tn",
"Zss9b1Iar9d",
"czu3xBy8_ID",
"q7nyBjU5k1I",
"Ey3paOnNj7_",
"T_3Kyhbxvg"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper is an extension of [1]. The task in [1] is Language Modeling, while this paper is doing machine translation with the similar idea. The authors propose a non-parametric method for machine translation via a k-nearest-neighbor (KNN) classifier. Specifically, it predicts tokens with a KNN classifier over exa... | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8
] | [
3,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4
] | [
"iclr_2021_7wCBOfJ8hJM",
"iclr_2021_7wCBOfJ8hJM",
"OVoYl0_53bA",
"Zss9b1Iar9d",
"T_3Kyhbxvg",
"Ey3paOnNj7_",
"4ax9gDZt8gA",
"0fHOaZiowT9",
"iclr_2021_7wCBOfJ8hJM",
"iclr_2021_7wCBOfJ8hJM",
"iclr_2021_7wCBOfJ8hJM"
] |
iclr_2021_3SqrRe8FWQ- | WrapNet: Neural Net Inference with Ultra-Low-Precision Arithmetic | Low-precision neural networks represent both weights and activations with few bits, drastically reducing the cost of multiplications. Meanwhile, these products are accumulated using high-precision (typically 32-bit) additions. Additions dominate the arithmetic complexity of inference in quantized (e.g., binary) nets, and high precision is needed to avoid overflow. To further optimize inference, we propose WrapNet, an architecture that adapts neural networks to use low-precision (8-bit) additions while achieving classification accuracy comparable to their 32-bit counterparts. We achieve resilience to low-precision accumulation by inserting a cyclic activation layer that makes results invariant to overflow. We demonstrate the efficacy of our approach using both software and hardware platforms. | poster-presentations | Most of the reviewers agree that this paper presents an interesting idea. Practically implementing a BNN that gains a real-world speedup is challenging, and as past work [1] showed, the bottleneck could shift to other layers (besides the accumulation). The paper would benefit from a thorough discussion of the practical impact of implementing the proposed method and of its relation to past work.
The meta-reviewer decided to accept the paper given the positive aspects, and encourages the authors to further improve the paper per the review comments.
Thank you for submitting the paper to ICLR.
[1] Riptide: Fast End-to-End Binarized Neural Networks
| train | [
"p37xqcUEhDD",
"EWkBW2PUhIN",
"-IqMk1AspQW",
"C4QosnFfzaU",
"tysL1fPZCQg",
"oCralohc7P",
"rfFpQTYeET",
"R7DDHXt5COf",
"6V-XGwj70Zh"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents a method (WrapNet) for the problem of efficient low-precision inference. The main contribution is to allow efficient low-bit (e.g., 8 bit) accumulators via the use of a novel cyclic activation function that constrains the value space of the layer outputs, while still allowing good accuracy.\n\n... | [
7,
7,
-1,
-1,
-1,
-1,
-1,
5,
7
] | [
3,
5,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2021_3SqrRe8FWQ-",
"iclr_2021_3SqrRe8FWQ-",
"R7DDHXt5COf",
"6V-XGwj70Zh",
"p37xqcUEhDD",
"EWkBW2PUhIN",
"iclr_2021_3SqrRe8FWQ-",
"iclr_2021_3SqrRe8FWQ-",
"iclr_2021_3SqrRe8FWQ-"
] |
iclr_2021_oZIvHV04XgC | Wandering within a world: Online contextualized few-shot learning | We aim to bridge the gap between typical human and machine-learning environments by extending the standard framework of few-shot learning to an online, continual setting. In this setting, episodes do not have separate training and testing phases, and instead models are evaluated online while learning novel classes. As in the real world, where the presence of spatiotemporal context helps us retrieve learned skills in the past, our online few-shot learning setting also features an underlying context that changes throughout time. Object classes are correlated within a context and inferring the correct context can lead to better performance. Building upon this setting, we propose a new few-shot learning dataset based on large scale indoor imagery that mimics the visual
experience of an agent wandering within a world. Furthermore, we convert popular few-shot learning approaches into online versions and we also propose a new model that can make use of spatiotemporal contextual information from the recent past. | poster-presentations | This paper proposes a new online contextualized few-shot learning setting, with two associated datasets (notably, including one obtained from trajectories within the real-world Matterport3D reconstructions). A simple recurrent contextualized extension of Prototypical Networks is also proposed as a stronger baseline, demonstrating the need for incorporating such context. The reviewers all agreed that this is an interesting setting combining continual and few-shot learning, offering a more realistic problem that mirrors those that might be encountered by embodied agents. The authors provided very detailed rebuttals, answering some of the questions and concerns raised by the reviewers. In the end, all reviewers agreed that this paper would contribute a significant novel setting, and so I recommend acceptance. I encourage the authors to include modifications related to some of the comments, such as strengthening/clarifying the setting, including metrics, details of the method, etc. | val | [
"o10wFYO7j0o",
"Fh3vyAfRrr",
"vYwdfR2rwio",
"hfh-TA1lRP2",
"Q2PzJLe6hL4",
"xx3gdmCvL8e",
"TXpLpx4Ijk3",
"5-eWx1pJo2x",
"KZ1dgnAqrNO",
"rqXrDg_MAnd",
"S219z92c1i7",
"35oPnOr_ZIO",
"W4Ku2cMDVPZ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a new learning paradigm that combines both few-shot learning(FSL) and continual learning (CL) to provide a more realistic learning environment rather than the traditional train-test-retrain approach in FSL. Two environments are proposed, along with a novel dataset. The evaluation seems to be th... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"iclr_2021_oZIvHV04XgC",
"TXpLpx4Ijk3",
"rqXrDg_MAnd",
"Q2PzJLe6hL4",
"5-eWx1pJo2x",
"35oPnOr_ZIO",
"o10wFYO7j0o",
"S219z92c1i7",
"W4Ku2cMDVPZ",
"xx3gdmCvL8e",
"iclr_2021_oZIvHV04XgC",
"iclr_2021_oZIvHV04XgC",
"iclr_2021_oZIvHV04XgC"
] |
iclr_2021_pW2Q2xLwIMD | Few-Shot Learning via Learning the Representation, Provably | This paper studies few-shot learning via representation learning, where one uses T source tasks with n_1 data per task to learn a representation in order to reduce the sample complexity of a target task for which there is only n_2 (≪ n_1) data. Specifically, we focus on the setting where there exists a good common representation between source and target, and our goal is to understand how much of a sample size reduction is possible. First, we study the setting where this common representation is low-dimensional and provide a risk bound of Õ(dk/(n_1 T) + k/n_2) on the target task for the linear representation class; here d is the ambient input dimension and k (≪ d) is the dimension of the representation. This result bypasses the Ω(1/√T) barrier under the i.i.d. task assumption, and can capture the desired property that all n_1 T samples from source tasks can be pooled together for representation learning. We further extend this result to handle a general representation function class and obtain a similar result. Next, we consider the setting where the common representation may be high-dimensional but is capacity-constrained (say in norm); here, we again demonstrate the advantage of representation learning in both high-dimensional linear regression and neural networks, and show that representation learning can fully utilize all n_1 T samples from source tasks. | poster-presentations | The paper considers the problem of learning a new task with few examples by using related tasks which can exploit shared representations for which more data is available. The paper proves a number of interesting (primarily theoretical) results. | train | [
"7LVY8j3_7g",
"of9rG-FXxKh",
"JfLx28XK-RG",
"nCVmfPcMcjJ",
"tkjMerL1LKA",
"ognVoGb6JF",
"eRZXOQU1TmN",
"1j4d6sGOGb",
"zEnHH1Csaqy"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"#######################################################################\n\nSummary:\nThis paper studies the benefit of few-shot learning for sample complexity, when all the tasks (both source and target task) share the same underline representation. Under some assumptions on the data and tasks, this paper improves... | [
6,
-1,
7,
-1,
-1,
-1,
-1,
8,
6
] | [
4,
-1,
4,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2021_pW2Q2xLwIMD",
"tkjMerL1LKA",
"iclr_2021_pW2Q2xLwIMD",
"7LVY8j3_7g",
"JfLx28XK-RG",
"1j4d6sGOGb",
"zEnHH1Csaqy",
"iclr_2021_pW2Q2xLwIMD",
"iclr_2021_pW2Q2xLwIMD"
] |
iclr_2021_QkRbdiiEjM | AdaGCN: Adaboosting Graph Convolutional Networks into Deep Models | The design of deep graph models still remains to be investigated and the crucial part is how to explore and exploit the knowledge from different hops of neighbors in an efficient way. In this paper, we propose a novel RNN-like deep graph neural network architecture by incorporating AdaBoost into the computation of the network; the proposed graph convolutional network, called AdaGCN (Adaboosting Graph Convolutional Network), has the ability to efficiently extract knowledge from high-order neighbors of current nodes and then integrate knowledge from different hops of neighbors into the network in an AdaBoost way. Different from other graph neural networks that directly stack many graph convolution layers, AdaGCN shares the same base neural network architecture among all "layers" and is recursively optimized, which is similar to an RNN. Besides, we also theoretically establish the connection between AdaGCN and existing graph convolutional methods, presenting the benefits of our proposal. Finally, extensive experiments demonstrate the consistent state-of-the-art prediction performance on graphs across different label rates and the computational advantage of our approach AdaGCN (code is available at https://github.com/datake/AdaGCN). | poster-presentations | Three of the reviewers are very positive about this work, and R3 is slightly concerned about the datasets, writing, and notations, etc. The authors responded to these concerns in detail and have agreed to take care of these comments. Thus an accept is recommended based on the understanding that the authors will fulfil their commitments. | train | [
"_T2_vYCBfgj",
"trBN_XgGJZV",
"J6kdP-xSrhj",
"4nW4WVzP79c",
"Nd3MkkC5XLx",
"HakdAw6ehOt",
"zvBJALHzCXw",
"kKffE7JkQ9a",
"PmccgXMlmUr",
"mCzESxYkOse"
] | [
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank you for your recommendation for acceptance. Here is our clarification. \n\n1. RNN-like architecture.\n\nYou are right that each classifier shares the same architecture, but their parameters are different. Thus, we claim that AdaGCN is only an RNN-like architecture and we will highlight this difference fro... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
3,
3
] | [
"mCzESxYkOse",
"zvBJALHzCXw",
"kKffE7JkQ9a",
"PmccgXMlmUr",
"HakdAw6ehOt",
"iclr_2021_QkRbdiiEjM",
"iclr_2021_QkRbdiiEjM",
"iclr_2021_QkRbdiiEjM",
"iclr_2021_QkRbdiiEjM",
"iclr_2021_QkRbdiiEjM"
] |
iclr_2021_ee6W5UgQLa | MultiModalQA: complex question answering over text, tables and images | When answering complex questions, people can seamlessly combine information from visual, textual and tabular sources.
While interest in models that reason over multiple pieces of evidence has surged in recent years, there has been relatively little work on question answering models that reason across multiple modalities.
In this paper, we present MultiModalQA (MMQA): a challenging question answering dataset that requires joint reasoning over text, tables and images.
We create MMQA using a new framework for generating complex multi-modal questions at scale, harvesting tables from Wikipedia, and attaching images and text paragraphs using entities that appear in each table. We then define a formal language that allows us to take questions that can be answered from a single modality, and combine them to generate cross-modal questions. Last, crowdsourcing workers take these automatically generated questions and rephrase them into more fluent language.
We create 29,918 questions through this procedure, and empirically demonstrate the necessity of a multi-modal multi-hop approach to solve our task: our multi-hop model, ImplicitDecomp, achieves an average F1 of 51.7 over cross-modal questions, substantially outperforming a strong baseline that achieves 38.2 F1, but still lags significantly behind human performance, which is at 90.1 F1. | poster-presentations | The paper presents a new dataset for multimodal QA that is deemed interesting, relevant and well executed by all reviewers. Multimodality in NLP (QA included) is an increasingly important topic and this paper provides a potentially impactful benchmark for research in it. All reviewers acknowledge that.
We hence recommend accepting this paper as a poster. We recommend that the authors further improve the draft before the camera-ready deadline by following the recommendations made by the reviewers, with a particular focus on an extended discussion of prior work on VQA and other areas. The paper should also add more precision about the license(s) related to the images used in the dataset. | val | [
"53uRQcuy5bT",
"uP3pXFxE4me",
"dLy9jX0cwRK",
"TEk3h-pylPH",
"XnUyxrtJU6p",
"AGk16XSE6Q7",
"PoeQEt0QltS",
"92WxaJGCyid"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper introduces MultiModalQA, a dataset that requires joint reasoning over table, text and images. The dataset has been created in a semi-automatic way through Wikipedia tables, the Wikientities in them, their related images and related textual question answer pairs from known text QA datasets. For the collec... | [
6,
-1,
-1,
-1,
-1,
6,
8,
6
] | [
3,
-1,
-1,
-1,
-1,
1,
3,
3
] | [
"iclr_2021_ee6W5UgQLa",
"53uRQcuy5bT",
"92WxaJGCyid",
"PoeQEt0QltS",
"AGk16XSE6Q7",
"iclr_2021_ee6W5UgQLa",
"iclr_2021_ee6W5UgQLa",
"iclr_2021_ee6W5UgQLa"
] |
iclr_2021_73WTGs96kho | Net-DNF: Effective Deep Modeling of Tabular Data | A challenging open question in deep learning is how to handle tabular data. Unlike domains such as image and natural language processing, where deep architectures prevail, there is still no widely accepted neural architecture that dominates tabular data. As a step toward bridging this gap, we present Net-DNF, a novel generic architecture whose inductive bias elicits models whose structure corresponds to logical Boolean formulas in disjunctive normal form (DNF) over affine soft-threshold decision terms. Net-DNFs also promote localized decisions that are taken over small subsets of the features. We present extensive experiments showing that Net-DNFs significantly and consistently outperform fully connected networks over tabular data. With relatively few hyperparameters, Net-DNFs open the door to practical end-to-end handling of tabular data using neural networks. We present ablation studies, which justify the design choices of Net-DNF, including the inductive bias elements, namely, Boolean formulation, locality, and feature selection.
| poster-presentations | The paper proposes an end-to-end architecture, Net-DNF, for handling tabular data. This is a novel approach in the relatively under-explored domain of applying neural networks to tabular data; the paper also presents justification of the design choices via ablation studies. The paper is clearly written, and the empirical results are convincing.
| train | [
"LiMjsRKMUUI",
"aJ8mkn4lHlO",
"S_JZaOaMxgk",
"q1_rl8j34jF",
"pNnNSLtmbgZ",
"9pDA55ZUfu1"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a neural architecture that emulates the characteristics of decision-tree variants, in the hope of mirroring their successes on tabular data. The architecture consists of three components: DNNF blocks, feature-selection masks, and spacial-localization weightings of DNNF blocks. I find these comp... | [
6,
-1,
-1,
-1,
6,
7
] | [
4,
-1,
-1,
-1,
3,
2
] | [
"iclr_2021_73WTGs96kho",
"LiMjsRKMUUI",
"9pDA55ZUfu1",
"pNnNSLtmbgZ",
"iclr_2021_73WTGs96kho",
"iclr_2021_73WTGs96kho"
] |
iclr_2021_7R7fAoUygoa | Optimal Regularization can Mitigate Double Descent | Recent empirical and theoretical studies have shown that many learning algorithms -- from linear regression to neural networks -- can have test performance that is non-monotonic in quantities such as the sample size and model size. This striking phenomenon, often referred to as "double descent", has raised the question of whether we need to re-think our current understanding of generalization. In this work, we study whether the double-descent phenomenon can be avoided by using optimal regularization. Theoretically, we prove that for certain linear regression models with isotropic data distribution, optimally-tuned ℓ2 regularization achieves monotonic test performance as we grow either the sample size or the model size.
We also demonstrate empirically that optimally-tuned ℓ2 regularization can mitigate double descent for more general models, including neural networks.
Our results suggest that it may also be informative to study the test risk scalings of various algorithms in the context of appropriately tuned regularization. | poster-presentations | Quality: the paper takes an important question and analyzes it well from a theoretical angle; it also provides empirical evidence to back up its main message in more complex models. The proofs are non-trivial. The paper adds value in improving our understanding of the double descent phenomenon by providing a clear picture of the non-asymptotic regime.
Clarity: The motivation of studying the double-descent phenomenon with optimal regularization is well-explained in the introduction. Connections and comparisons with existing related works are discussed clearly. The paper is clearly written, and exposes the results in a clear and accessible fashion.
Originality: The presented theoretical results on the linear regression model are non-asymptotic, which is new and different from existing works.
Significance: The proof techniques seem to heavily depend on the specific choice of the loss function and the regularizer, that is, the mean squared loss and the ridge penalty. It is not clear if the techniques can generalize to other settings, which affects its significance.
Main Pros:
- the paper takes an important question and analyzes it well from a theoretical angle. The proofs are non-trivial; the paper adds value in improving our understanding of the double descent phenomenon by providing a clear picture of the non-asymptotic regime.
Main Cons:
- Generality of the results. The paper mainly focuses on a simplified linear regression model, where the response variable is linearly generated using some ground-truth parameters β*.
- The experiments need to be more extensive and better-explained, especially for the CIFAR-100 experiments. It is important to discuss this difference clearly at the beginning. | train | [
"9lknDoIrEpm",
"0uLR7_q331c",
"wW5XLve9llz",
"Dn3-o2c5hpa",
"cI5weEF_tmB",
"ycIX1N10mVX",
"xjpZpc2nWtV",
"lGMdQLIbhm8",
"p5C66sJzsQY",
"uOWLuto4noB",
"04PlfCC7pi5"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"public",
"public",
"author"
] | [
"he paper studies the surprising phenomenon of “double descent” in machine learning models which has recently come into light through many prior works. The phenomenon is used to describe the behavior of test performance of an estimator as the model parameters (complexity) or the number of samples are increased. It ... | [
7,
7,
7,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
4,
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2021_7R7fAoUygoa",
"iclr_2021_7R7fAoUygoa",
"iclr_2021_7R7fAoUygoa",
"iclr_2021_7R7fAoUygoa",
"wW5XLve9llz",
"Dn3-o2c5hpa",
"0uLR7_q331c",
"9lknDoIrEpm",
"04PlfCC7pi5",
"iclr_2021_7R7fAoUygoa",
"uOWLuto4noB"
] |
iclr_2021_3jjmdp7Hha | Meta Back-Translation | Back-translation is an effective strategy to improve the performance of Neural Machine Translation (NMT) by generating pseudo-parallel data. However, several recent works have found that better translation quality in the pseudo-parallel data does not necessarily lead to a better final translation model, while lower-quality but diverse data often yields stronger results instead.
In this paper we propose a new way to generate pseudo-parallel data for back-translation that directly optimizes the final model performance. Specifically, we propose a meta-learning framework where the back-translation model learns to match the forward-translation model's gradients on the development data with those on the pseudo-parallel data. In our evaluations in both the standard datasets WMT En-De'14 and WMT En-Fr'14, as well as a multilingual translation setting, our method leads to significant improvements over strong baselines. | poster-presentations | This paper proposes a meta-learning-based technique to learn how to back-translate (generate a synthetic source-language translation of an observed target-language sentence) for the purpose of better optimising a source-to-target translation model.
The approach is an interesting novel angle to jointly training the translation model and the back-translation component. Compared to techniques like UNMT and DualNMT, the approach offers reduced training time and a simpler formulation with fewer trainable components (and fewer hyperparameters).
During the discussion phase, the authors provided additional insight, clarifications, and results that improved our perception of the paper. I would personally appreciate it if the authors would update their paper with the clarifications they made to the points raised by R2, R3, and R4, especially the details about meta-validation, the discussion about memory footprint, and the additional results on UNMT (and variants). | train | [
"0m5NmLzKfQj",
"mCXrEu0Shv5",
"quN69SIwL6K",
"xJNkSBrOSDR",
"CjcN8lbVvvF",
"FMrasOOJVZi",
"piOW1BnmBHd",
"2aRXecuN6Np",
"FcA5kiIgF3N",
"tg7gdd2n3_E",
"5z7TfVWMRe",
"oOOJaK7K6UO",
"hK_IDXiM07",
"x1IcJIMoO4d",
"wizsFpQYH8L"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thanks you for trying this experiment. I'm curious to see the results. \n\nArtetxe et. al, use denoising objective since they assume a small amount of parallel data (10K - 100K) available. I think it's not necessary to use denoising objective in this experiment if you use full WMT14 parallel data. Using denoising ... | [
-1,
-1,
7,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
5,
4
] | [
"mCXrEu0Shv5",
"xJNkSBrOSDR",
"iclr_2021_3jjmdp7Hha",
"CjcN8lbVvvF",
"FMrasOOJVZi",
"FcA5kiIgF3N",
"iclr_2021_3jjmdp7Hha",
"iclr_2021_3jjmdp7Hha",
"quN69SIwL6K",
"quN69SIwL6K",
"2aRXecuN6Np",
"wizsFpQYH8L",
"x1IcJIMoO4d",
"iclr_2021_3jjmdp7Hha",
"iclr_2021_3jjmdp7Hha"
] |
iclr_2021_nkIDwI6oO4_ | Learning A Minimax Optimizer: A Pilot Study | Solving continuous minimax optimization is of extensive practical interest, yet notoriously unstable and difficult. This paper introduces the learning to optimize (L2O) methodology to minimax problems for the first time and addresses its accompanying unique challenges. We first present Twin-L2O, the first dedicated minimax L2O method consisting of two LSTMs for updating min and max variables separately. The decoupled design is found to facilitate learning, particularly when the min and max variables are highly asymmetric. Empirical experiments on a variety of minimax problems corroborate the effectiveness of Twin-L2O. We then discuss a crucial concern of Twin-L2O, i.e., its inevitably limited generalizability to unseen optimizees. To address this issue, we present two complementary strategies. Our first solution, Enhanced Twin-L2O, is empirically applicable for general minimax problems, by improving L2O training via leveraging curriculum learning. Our second alternative, called Safeguarded Twin-L2O, is a preliminary theoretical exploration stating that under some strong assumptions, it is possible to theoretically establish the convergence of Twin-L2O. We benchmark our algorithms on several testbed problems and compare against state-of-the-art minimax solvers. The code is available at: https://github.com/VITA-Group/L2O-Minimax. | poster-presentations | The paper proposed Twin-L2O (learning to optimize) for extending L2O from minimization to minimax problems. The authors honestly discussed the limitation of Twin-L2O and proposed two improvements upon it with better generalization/transferability. While some reviewers had concerns about the motivation for applying L2O to solve minimax problems and the motivation for the loss-function design (why an objective-based one is chosen rather than a gradient-based one), the authors have done a particularly good job in the rebuttal. Even though this is more of a proof-of-concept paper, it indeed has novel and solid contributions, and should be accepted for publication. | train | [
"J0meEXK-iAy",
"QCY86BsO2jb",
"GjYwFiDYZ-Q",
"6i8J5cqg0DY",
"NtvKdU-MLi_",
"ZcU7uRj_Kjv",
"IgfYHC23QbM",
"6_G9EX_KaLK",
"NUVo-9JgVy",
"vkSrM-RES6h",
"VYRLpRuF0Dj",
"tpSLiXGty99",
"UMM3MROjCp8",
"oshU3tHJKAH",
"koBe7DwJVmi",
"XQoqbxsWkL7"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for the positive review and constructive feedback.\n\n### Q1: Comparison of running clock time.\n\nWe first thank the reviewer for his/her appreciation of the superior performance of Twin-L2O over carefully tuned analytical algorithms. It is a great suggestion to also compare the running cloc... | [
-1,
-1,
6,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"XQoqbxsWkL7",
"6_G9EX_KaLK",
"iclr_2021_nkIDwI6oO4_",
"J0meEXK-iAy",
"iclr_2021_nkIDwI6oO4_",
"vkSrM-RES6h",
"VYRLpRuF0Dj",
"NUVo-9JgVy",
"tpSLiXGty99",
"NtvKdU-MLi_",
"koBe7DwJVmi",
"GjYwFiDYZ-Q",
"GjYwFiDYZ-Q",
"GjYwFiDYZ-Q",
"iclr_2021_nkIDwI6oO4_",
"iclr_2021_nkIDwI6oO4_"
] |
iclr_2021_ajOrOhQOsYx | A Wigner-Eckart Theorem for Group Equivariant Convolution Kernels | Group equivariant convolutional networks (GCNNs) endow classical convolutional networks with additional symmetry priors, which can lead to a considerably improved performance. Recent advances in the theoretical description of GCNNs revealed that such models can generally be understood as performing convolutions with G-steerable kernels, that is, kernels that satisfy an equivariance constraint themselves. While the G-steerability constraint has been derived, it has to date only been solved for specific use cases - a general characterization of G-steerable kernel spaces is still missing. This work provides such a characterization for the practically relevant case of G being any compact group. Our investigation is motivated by a striking analogy between the constraints underlying steerable kernels on the one hand and spherical tensor operators from quantum mechanics on the other hand. By generalizing the famous Wigner-Eckart theorem for spherical tensor operators, we prove that steerable kernel spaces are fully understood and parameterized in terms of 1) generalized reduced matrix elements, 2) Clebsch-Gordan coefficients, and 3) harmonic basis functions on homogeneous spaces. | poster-presentations | Reviewers generally agree that the main result of the paper, which generalizes the classical Wigner-Eckart Theorem and provides a basis for the space of G-steerable kernels for any compact group G, is a significant result. There are also several concerns
that need to be addressed. R4 notes that the use of the Dirac delta function (e.g. Theorem C.7) is informal and mathematically imprecise and needs to be fixed. R1 notes that it would be helpful to at least describe how this general formulation can be applied in machine learning.
Presentation and accessibility: the current version of the paper will be accessible to only a small part of the machine learning audience, i.e. those already with advanced knowledge in mathematics and/or theoretical physics, in particular in representation theory. If the authors aim to make it more accessible, the writing would need to be substantially improved. | train | [
"Z0qt_Wp_kMK",
"63NlHIhr2bL",
"Xf7Oz82_YLk",
"C3C5cWHwMBb",
"KqNz2szwA9Q",
"11LDaPyibZk",
"A4md1U3-RKb",
"SxdbfEU11Vr",
"I5n5q2DzbR"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"\nThe authors prove a theorem (thm 4.1) which describes a basis for the space of kernels in a G-steerable CNN for any compact group G. Steerable CNNs are similar to CNNs but replace channels with G-reps and enforce an equivariance constraint on the kernels. Though Cohen et al 2019 state the constraint, and Cohen... | [
8,
-1,
-1,
-1,
-1,
-1,
6,
8,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
3,
4,
1
] | [
"iclr_2021_ajOrOhQOsYx",
"Z0qt_Wp_kMK",
"A4md1U3-RKb",
"63NlHIhr2bL",
"SxdbfEU11Vr",
"I5n5q2DzbR",
"iclr_2021_ajOrOhQOsYx",
"iclr_2021_ajOrOhQOsYx",
"iclr_2021_ajOrOhQOsYx"
] |
iclr_2021_enoVQWLsfyL | Viewmaker Networks: Learning Views for Unsupervised Representation Learning | Many recent methods for unsupervised representation learning train models to be invariant to different "views," or distorted versions of an input. However, designing these views requires considerable trial and error by human experts, hindering widespread adoption of unsupervised representation learning methods across domains and modalities. To address this, we propose viewmaker networks: generative models that learn to produce useful views from a given input. Viewmakers are stochastic bounded adversaries: they produce views by generating and then adding an ℓp-bounded perturbation to the input, and are trained adversarially with respect to the main encoder network. Remarkably, when pretraining on CIFAR-10, our learned views enable comparable transfer accuracy to the well-tuned SimCLR augmentations---despite not including transformations like cropping or color jitter. Furthermore, our learned views significantly outperform baseline augmentations on speech recordings (+9 points on average) and wearable sensor data (+17 points on average). Viewmaker views can also be combined with handcrafted views: they improve robustness to common image corruptions and can increase transfer performance in cases where handcrafted views are less explored. These results suggest that viewmakers may provide a path towards more general representation learning algorithms---reducing the domain expertise and effort needed to pretrain on a much wider set of domains. Code is available at https://github.com/alextamkin/viewmaker. | poster-presentations | A new generative model that produces the views needed for contrastive learning. I like the fact that multiple modalities were considered and evaluated. The Viewmaker method appears to do well on CIFAR-10 and outperforms baselines on the speech and wearable domains. The reviewers praise the method for being simple, well-described and well-motivated. The main drawbacks stem from the fact that the viewmaker cannot produce certain types of image-specific augmentations (crop & rescale, as an example), but the authors fairly argue that their method is more domain-agnostic; and one can indeed add more domain-specific augmentations if needed.
All in all, this seems like a solid paper with an easy to implement idea that is quite general and that has been shown to work in a variety of settings. It definitely belongs at ICLR. | train | [
"Cb1RCFPXUTQ",
"J75qCmhqDM",
"La_Vq9JpBN",
"6SiT6uHtJ4S",
"hf15Cx_vPwm",
"5oGNvk_rg9e",
"-c3Zi91TLdF",
"Z7Eb6CmwEGr",
"gI17l27mIZJ",
"UB8tOxZmOI",
"miZx15ds7RR"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"public",
"official_reviewer"
] | [
"**[Summary]**\nThe authors proposed the Viewmaker, which learns to generate augmentation for contrastive learning. They show that the method achieves comparable results when applied for CIFAR-10, but significantly outperformed baseline augmentations in the speech domain and wearable sensor domain. \n\n**[Reason fo... | [
7,
-1,
6,
-1,
-1,
-1,
-1,
-1,
7,
-1,
6
] | [
3,
-1,
3,
-1,
-1,
-1,
-1,
-1,
4,
-1,
3
] | [
"iclr_2021_enoVQWLsfyL",
"5oGNvk_rg9e",
"iclr_2021_enoVQWLsfyL",
"UB8tOxZmOI",
"La_Vq9JpBN",
"miZx15ds7RR",
"Cb1RCFPXUTQ",
"gI17l27mIZJ",
"iclr_2021_enoVQWLsfyL",
"iclr_2021_enoVQWLsfyL",
"iclr_2021_enoVQWLsfyL"
] |
iclr_2021_23ZjUGpjcc | Scalable Transfer Learning with Expert Models | Transfer of pre-trained representations can improve sample efficiency and reduce computational requirements for new tasks. However, representations used for transfer are usually generic, and are not tailored to a particular distribution of downstream tasks. We explore the use of expert representations for transfer with a simple, yet effective, strategy. We train a diverse set of experts by exploiting existing label structures, and use cheap-to-compute performance proxies to select the relevant expert for each target task. This strategy scales the process of transferring to new tasks, since it does not revisit the pre-training data during transfer. Accordingly, it requires little extra compute per target task, and results in a speed-up of 2-3 orders of magnitude compared to competing approaches. Further, we provide an adapter-based architecture able to compress many experts into a single model. We evaluate our approach on two different data sources and demonstrate that it outperforms baselines on over 20 diverse vision tasks in both cases. | poster-presentations | The presented idea is aligned with past work using multiple experts or multiple sources for transfer. However, it is positioned uniquely and cleverly in that the approach is developed with scalability in mind. Within this setting, the paper is convincing. Although the approach does not come with strong backing theory, it is intuitive and seems to work well. During the discussions phase, the authors have clarified some questions that made the paper convincing, even if it is a relatively heuristic approach. The results are strong if one is concerned with both quantitative performance and efficiency, a combination of objectives very often encountered in practice. Overall, it is expected that this idea can stimulate further research along those lines, especially since this paper is very nice and easy to read. | train | [
"Jk3GsONx6zR",
"DXavS6WCGS8",
"ypCLLngV7bU",
"L9i6PR3H2jx",
"G7nlTpdCBG4",
"XzFaxSvns_P",
"ZJpMsX8zuva",
"6Va1Ze62-3",
"kzAQ6Q0W9v8",
"1drZdlHhpE",
"D2pTcZ0iZo",
"BosXVuWQqZl",
"youy-IT_3y"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Summary:\nThe authors propose a four-stage transfer learning strategy:\n1. pre-train a baseline model on the entire source data\n2. fine tune the baseline model on different parts of the source data (determined by the label hierarchy) to get multiple experts.\n3. for the target task of interest, select the best ex... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
3
] | [
"iclr_2021_23ZjUGpjcc",
"ZJpMsX8zuva",
"6Va1Ze62-3",
"G7nlTpdCBG4",
"XzFaxSvns_P",
"1drZdlHhpE",
"Jk3GsONx6zR",
"D2pTcZ0iZo",
"BosXVuWQqZl",
"youy-IT_3y",
"iclr_2021_23ZjUGpjcc",
"iclr_2021_23ZjUGpjcc",
"iclr_2021_23ZjUGpjcc"
] |
iclr_2021_Ovp8dvB8IBH | Negative Data Augmentation | Data augmentation is often used to enlarge datasets with synthetic samples generated in accordance with the underlying data distribution. To enable a wider range of augmentations, we explore negative data augmentation strategies (NDA) that intentionally create out-of-distribution samples. We show that such negative out-of-distribution samples provide information on the support of the data distribution, and can be leveraged for generative modeling and representation learning. We introduce a new GAN training objective where we use NDA as an additional source of synthetic data for the discriminator. We prove that under suitable conditions, optimizing the resulting objective still recovers the true data distribution but can directly bias the generator towards avoiding samples that lack the desired structure. Empirically, models trained with our method achieve improved conditional/unconditional image generation along with improved anomaly detection capabilities. Further, we incorporate the same negative data augmentation strategy in a contrastive learning framework for self-supervised representation learning on images and videos, achieving improved performance on downstream image classification, object detection, and action recognition tasks. These results suggest that prior knowledge on what does not constitute valid data is an effective form of weak supervision across a range of unsupervised learning tasks. | poster-presentations | All reviewers find the proposed data augmentation approach simple, interesting and effective. They agree that the paper does a good job exploring this idea with a number of experiments. However, the paper also suffers from some drawbacks, and reviewers raise questions about some of the conclusions of the paper - in particular, how to designate an augmentation as either negative or positive is not clear a priori to training. While I agree with this criticism, I believe the paper overall explores an interesting direction and provides a good set of experiments that can be built on in future works, and I suggest acceptance. I encourage the authors to address all the reviewers' concerns as per the feedback in the final version. | train | [
"VIqLBw4ck5J",
"k-TfifrIaM9",
"_SXH79AlCYC",
"ARsbHcQcNud",
"u4rl-tYmEmc",
"6OGRgPQQhF2",
"jLafA3U5D7g",
"0bTLiUex3Ha",
"TU9CP2fh5Ao"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents a method that uses artificial augmentation of data as Negative (aka OOD) samples to improve various computer vision tasks, including generation, unsupervised learning on images and videos.\n\nProns:\n- The paper is very well written.\n- Experiments are comprehensive across different tasks\n- Th... | [
5,
6,
-1,
-1,
-1,
-1,
-1,
9,
7
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2021_Ovp8dvB8IBH",
"iclr_2021_Ovp8dvB8IBH",
"ARsbHcQcNud",
"k-TfifrIaM9",
"TU9CP2fh5Ao",
"VIqLBw4ck5J",
"0bTLiUex3Ha",
"iclr_2021_Ovp8dvB8IBH",
"iclr_2021_Ovp8dvB8IBH"
] |
iclr_2021_JCRblSgs34Z | Fantastic Four: Differentiable and Efficient Bounds on Singular Values of Convolution Layers | In deep neural networks, the spectral norm of the Jacobian of a layer bounds the factor by which the norm of a signal changes during forward/backward propagation. Spectral norm regularizations have been shown to improve generalization, robustness and optimization of deep learning methods. Existing methods to compute the spectral norm of convolution layers either rely on heuristics that are efficient in computation but lack guarantees or are theoretically-sound but computationally expensive. In this work, we obtain the best of both worlds by deriving four provable upper bounds on the spectral norm of a standard 2D multi-channel convolution layer. These bounds are differentiable and can be computed efficiently during training with negligible overhead. One of these bounds is in fact the popular heuristic method of Miyato et al. (multiplied by a constant factor depending on filter sizes). Each of these four bounds can achieve the tightest gap depending on convolution filters. Thus, we propose to use the minimum of these four bounds as a tight, differentiable and efficient upper bound on the spectral norm of convolution layers. Moreover, our spectral bound is an effective regularizer and can be used to bound either the Lipschitz constant or curvature values (eigenvalues of the Hessian) of neural networks. Through experiments on MNIST and CIFAR-10, we demonstrate the effectiveness of our spectral bound in improving generalization and robustness of deep networks. | poster-presentations | The authors provide four rigorous upper bounds on the operator norm of the linear transformation associated with a 2D convolutional layer of a neural network. One of these is a heuristic proposed in earlier work by Miyato et al., and widely used, so, among other things, their result provides theoretical context for that method which will be of broad interest. All four of their bounds can be efficiently computed and have easily computed gradients, so they propose using the minimum of the four bounds for various purposes. Since, for standard architectures, the Lipschitz constant of a network can be bounded above by the product of the operator norms of its layers, there are a variety of applications of differentiable bounds on these operator norms. They show that their new bound is sometimes much tighter than the bound of Miyato et al., and can be computed much more efficiently than two known methods for exact computation. The paper is written well, which will facilitate future work building on this work. The analysis builds on earlier work, but insight was required to obtain the new results; the fundamental novelty of the mathematical development was confirmed by an expert reviewer.
While they experimentally compared the accuracy of their approximations to those of the method of Miyato et al., the case for the practical utility of their method would have been stronger if they had shown that their regularizer led to better results for some tasks. However, I believe that the paper should be accepted purely on the basis of its theoretical contribution, which enhances our understanding of this important topic, and, even if it cannot be directly applied, seems likely to inspire practically useful methods in the future.
| train | [
"ySWUT0Afs4L",
"sesn_dMTB1",
"SoW5ge1Z_pT",
"S6fA0YAG_z",
"-VN0w0VReKX",
"nFnFlNdy6dL",
"DInNW5GmUek",
"n9kSQJOoUVb"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your positive feedback. In the updated version of the paper, we will include a more detailed comparison with previous methods for the robustness application. \n\n",
"1. We followed the experimental practice of Sedghi et al. (published in ICLR 2018). They show that their method results in improvemen... | [
-1,
-1,
-1,
-1,
8,
5,
3,
4
] | [
-1,
-1,
-1,
-1,
5,
4,
4,
4
] | [
"-VN0w0VReKX",
"nFnFlNdy6dL",
"DInNW5GmUek",
"n9kSQJOoUVb",
"iclr_2021_JCRblSgs34Z",
"iclr_2021_JCRblSgs34Z",
"iclr_2021_JCRblSgs34Z",
"iclr_2021_JCRblSgs34Z"
] |
iclr_2021_Ozk9MrX1hvA | CoDA: Contrast-enhanced and Diversity-promoting Data Augmentation for Natural Language Understanding | Data augmentation has been demonstrated as an effective strategy for improving model generalization and data efficiency. However, due to the discrete nature of natural language, designing label-preserving transformations for text data tends to be more challenging. In this paper, we propose a novel data augmentation framework dubbed CoDA, which synthesizes diverse and informative augmented examples by integrating multiple transformations organically. Moreover, a contrastive regularization is introduced to capture the global relationship among all the data samples. A momentum encoder along with a memory bank is further leveraged to better estimate the contrastive loss. To verify the effectiveness of the proposed framework, we apply CoDA to Transformer-based models on a wide range of natural language understanding tasks. On the GLUE benchmark, CoDA gives rise to an average improvement of 2.2% when applied to the Roberta-large model. More importantly, it consistently exhibits stronger results relative to several competitive data augmentation and adversarial training baselines (including in low-resource settings). Extensive experiments show that the proposed contrastive objective can be flexibly combined with various data augmentation approaches to further boost their performance, highlighting the wide applicability of the CoDA framework. | poster-presentations | This paper concerns data augmentation techniques for NLP. In particular, the authors introduce a general augmentation framework they call CoDA and demonstrate its utility on a few benchmark NLP tasks, reporting promising empirical results. The authors addressed some key concerns (e.g., regarding hyperparameters, reporting of variances) during the discussion period. The consensus, then, is that this work provides a useful and relatively general method for augmentation in NLP and the ICLR audience is likely to find this useful. | val | [
"dHx1VhDGzfc",
"7DG7Swl3-9p",
"DcLoJ0iJkmJ",
"MrCHfQgf5gY",
"CggMCWq0Uxf",
"1zZL2xG70pI",
"uslwszAU-EW",
"_7lW_Y77tZ",
"scZsGOYdKGd"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"1. Most of the hyper-parameters are detailed in Appendix C. More specifically, we tune alpha = {0, 0.3, 1}, beta = {0, 0.3, 1, 3}, lambda = {0, 0.01,0.03} in main experiments. The following table shows one of the parameter search results with stack(back, adv) (no contrastive regularization):\n\n| MNLI-m (acc) | al... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"7DG7Swl3-9p",
"1zZL2xG70pI",
"_7lW_Y77tZ",
"CggMCWq0Uxf",
"scZsGOYdKGd",
"uslwszAU-EW",
"iclr_2021_Ozk9MrX1hvA",
"iclr_2021_Ozk9MrX1hvA",
"iclr_2021_Ozk9MrX1hvA"
] |
iclr_2021_4RbdgBh9gE | Teaching with Commentaries | Effective training of deep neural networks can be challenging, and there remain many open questions on how to best learn these models. Recently developed methods to improve neural network training examine teaching: providing learned information during the training process to improve downstream model performance. In this paper, we take steps towards extending the scope of teaching. We propose a flexible teaching framework using commentaries, learned meta-information helpful for training on a particular task. We present gradient-based methods to learn commentaries, leveraging recent work on implicit differentiation for scalability. We explore diverse applications of commentaries, from weighting training examples, to parameterising label-dependent data augmentation policies, to representing attention masks that highlight salient image regions. We find that commentaries can improve training speed and/or performance, and provide insights about the dataset and training process. We also observe that commentaries generalise: they can be reused when training new models to obtain performance benefits, suggesting a use-case where commentaries are stored with a dataset and leveraged in future for improved model training. | poster-presentations | This paper proposes an interesting unified framework for meta-learning with commentaries, which contain information helpful for learning about new tasks or new data points. The authors present three different instantiations, i.e., example weighting, example blending, and attention masks, and show their effectiveness with extensive experiments. The proposed method has the potential to be used for a wide variety of tasks. | train | [
"TNh0tCoKLp",
"OLz07Thydmp",
"xc9qBDYP3-p",
"3CpKq38VAz8",
"D52pR-OGJ-n",
"8t6GPNoqF03",
"BwnIwnpPMML",
"ESHeYoq-Wvd",
"yZOwY3-Wu9B",
"zsOFBP8eQpo",
"4qqqQxtiBja"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a general framework for boosting CNNs performance on different tasks by using'commentary' to learn meta-information. The obtained meta-information can also be used for other purposes such as the mask of objects within spurious background and the similarities among classes. The commentary module... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"iclr_2021_4RbdgBh9gE",
"8t6GPNoqF03",
"ESHeYoq-Wvd",
"yZOwY3-Wu9B",
"4qqqQxtiBja",
"TNh0tCoKLp",
"zsOFBP8eQpo",
"iclr_2021_4RbdgBh9gE",
"iclr_2021_4RbdgBh9gE",
"iclr_2021_4RbdgBh9gE",
"iclr_2021_4RbdgBh9gE"
] |
iclr_2021_UFGEelJkLu5 | MixKD: Towards Efficient Distillation of Large-scale Language Models | Large-scale language models have recently demonstrated impressive empirical performance. Nevertheless, the improved results are attained at the price of bigger models, more power consumption, and slower inference, which hinder their applicability to low-resource (both memory and computation) platforms. Knowledge distillation (KD) has been demonstrated as an effective framework for compressing such big models. However, large-scale neural network systems are prone to memorize training instances, and thus tend to make inconsistent predictions when the data distribution is altered slightly. Moreover, the student model has few opportunities to request useful information from the teacher model when there is limited task-specific data available. To address these issues, we propose MixKD, a data-agnostic distillation framework that leverages mixup, a simple yet efficient data augmentation approach, to endow the resulting model with stronger generalization ability. Concretely, in addition to the original training examples, the student model is encouraged to mimic the teacher's behavior on the linear interpolation of example pairs as well. We prove from a theoretical perspective that under reasonable conditions MixKD gives rise to a smaller gap between the generalization error and the empirical error. To verify its effectiveness, we conduct experiments on the GLUE benchmark, where MixKD consistently leads to significant gains over the standard KD training, and outperforms several competitive baselines. Experiments under a limited-data setting and ablation studies further demonstrate the advantages of the proposed approach. | poster-presentations | This work explores the distillation of language models using MixUp for data augmentation. Distillation with MixUp seems to be novel in the narrow context of distilling language models, although it has been used before in different contexts as the reviewers point out. The results of the experimental validation are encouraging, and the application is valuable and of wide interest to the ICLR audience. I therefore recommend accepting this paper for a poster presentation. | train | [
"LFc_qGo-k4Y",
"EivEpQXvMYd",
"i_NwoErzuu-",
"GMKTsiE0imt",
"f9woLsKTtV",
"8omgKwtWJHP",
"x5tX7E9QkPG",
"zKlq2EpMZY0",
"X19tRXyCoz",
"KNptB6n1M3",
"sIE9ytToVXN",
"pNCx8QMdJso",
"8vuiKX0HF5c",
"bw3PtpRfnDM",
"I-CoWWYfmd6",
"cSaTtU1-YeS",
"SPj2uNmVGqb",
"x6IvMLv-Lx_",
"x9WQNdDhu2e"... | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public",
"author",
"public",
"author",
"public",
"author",
"author",
"public",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Nice paper. The main idea in this paper is to use a specific kind of data augmentation, Mixup (Manifold Mixup), in order to improve the effectiveness of the KD process and obtain better performing student models, especially in cases where not enough data is available on the target dataset and task.\n\nWhile the id... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"iclr_2021_UFGEelJkLu5",
"i_NwoErzuu-",
"x5tX7E9QkPG",
"I-CoWWYfmd6",
"x9WQNdDhu2e",
"_DX0z-z_dsw",
"LFc_qGo-k4Y",
"YzU82qRBZcP",
"KNptB6n1M3",
"sIE9ytToVXN",
"pNCx8QMdJso",
"8vuiKX0HF5c",
"bw3PtpRfnDM",
"cSaTtU1-YeS",
"x6IvMLv-Lx_",
"SPj2uNmVGqb",
"iclr_2021_UFGEelJkLu5",
"iclr_20... |
iclr_2021_N6JECD-PI5w | FairFil: Contrastive Neural Debiasing Method for Pretrained Text Encoders | Pretrained text encoders, such as BERT, have been applied increasingly in various natural language processing (NLP) tasks, and have recently demonstrated significant performance gains. However, recent studies have demonstrated the existence of social bias in these pretrained NLP models. Although prior works have made progress on word-level debiasing, improved sentence-level fairness of pretrained encoders still lacks exploration. In this paper, we propose the first neural debiasing method for a pretrained sentence encoder, which transforms the pretrained encoder outputs into debiased representations via a fair filter (FairFil) network. To learn the FairFil, we introduce a contrastive learning framework that not only minimizes the correlation between filtered embeddings and bias words but also preserves rich semantic information of the original sentences. On real-world datasets, our FairFil effectively reduces the bias degree of pretrained text encoders, while continuously showing desirable performance on downstream tasks. Moreover, our post hoc method does not require any retraining of the text encoders, further enlarging FairFil's application space. | poster-presentations | The paper presents a fair filter network for mitigating bias in sentence encoders via contrastive learning. The approach reduces the bias in the embeddings while preserving the semantic information of the original sentences.
Overall, all the reviewers agree that the paper is interesting and the experiments are convincing. In particular, the proposed approach is conceptually simple and effective.
One suggestion is that the model only considers fairness metrics based on the similarity between sentence embeddings; however, it would be better to investigate how the "debiased embeddings" help to reduce bias in more advanced downstream NLP applications such as coreference resolution, in which researchers have demonstrated that bias in the underlying representation causes bias in the downstream model predictions. | train | [
"XtqbsDdHe2_",
"Cu1xY4pek21",
"brO_hT8kgtl",
"FcuUgtiSg4V",
"8iRM2tiwNLh",
"ol-6lYPgjkc",
"tEknSDd6Qv",
"pOIfSPcnUY"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper introduces a novel technique for debiasing pretrained contextual embedding models. Their approach trains a 2 layer fully-connected neural network which takes as input the output from the pretrained model and outputs a new, \"debiased\" representation. This model is trained by minimizing the InfoNCE betw... | [
6,
-1,
-1,
-1,
-1,
7,
6,
7
] | [
4,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"iclr_2021_N6JECD-PI5w",
"ol-6lYPgjkc",
"tEknSDd6Qv",
"XtqbsDdHe2_",
"pOIfSPcnUY",
"iclr_2021_N6JECD-PI5w",
"iclr_2021_N6JECD-PI5w",
"iclr_2021_N6JECD-PI5w"
] |
iclr_2021_T1XmO8ScKim | Probabilistic Numeric Convolutional Neural Networks | Continuous input signals like images and time series that are irregularly sampled or have missing values are challenging for existing deep learning methods. Coherently defined feature representations must depend on the values in unobserved regions of the input. Drawing from the work in probabilistic numerics, we propose Probabilistic Numeric Convolutional Neural Networks which represent features as Gaussian processes, providing a probabilistic description of discretization error. We then define a convolutional layer as the evolution of a PDE defined on this GP, followed by a nonlinearity. This approach also naturally admits steerable equivariant convolutions under e.g. the rotation group. In experiments we show that our approach yields a 3× reduction of error from the previous state of the art on the SuperPixel-MNIST dataset and competitive performance on the medical time series dataset PhysioNet2012. | poster-presentations | This is a fairly technical paper bridging deep learning with uncertainty propagation in computations (i.e. probabilistic numerics). It is well structured, but it could benefit from further improvements in readability given that there are only very few researchers that are experts in all sub-domains associated with this work. Given the above, as well as low overall confidence by the reviewers, I attempted a more thorough reading of the paper (even if not an expert myself), and I was also happy to see that the discussion clarified important points. Overall, the idea is novel, convincing and seems well executed, with good results. The technical advancements needed to make the idea work are fairly complicated and are appreciated as contributions, because they are expected to be useful in other applications too (beyond irregular sampled data) where uncertainty propagation matters. | val | [
"suIt9BqTLEh",
"AlgdBQxlhh9",
"w9bAsD1Qy8",
"BExcBssXjAr",
"8ZIrnYSgdP",
"R5DC7PUFJ5p",
"Ui7KPwMf08U",
"qHYyjuKlM3O",
"Kl6Ds3MPRZA"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Summary: This work presents an uncertainty aware continuous convolutional layers for learning from continuous signals like time series/images. This work is most useful in the setting of irregularly sampled data. Gaussian processes (GP) are used to represent the irregularly sampled input. The proposed continuous co... | [
6,
-1,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
3,
1,
3
] | [
"iclr_2021_T1XmO8ScKim",
"w9bAsD1Qy8",
"Kl6Ds3MPRZA",
"suIt9BqTLEh",
"Ui7KPwMf08U",
"qHYyjuKlM3O",
"iclr_2021_T1XmO8ScKim",
"iclr_2021_T1XmO8ScKim",
"iclr_2021_T1XmO8ScKim"
] |
iclr_2021_hkMoYYEkBoI | Computational Separation Between Convolutional and Fully-Connected Networks | Convolutional neural networks (CNN) exhibit unmatched performance in a multitude of computer vision tasks. However, the advantage of using convolutional networks over fully-connected networks is not understood from a theoretical perspective. In this work, we show how convolutional networks can leverage locality in the data, and thus achieve a computational advantage over fully-connected networks. Specifically, we show a class of problems that can be efficiently solved using convolutional networks trained with gradient-descent, but at the same time is hard to learn using a polynomial-size fully-connected network. | poster-presentations | This paper aims at answering an interesting question that puzzles the whole community of deep learning: why CNNs perform better than FCNs? The authors show that CNNs can solve the k-pattern problem much more efficiently than FCNs, which partially contributes to the answer of the question.
Pros:
1. Studies an interesting question on DNNs.
2. Constructs a specific problem, the k-pattern problem, that CNNs can solve much more efficiently than FCNs.
Cons:
1. The analysis is only a very limited answer to the question. It only shows that CNNs are more efficient than FCNs on a very specific problem, which in itself is of little interest to the community. On the one hand, people want to see the advantage of CNNs on more common problems, perhaps the image recognition problem (The AC understands that analyzing this problem is nearly impossible; it is just a hint at the choice of problems to analyze). On the other hand, maybe others can find another specific problem that FCNs can solve much more efficiently than CNNs. If so, the value of this paper will be totally gone. The authors did not exclude such a possibility (Nonetheless, it is still a computational "separation" between CNNs and FCNs :)).
2. Reviewer #4 pointed out an issue in the proof. The response from the authors, though it looked promising, did not fully convince the reviewer (in the confidential comment). Reviewer #3 also raised a question on the bounded stepsize. The authors should address both issues.
Overall, since the problem studied is of great interest to the community and the analysis is mostly sound, the AC recommended acceptance. | train | [
"5lOSoGfIH8",
"8hknhywnytM",
"N_xURoTPeym",
"1Ft21lXosD",
"N0E4Pz8alrW",
"pgX18LkPGD",
"S6LaXnha1rN",
"QPzAgFN8FMy"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Summary:\n\nThe paper studies the problem of computational separation between two-layer convolutional neural network (CNN) and fully-connected neural network (FCN). It shows that there is a class of function, which is defined in the paper as k-pattern, such that CNN could learn this class within polynomial time wi... | [
6,
-1,
-1,
-1,
-1,
8,
8,
5
] | [
4,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"iclr_2021_hkMoYYEkBoI",
"pgX18LkPGD",
"S6LaXnha1rN",
"5lOSoGfIH8",
"QPzAgFN8FMy",
"iclr_2021_hkMoYYEkBoI",
"iclr_2021_hkMoYYEkBoI",
"iclr_2021_hkMoYYEkBoI"
] |
iclr_2021_nzpLWnVAyah | On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines | Fine-tuning pre-trained transformer-based language models such as BERT has become a common practice dominating leaderboards across various NLP benchmarks. Despite the strong empirical performance of fine-tuned models, fine-tuning is an unstable process: training the same model with multiple random seeds can result in a large variance of the task performance. Previous literature (Devlin et al., 2019; Lee et al., 2020; Dodge et al., 2020) identified two potential reasons for the observed instability: catastrophic forgetting and small size of the fine-tuning datasets. In this paper, we show that both hypotheses fail to explain the fine-tuning instability. We analyze BERT, RoBERTa, and ALBERT, fine-tuned on commonly used datasets from the GLUE benchmark, and show that the observed instability is caused by optimization difficulties that lead to vanishing gradients. Additionally, we show that the remaining variance of the downstream task performance can be attributed to differences in generalization where fine-tuned models with the same training loss exhibit noticeably different test performance. Based on our analysis, we present a simple but strong baseline that makes fine-tuning BERT-based models significantly more stable than the previously proposed approaches. Code to reproduce our results is available online: https://github.com/uds-lsv/bert-stable-fine-tuning. | poster-presentations | This paper identifies the causal factors behind a major known issue in deep learning for NLP: Fine-tuning models on small datasets after self-supervised pretraining can be extremely unstable, with models needing dozens of restarts to achieve acceptable performance in some cases. The paper then introduces a simple suggested fix.
Pros:
- The motivating problem is important: A large fraction of all computing time used on language-understanding tasks involves fine-tuning runs under the protocol studied here, and the problem of fine-tuning self-supervised models should be of broader interest at ICLR.
- The proposed fix is simple and well-demonstrated. It consists of only an adjustment to the range of values considered in hyperparameter tuning (which is significant, since BERT and related papers *explicitly advise* users to use inappropriate values) and an adjustment to the implementation of the optimizer.
Cons:
- The method is demonstrated on a relatively small set of difficult text-classification datasets, so the behavior studied here may be different in very different dataset size, task difficulty, or label entropy regimes.
This paper was divisive, so I gave it a fairly close look myself, and I'm persuaded by R1 and the other two positive reviewers: This is a classic example of a 'strong baselines paper', in that it demonstrates that a more careful use of established methods can obviate the need for additional tricks.
R3 raised two major concerns that they presented as potentially fatal, but that I find unpersuasive.
- This paper studies stability in model performance, not stability in predictions on individual data points. R3 argues that the latter sense of stability is the more important problem. Stability is an ambiguous term in this context, and both versions of the problem are interesting. However, as the authors pointed out, the definition of stability that is used here is consistent with previous work, and is widely accepted to be a major practical problem in NLP. I don't think this is a weakness of this paper, rather, it's an opportunity for someone else to write another, different paper on a different problem.
- R3 claims that the results are described as being more positive than they actually are, and the figure is potentially misleading. Looking at the quantitative results with R3's points in mind, I still see clear support for both of the paper's main suggestions. R3 opened up some potentially important questions about the handling of outliers in particular, but these questions were raised too late for the authors to be allowed to respond, and I don't see any evidence in the paper that anything improper was done. The marked outliers are clearly much farther from the mean/median in terms of standard deviations than the unmarked outliers. So, I don't see any evidence that these concerns reflect real methodological problems. | train | [
"foyAUHbCnmu",
"KTv0sXdOgJh",
"hpe3B0IIfG1",
"EQduW8tagj",
"oBeISngZ0c",
"QZZiAFdLpQ",
"JaY7QfNW79",
"IYAT-AwIR4Q"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"################################ \n\nSummary:\n\nThis paper considers the stability of fine-tuning BERT-LARGE models, with considerations for RoBERTa and ALBERT. In particular, it aims to demonstrate that previously identified reasons, catastrophic forgetting and small fine-tuning datasets, fail to explain the obs... | [
6,
-1,
-1,
-1,
-1,
6,
8,
4
] | [
3,
-1,
-1,
-1,
-1,
3,
4,
5
] | [
"iclr_2021_nzpLWnVAyah",
"foyAUHbCnmu",
"IYAT-AwIR4Q",
"QZZiAFdLpQ",
"JaY7QfNW79",
"iclr_2021_nzpLWnVAyah",
"iclr_2021_nzpLWnVAyah",
"iclr_2021_nzpLWnVAyah"
] |
iclr_2021_kvhzKz-_DMF | Variational Information Bottleneck for Effective Low-Resource Fine-Tuning | While large-scale pretrained language models have obtained impressive results when fine-tuned on a wide variety of tasks, they still often suffer from overfitting in low-resource scenarios. Since such models are general-purpose feature extractors, many of these features are inevitably irrelevant for a given target task. We propose to use Variational Information Bottleneck (VIB) to suppress irrelevant features when fine-tuning on low-resource target tasks, and show that our method successfully reduces overfitting. Moreover, we show that our VIB model finds sentence representations that are more robust to biases in natural language inference datasets, and thereby obtains better generalization to out-of-domain datasets. Evaluation on seven low-resource datasets in different tasks shows that our method significantly improves transfer learning in low-resource scenarios, surpassing prior work. Moreover, it improves generalization on 13 out of 15 out-of-domain natural language inference benchmarks. Our code is publicly available in https://github.com/rabeehk/vibert. | poster-presentations | The paper shows the success of a relatively simple idea -- fine tune a pretrained BERT Model using Variational Information Bottleneck method of Alemi to improve transfer learning in low resource scenarios.
I agree with the reviewers that novelty is low -- one would like to use any applicable method for controlling overfitting when doing transfer learning, and of the suite of good candidates, VIB is an obvious one -- but at the same time, I'm moved by the results because of the improvements and the success on a wide range of tasks, and the surprising success of VIB over other alternatives like dropout, etc., and hence I'm breaking the tie in the reviews by supporting acceptance. It's a nice trick that the community could use, if the results of the paper are an indication of its potential. | train | [
"sSbPLswLVMs",
"dyV_chrp-P",
"DYBxebFtAxy",
"5iblQpT36n4",
"4jBNLlDlwdQ",
"fjHLmKF6hK2",
"xK-Kp1WtSsl",
"R5uNbQ3CRx",
"QFlyPRIvuiw",
"MQZkybuGqSp"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for the response, especially regarding the random seeds.\nRegarding the IB Curve, it does make sense to just use the CE-Loss in context of targeting a more broad audience.\n\nRegarding the novelty:\n\nI acknowledge that the proposed method of using a VIB in this specific setting has not been done before,... | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
3
] | [
"dyV_chrp-P",
"xK-Kp1WtSsl",
"R5uNbQ3CRx",
"QFlyPRIvuiw",
"MQZkybuGqSp",
"iclr_2021_kvhzKz-_DMF",
"iclr_2021_kvhzKz-_DMF",
"iclr_2021_kvhzKz-_DMF",
"iclr_2021_kvhzKz-_DMF",
"iclr_2021_kvhzKz-_DMF"
] |
iclr_2021_01olnfLIbD | Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching | Data Poisoning attacks modify training data to maliciously control a model trained on such data.
In this work, we focus on targeted poisoning attacks which cause a reclassification of an unmodified test image and as such breach model integrity. We consider a
particularly malicious poisoning attack that is both ``from scratch" and ``clean label", meaning we analyze an attack that successfully works against new, randomly initialized models, and is nearly imperceptible to humans, all while perturbing only a small fraction of the training data.
Previous poisoning attacks against deep neural networks in this setting have been limited in scope and success, working only in simplified settings or being prohibitively expensive for large datasets.
The central mechanism of the new attack is matching the gradient direction of malicious examples. We analyze why this works, supplement it with practical considerations, and show its threat to real-world practitioners, finding that it is the first poisoning method to cause targeted misclassification in modern deep networks trained from scratch on a full-sized, poisoned ImageNet dataset.
Finally we demonstrate the limitations of existing defensive strategies against such an attack, concluding that data poisoning is a credible threat, even for large-scale deep learning systems. | poster-presentations | The paper presents a scalable data poisoning algorithm for targeted attacks, using the idea of designing poisoning patterns which "align" the gradients of the real objective and the adversarial objective. This intuition is supported by theoretical results, and the paper presents convincing experimental results about the effectiveness of the model.
The reviewers overall liked the paper. However, they requested a number of clarifications and some additional work, which should be incorporated in the final version (however, the authors are not required to use the wording as poison integrity/ poison availability). In particular, it would be great to see the experiment the authors suggested in their response to Reviewer 2 about the effectiveness of their method for multiple targets (this is important to better understand the limitations of the proposed approach). | test | [
"vhPesayC7X9",
"5u-8sh4dQh3",
"nHg4LNDJata",
"Jyx88gL6pci",
"8S8UNSdyvc3",
"qaG09SkNDSJ",
"cN9U40kfsM",
"A389m8x3HYC",
"JcUtpsosZzr",
"yu44fjLZro",
"SfrDchiFuvp",
"vfl_sgowWkJ",
"Zy418x0vOS",
"38h-AaEjjBd"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"## Summary\n- The paper proposes a novel data poisoning attack i.e., to perturb a small fraction of images in the victim's training dataset so as to cause targeted misclassification on certain examples at inference time.\n- The proposed approach works by perturbing the clean poison set to introduce a gradient dire... | [
7,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
3,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2021_01olnfLIbD",
"nHg4LNDJata",
"8S8UNSdyvc3",
"cN9U40kfsM",
"iclr_2021_01olnfLIbD",
"JcUtpsosZzr",
"8S8UNSdyvc3",
"Zy418x0vOS",
"38h-AaEjjBd",
"vhPesayC7X9",
"8S8UNSdyvc3",
"iclr_2021_01olnfLIbD",
"iclr_2021_01olnfLIbD",
"iclr_2021_01olnfLIbD"
] |
iclr_2021_XPZIaotutsD | DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION | Recent progress in pre-trained neural language models has significantly improved the performance of many natural language processing (NLP) tasks. In this paper we propose a new model architecture DeBERTa (Decoding-enhanced BERT with disentangled attention) that improves the BERT and RoBERTa models using two novel techniques. The first is the disentangled attention mechanism, where each word is represented using two vectors that encode its content and position, respectively, and the attention weights among words are computed using disentangled matrices on their contents and relative positions, respectively. Second, an enhanced mask decoder is used to incorporate absolute positions in the decoding layer to predict the masked tokens in model pre-training. In addition, a new virtual adversarial training method is used for fine-tuning to improve models’
generalization. We show that these techniques significantly improve the efficiency of model pre-training and the performance of both natural language understanding (NLU) and natural language generation (NLG) downstream tasks. Compared to RoBERTa-Large, a DeBERTa model trained on half of the training data performs consistently better on a wide range of NLP tasks, achieving improvements on MNLI by +0.9% (90.2% vs. 91.1%), on SQuAD v2.0 by +2.3% (88.4% vs. 90.7%) and RACE by +3.6% (83.2% vs. 86.8%). Notably, we scale up DeBERTa by training a larger version that consists of 48 Transformer layers with 1.5 billion parameters. The significant performance boost makes the single DeBERTa model surpass the human performance on the SuperGLUE benchmark (Wang et al., 2019a) for the first time in terms of macro-average score (89.9 versus 89.8), and the ensemble DeBERTa model sits atop the SuperGLUE leaderboard as of January 6, 2021, outperforming the human baseline by a decent margin (90.3 versus
89.8). The pre-trained DeBERTa models and the source code were released at: https://github.com/microsoft/DeBERTa.
| poster-presentations | All reviewers gave, though not very strong, positive scores for this work. Although the technical contribution of the paper is somewhat incremental, the reviewers agree that it solidly addresses the known important issues in BERT, and the experiments are extensive enough to demonstrate the empirical effectiveness of the method. The main concerns raised by the reviewers are regarding the novelty and the discussion with respect to related work as well as some unclear writings in the detail, but I think the pros outweigh the cons and thus would like to recommend acceptance of the paper.
We do encourage the authors to properly take the reviewers' comments into account and further polish the paper in the final version.
| train | [
"a5oUDreFvx",
"i5sVBc3PWrr",
"TgVEle3IJA_",
"yIugoHlwxmk",
"tAB4kErmoV",
"tM7jxs-mDwp",
"WYwTmQDzGb3",
"hbRWP5lM16H"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for the positive review. We provide the answer to the questions and potential concerns. \n\n**Q1**: The disentangled attention in DeBERTa is motivated but not closely related to disentangled representations or features. Unlike the conventional absolute position bias encoding which adds the position embe... | [
-1,
-1,
-1,
-1,
6,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
5,
4,
3,
3
] | [
"WYwTmQDzGb3",
"tAB4kErmoV",
"tM7jxs-mDwp",
"hbRWP5lM16H",
"iclr_2021_XPZIaotutsD",
"iclr_2021_XPZIaotutsD",
"iclr_2021_XPZIaotutsD",
"iclr_2021_XPZIaotutsD"
] |
iclr_2021_CBmJwzneppz | Optimism in Reinforcement Learning with Generalized Linear Function Approximation | We design a new provably efficient algorithm for episodic reinforcement learning with generalized linear function approximation. We analyze the algorithm under a new expressivity assumption that we call ``optimistic closure,'' which is strictly weaker than assumptions from prior analyses for the linear setting. With optimistic closure, we prove that our algorithm enjoys a regret bound of $\tilde{O}(H\sqrt{d^3 T})$ where H is the horizon, d is the dimensionality of the state-action features and T is the number of episodes. This is the first statistically and computationally efficient algorithm for reinforcement learning with generalized linear functions. | poster-presentations | This paper analyzes a version of optimistic value iteration with generalized linear function approximation. Under an optimistic closure assumption, the algorithm is shown to enjoy sublinear regret. The paper also studies error propagation through backups that do not require closed-form characterization of dynamics and reward functions.
Overall, this is a solid contribution and the consensus is to accept. | train | [
"0CNPIETK54n",
"NJpYxFkocwu",
"XKjpLf9jftM",
"GsGrViTfIgp",
"UD-t3bg36lO",
"kwmWMzZwyPE",
"8Kxx8brpwm",
"HLWCfbtxYfY",
"h2bA2wE7iOX",
"-H7ANcZfYi",
"fwYSy-IrnS",
"Z2GWArKY92Z"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Summary After Discussion Period:\n-----------------------------------------------\nAfter corresponding to the authors and reading other reviews, my assessment hasn't changed much, which is that the paper is a good line of research but still needs improvement readability and strictness of assumptions.\n\nThe author... | [
5,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2021_CBmJwzneppz",
"iclr_2021_CBmJwzneppz",
"GsGrViTfIgp",
"-H7ANcZfYi",
"kwmWMzZwyPE",
"8Kxx8brpwm",
"NJpYxFkocwu",
"fwYSy-IrnS",
"Z2GWArKY92Z",
"0CNPIETK54n",
"iclr_2021_CBmJwzneppz",
"iclr_2021_CBmJwzneppz"
] |
iclr_2021_6DOZ8XNNfGN | Graph Traversal with Tensor Functionals: A Meta-Algorithm for Scalable Learning | Graph Representation Learning (GRL) methods have impacted fields from chemistry to social science. However, their algorithmic implementations are specialized to specific use-cases e.g. "message passing" methods are run differently from "node embedding" ones. Despite their apparent differences, all these methods utilize the graph structure, and therefore, their learning can be approximated with stochastic graph traversals. We propose Graph Traversal via Tensor Functionals (GTTF), a unifying meta-algorithm framework for easing the implementation of diverse graph algorithms and enabling transparent and efficient scaling to large graphs. GTTF is founded upon a data structure (stored as a sparse tensor) and a stochastic graph traversal algorithm (described using tensor operations). The algorithm is a functional that accept two functions, and can be specialized to obtain a variety of GRL models and objectives, simply by changing those two functions. We show for a wide class of methods, our algorithm learns in an unbiased fashion and, in expectation, approximates the learning as if the specialized implementations were run directly.
With these capabilities, we scale otherwise non-scalable methods to set state-of-the-art on large graph datasets while being more efficient than existing GRL libraries -- with only a handful of lines of code for each method specialization. | poster-presentations | Summary: The authors propose to approximate operations on graphs, roughly speaking by approximating the graph locally around a collection of vertices by a collection of trees. The method is presented as a meta-algorithm that can be applied to a range of problems in the context of learning graph representations.
Discussion: The reviews are overall positive, though they point out a number of weaknesses. One was unconvincing experimental validation. Another, more conceptual one was that this is a 'unifying framework' rather than a novel method. Additionally, there were a number of minor points that were not clear. However, the authors have provided additional experiments that the reviewers consider convincing, and were able to provide sufficient clarification.
Recommendation: The reviewers' verdict post-discussion favors publication, and I agree. The authors have convincingly addressed the main concerns in discussion, and novelty is not a necessity: Unifying frameworks often seem an end in themselves, but this one is potentially useful and compellingly simple.
| train | [
"YDVV6nZ84Bh",
"qSfQeDT7Mj9",
"cZQnzc5UPzi",
"sGgdgqSbieT",
"P8koz6LiTVU",
"sk4vmcqfgT",
"UtECzlnVJo4",
"jsQLDFrzRul",
"2IP1rklSZoA",
"cdJUv_2sOIy",
"k3XjexqdjnC",
"gheznvXq6WD"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"### Summary\n\nThe authors propose a \"meta-algorithm\" for approximating various graph representation learning schemes: generate batches of random trees with fixed fanout (and possibly biased probabilities of selecting different edges), and use them to accumulate information to approximate operations on the graph... | [
7,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
3,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2021_6DOZ8XNNfGN",
"iclr_2021_6DOZ8XNNfGN",
"sk4vmcqfgT",
"iclr_2021_6DOZ8XNNfGN",
"sk4vmcqfgT",
"UtECzlnVJo4",
"qSfQeDT7Mj9",
"gheznvXq6WD",
"k3XjexqdjnC",
"YDVV6nZ84Bh",
"iclr_2021_6DOZ8XNNfGN",
"iclr_2021_6DOZ8XNNfGN"
] |
iclr_2021_Qm7R_SdqTpT | Diverse Video Generation using a Gaussian Process Trigger | Generating future frames given a few context (or past) frames is a challenging task. It requires modeling the temporal coherence of videos as well as multi-modality in terms of diversity in the potential future states. Current variational approaches for video generation tend to marginalize over multi-modal future outcomes. Instead, we propose to explicitly model the multi-modality in the future outcomes and leverage it to sample diverse futures. Our approach, Diverse Video Generator, uses a GP to learn priors on future states given the past and maintains a probability distribution over possible futures given a particular sample. We leverage the changes in this distribution over time to control the sampling of diverse future states by estimating the end of on-going sequences. In particular, we use the variance of GP over the output function space to trigger a change in the action sequence. We achieve state-of-the-art results on diverse future frame generation in terms of reconstruction quality and diversity of the generated sequences. | poster-presentations | All three reviewers agree on accepting the paper and think that the proposed approach will be of interest for those working in video prediction. The authors are asked to include the extra discussion with R3 as part of the paper and include the proposed changes by R2 to provide more thorough experimentation. The paper is recommended as a poster presentation. | test | [
"Zi0WDzjgDvZ",
"Fbzk6Pnb8r-",
"8NeC706mt60",
"y2r-ipmOqOf",
"6a8gOQ1MT-0",
"_s6FkEfaiat",
"xt6RU6emoaT"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"### SUMMARY\n\nThe authors propose to use a Gaussian Process (GP) to model the uncertainty of future frames in a video prediction setup. In particular, they employ a GP to model the uncertainty of the next step latent in a latent variable model. This allows them to use the GP variance to decide when to change an \... | [
6,
-1,
-1,
-1,
-1,
6,
6
] | [
3,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2021_Qm7R_SdqTpT",
"iclr_2021_Qm7R_SdqTpT",
"Zi0WDzjgDvZ",
"xt6RU6emoaT",
"_s6FkEfaiat",
"iclr_2021_Qm7R_SdqTpT",
"iclr_2021_Qm7R_SdqTpT"
] |
iclr_2021_lqU2cs3Zca | Signatory: differentiable computations of the signature and logsignature transforms, on both CPU and GPU | Signatory is a library for calculating and performing functionality related to the signature and logsignature transforms. The focus is on machine learning, and as such includes features such as CPU parallelism, GPU support, and backpropagation. To our knowledge it is the first GPU-capable library for these operations. Signatory implements new features not available in previous libraries, such as efficient precomputation strategies. Furthermore, several novel algorithmic improvements are introduced, producing substantial real-world speedups even on the CPU without parallelism. The library operates as a Python wrapper around C++, and is compatible with the PyTorch ecosystem. It may be installed directly via \texttt{pip}. Source code, documentation, examples, benchmarks and tests may be found at \texttt{\url{https://github.com/patrick-kidger/signatory}}. The license is Apache-2.0. | poster-presentations | This paper introduces Signatory, a library for computing functionality related to the signature and logsignature transforms. Although a large body of the initial literature on the signature in ML focuses on using it as a feature extractor, more recent works have incorporated it within modern deep learning architectures, hence the importance of having GPU-capable libraries (with automatic differentiation) that implement these transforms. Several algorithmic improvements are incorporated into the library. Some of the computational benefits of this library wrt previous ones are demonstrated empirically.
There were some concerns from the reviewers about accepting library papers at ICLR. Library papers clearly fall into the ICLR CFP and, therefore, library, framework, and platform papers that can be relevant and impactful are welcome contributions to the community. Additionally, more signature-related papers are appearing at mainstream ML venues, hence, despite the poor scalability wrt input dimensions, this paper is definitely relevant.
Perhaps one of the drawbacks of this paper is the lack of a more rigorous empirical evaluation. The authors have added a deep learning benchmark, which is welcome but only on a toy dataset. There are still some concerns about the wide applicability of the signature (and its relatives) given its exponential scaling. That's why applications on more realistic problems will be welcome. At the very least, it would be good if the authors incorporate a separate section discussing the limitations of the signature transform (and the library), especially in terms of computations and scalability.
| train | [
"8czUy-QQTt",
"9GCQLuv6Bz4",
"nk8-HuJI95t",
"Pe-QZjQXuqT",
"Is0DNb2MMg",
"aXQmPBsTYdK",
"VGLfXfo4cLQ",
"zBilBsd2Tt5",
"12s36kMDet",
"ehkndfl_8ZS",
"i8KO4PVSZad",
"FzLzNy2xnD"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"Update:\nThe authors have revised the paper, which helps the presentation somewhat (though headings like \"The Grouplike Structure\" still come at the reader without much context).\n\nThe authors added a more application-oriented benchmark, which makes the more convincing case for practical speedup of 210x.\n\nCer... | [
6,
8,
7,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
3,
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2021_lqU2cs3Zca",
"iclr_2021_lqU2cs3Zca",
"iclr_2021_lqU2cs3Zca",
"iclr_2021_lqU2cs3Zca",
"aXQmPBsTYdK",
"VGLfXfo4cLQ",
"zBilBsd2Tt5",
"12s36kMDet",
"nk8-HuJI95t",
"9GCQLuv6Bz4",
"Pe-QZjQXuqT",
"8czUy-QQTt"
] |
iclr_2021_0-EYBhgw80y | MoPro: Webly Supervised Learning with Momentum Prototypes | We propose a webly-supervised representation learning method that does not suffer from the annotation unscalability of supervised learning, nor the computation unscalability of self-supervised learning. Most existing works on webly-supervised representation learning adopt a vanilla supervised learning method without accounting for the prevalent noise in the training data, whereas most prior methods in learning with label noise are less effective for real-world large-scale noisy data. We propose momentum prototypes (MoPro), a simple contrastive learning method that achieves online label noise correction, out-of-distribution sample removal, and representation learning. MoPro achieves state-of-the-art performance on WebVision, a weakly-labeled noisy dataset. MoPro also shows superior performance when the pretrained model is transferred to down-stream image classification and detection tasks. It outperforms the ImageNet supervised pretrained model by +10.5 on 1-shot classification on VOC, and outperforms the best self-supervised pretrained model by +17.3 when finetuned on 1% of ImageNet labeled samples. Furthermore, MoPro is more robust to distribution shifts. Code and pretrained models are available at https://github.com/salesforce/MoPro. | poster-presentations | This paper provides an approach for weakly supervised learning by label noise correction and OOD sample removal. Overall, all reviewers agree paper is simple and approach makes sense. The experiments are solid with results on Webvision and ImageNet Mini (there were initial concerns but rebuttal handled some of those concerns). AC agrees with reviewers and recommends acceptance.
| train | [
"Xgns85xKmuC",
"JMBqW71toHS",
"Urn2nNactMw",
"oLi0gEXgaql",
"R6atRx9y1Ys",
"VyiyUaSCitJ",
"U3AoWs21-dB",
"35N6re0c_rc"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"To train a model with a noisy weakly supervised training set, this paper proposed a momentum prototypes method for label noise correction and OOD sample removal. Noise correction is done by a heuristic rule, that if the prediction is confident enough or the prediction on original label is higher than uniform proba... | [
6,
-1,
-1,
-1,
-1,
7,
6,
7
] | [
4,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"iclr_2021_0-EYBhgw80y",
"U3AoWs21-dB",
"Xgns85xKmuC",
"VyiyUaSCitJ",
"35N6re0c_rc",
"iclr_2021_0-EYBhgw80y",
"iclr_2021_0-EYBhgw80y",
"iclr_2021_0-EYBhgw80y"
] |
iclr_2021_04cII6MumYV | A Universal Representation Transformer Layer for Few-Shot Image Classification | Few-shot classification aims to recognize unseen classes when presented with only a small number of samples. We consider the problem of multi-domain few-shot image classification, where unseen classes and examples come from diverse data sources. This problem has seen growing interest and has inspired the development of benchmarks such as Meta-Dataset. A key challenge in this multi-domain setting is to effectively integrate the feature representations from the diverse set of training domains. Here, we propose a Universal Representation Transformer (URT) layer, that meta-learns to leverage universal features for few-shot classification by dynamically re-weighting and composing the most appropriate domain-specific representations. In experiments, we show that URT sets a new state-of-the-art result on Meta-Dataset. Specifically, it achieves top-performance on the highest number of data sources compared to competing methods. We analyze variants of URT and present a visualization of the attention score heatmaps that sheds light on how the model performs cross-domain generalization. | poster-presentations | This paper studies the problem of multi-domain few-shot image classification and proposes a Universal Representation Transformer (URT) layer, which leverages universal features by dynamically re-weighting and composing the most appropriate domain-specific representations in a meta-learning way. The paper extends the prior work of SUR [Dvornik et al 2020] by using meta-learning and avoiding additional training during test phase. The experimental results show improvements over SUR in both accuracy (not always significant on some datasets though) and inference efficiency. Overall, the paper is well written with sufficient contributions. After the author's rebuttal and revision, reviewers generally agree the paper can be accepted. I recommend to Accept (Poster). | train | [
"INP0MSRrOsx",
"vChIuirDQAs",
"4PLoNmDS5Mc",
"Z8vn9Sh7UqJ",
"E5qcdIshxNw",
"Q7q9jhSEvjg",
"017sIKm3dpE",
"OLxV71Zl-zE",
"MP7SSSxtuEr",
"X5fspbD-lNp"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"## Summary\n\nThe paper addresses the problem of multi-domain few-shot image classification (where unseen classes and examples come from diverse data sources), and proposes a Universal Representation Transformer (URT) layer, which learns to transform a universal representation into task-adapted representations. Th... | [
7,
6,
6,
-1,
-1,
-1,
-1,
-1,
8,
7
] | [
5,
4,
5,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
"iclr_2021_04cII6MumYV",
"iclr_2021_04cII6MumYV",
"iclr_2021_04cII6MumYV",
"vChIuirDQAs",
"MP7SSSxtuEr",
"X5fspbD-lNp",
"4PLoNmDS5Mc",
"INP0MSRrOsx",
"iclr_2021_04cII6MumYV",
"iclr_2021_04cII6MumYV"
] |
iclr_2021_TtYSU29zgR | Primal Wasserstein Imitation Learning | Imitation Learning (IL) methods seek to match the behavior of an agent with that of an expert. In the present work, we propose a new IL method based on a conceptually simple algorithm: Primal Wasserstein Imitation Learning (PWIL), which ties to the primal form of the Wasserstein distance between the expert and the agent state-action distributions. We present a reward function which is derived offline, as opposed to recent adversarial IL algorithms that learn a reward function through interactions with the environment, and which requires little fine-tuning. We show that we can recover expert behavior on a variety of continuous control tasks of the MuJoCo domain in a sample efficient manner in terms of agent interactions and of expert interactions with the environment. Finally, we show that the behavior of the agent we train matches the behavior of the expert with the Wasserstein distance, rather than the commonly used proxy of performance. | poster-presentations | It is common in imitation learning to measure and minimize the differences between the agent’s and expert’s visitation distributions. This paper proposes using Wasserstein distance for this, named PWIL, by considering the upper bound of its primal form and taking it as the optimization objective. The effectiveness of the approach is demonstrated by an extensive set of experiments.
Overall, reviewers reached general agreement that this paper makes a good contribution to the conference, and given the overall positive reviews, I also recommend accepting the paper.
| train | [
"n-UN6LMCvPR",
"SVN-03vclqS",
"cP-F6SM1lVV",
"NZNAX5rRGP",
"xL79MNN8hxQ",
"8mKnvizVnLC",
"f3gIxvEBnzd",
"xzD-F9c79O"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper develops an imitation-learning (IL) method starting from the primal form of the Wasserstein distance, creating an upper-bound (by replacing optimal coupling with a greedy coupling), and converting that into a practical, scalable algorithm (PWIL). Experiments on the standard MuJoCo tasks and a pixel-based... | [
6,
-1,
-1,
-1,
-1,
6,
8,
6
] | [
4,
-1,
-1,
-1,
-1,
3,
5,
4
] | [
"iclr_2021_TtYSU29zgR",
"8mKnvizVnLC",
"f3gIxvEBnzd",
"xzD-F9c79O",
"n-UN6LMCvPR",
"iclr_2021_TtYSU29zgR",
"iclr_2021_TtYSU29zgR",
"iclr_2021_TtYSU29zgR"
] |
iclr_2021_MIDckA56aD | Learning perturbation sets for robust machine learning | Although much progress has been made towards robust deep learning, a significant gap in robustness remains between real-world perturbations and more narrowly defined sets typically studied in adversarial defenses. In this paper, we aim to bridge this gap by learning perturbation sets from data, in order to characterize real-world effects for robust training and evaluation. Specifically, we use a conditional generator that defines the perturbation set over a constrained region of the latent space. We formulate desirable properties that measure the quality of a learned perturbation set, and theoretically prove that a conditional variational autoencoder naturally satisfies these criteria. Using this framework, our approach can generate a variety of perturbations at different complexities and scales, ranging from baseline spatial transformations, through common image corruptions, to lighting variations. We measure the quality of our learned perturbation sets both quantitatively and qualitatively, finding that our models are capable of producing a diverse set of meaningful perturbations beyond the limited data seen during training. Finally, we leverage our learned perturbation sets to train models which are empirically and certifiably robust to adversarial image corruptions and adversarial lighting variations, while improving generalization on non-adversarial data. All code and configuration files for reproducing the experiments as well as pretrained model weights can be found at https://github.com/locuslab/perturbation_learning. | poster-presentations | The authors propose an approach to learn perturbation sets from data and go beyond the mathematically sound L_p adversarial perturbations towards more realistic real-world perturbations. To measure the quality of the learned perturbation set the authors put forward two specific criteria and prove that an approach based on conditional variational autoencoders (cVAE) can satisfy these criteria. In particular, given access to paired data (instance and its perturbation), the authors train a cVAE which can then be used to generate novel perturbations similar to the ones observed during training. Leveraging this generative model the authors train models which are robust to such perturbations while improving the generalisation performance on clean data.
The studied problem is of high significance and the proposed solution is sufficiently novel. The reviewers agree that the paper presents a significant step in the right direction and will be of interest to the ICLR community. The authors addressed all major concerns raised by the reviewers. In my opinion, given the inherent tradeoff between the two terms in Assumption 1, and the approximation gap due to the design choices of the particular cVAE, I feel that a hard problem was reduced to an almost equally hard problem. Nevertheless, the principled approach coupled with promising empirical results are sufficient to recommend acceptance. I strongly advise the authors to incorporate the remaining reviewer feedback and try to tone down claims such as “certifiably robust” given the issues pointed out above.
| train | [
"MwDdosA8K0A",
"ylfTwYbIIi0",
"dUAVNxMhxLu",
"I2yiUv3BGZL",
"mD63h6cUni1",
"JRTugHSJuNY",
"RcB8W9uVR-W",
"FzC5lJp_pG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"===============================================================================================\n\nPost-author response: I have read the response and am satisfied with the answer. I am leaning more towards accepting this paper.\n\n====================================================================================... | [
6,
5,
6,
-1,
-1,
-1,
-1,
8
] | [
3,
4,
3,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2021_MIDckA56aD",
"iclr_2021_MIDckA56aD",
"iclr_2021_MIDckA56aD",
"ylfTwYbIIi0",
"MwDdosA8K0A",
"dUAVNxMhxLu",
"FzC5lJp_pG",
"iclr_2021_MIDckA56aD"
] |
iclr_2021_XI-OJ5yyse | CopulaGNN: Towards Integrating Representational and Correlational Roles of Graphs in Graph Neural Networks | Graph-structured data are ubiquitous. However, graphs encode diverse types of information and thus play different roles in data representation. In this paper, we distinguish the \textit{representational} and the \textit{correlational} roles played by the graphs in node-level prediction tasks, and we investigate how Graph Neural Network (GNN) models can effectively leverage both types of information. Conceptually, the representational information provides guidance for the model to construct better node features; while the correlational information indicates the correlation between node outcomes conditional on node features. Through a simulation study, we find that many popular GNN models are incapable of effectively utilizing the correlational information. By leveraging the idea of the copula, a principled way to describe the dependence among multivariate random variables, we offer a general solution. The proposed Copula Graph Neural Network (CopulaGNN) can take a wide range of GNN models as base models and utilize both representational and correlational information stored in the graphs. Experimental results on two types of regression tasks verify the effectiveness of the proposed method. | poster-presentations | Three referees support accept and one indicates reject. The issues pointed out by the reviewer who proposed rejection should be properly reflected in the final version.
First, regarding the synthetic experiment that illustrates the shortcomings of the existing GNN models, three reviewers, including myself, found it quite interesting. However, note the opinion of one reviewer that it would be more appropriate to separate the influence of the feature x and the graph structure in the label generation method, so that each contributes to label generation independently. This part should be better justified in the final version.
In addition, it was pointed out that the expressive power of the model may be limited depending on the parameterization of the precision matrix, and that inference may be at a disadvantage because it is a copula-based probabilistic model. I think this characteristic is actually a fundamental limitation of the proposed method. However, three reviewers, including myself, thought that it was an interesting framework that can complement the message-passing architecture, and decided that the potential of the proposed method was worth publishing. However, in order to reinforce this argument a little more, it would be better if the final version verifies it with more diverse GNN architectures and datasets. | train | [
"EhnSqG24jgo",
"pqvM9HyaAx",
"i_cKgAmVc4y",
"WqVbaeA-7E",
"LCx3IoRE8jS",
"fV-S2FHYm7",
"hDy820MO6fD",
"sHfHVbbomWr"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"##########################################################################\n\nSummary:\n\nThe paper presents a new model based on the Graphical Neural Network (GNN). The proposed model adopts probability distributions called copulas and is called the Copula Graphical Neural Network (CopulaGNN). Two parametrization... | [
7,
7,
-1,
-1,
-1,
-1,
5,
7
] | [
3,
4,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2021_XI-OJ5yyse",
"iclr_2021_XI-OJ5yyse",
"hDy820MO6fD",
"EhnSqG24jgo",
"sHfHVbbomWr",
"pqvM9HyaAx",
"iclr_2021_XI-OJ5yyse",
"iclr_2021_XI-OJ5yyse"
] |
iclr_2021_8Ln-Bq0mZcy | On the Critical Role of Conventions in Adaptive Human-AI Collaboration | Humans can quickly adapt to new partners in collaborative tasks (e.g. playing basketball), because they understand which fundamental skills of the task (e.g. how to dribble, how to shoot) carry over across new partners. Humans can also quickly adapt to similar tasks with the same partners by carrying over conventions that they have developed (e.g. raising hand signals pass the ball), without learning to coordinate from scratch. To collaborate seamlessly with humans, AI agents should adapt quickly to new partners and new tasks as well. However, current approaches have not attempted to distinguish between the complexities intrinsic to a task and the conventions used by a partner, and more generally there has been little focus on leveraging conventions for adapting to new settings. In this work, we propose a learning framework that teases apart rule-dependent representation from convention-dependent representation in a principled way. We show that, under some assumptions, our rule-dependent representation is a sufficient statistic of the distribution over best-response strategies across partners. Using this separation of representations, our agents are able to adapt quickly to new partners, and to coordinate with old partners on new tasks in a zero-shot manner. We experimentally validate our approach on three collaborative tasks varying in complexity: a contextual multi-armed bandit, a block placing task, and the card game Hanabi. | poster-presentations | This paper proposes a new paradigm for learning to perform cooperative tasks with partners, which factors the problem into two components: how to perform the task and how to coordinate with the partner according to conventions. The setting is new and the reviewers are excited about the paper. A clear accept. | train | [
"GbUX4VGLOgn",
"zM-Um462ct",
"rRkZqoh9xjU",
"_JJyQiThuRi",
"0bUSGDOfaG1",
"FcrZHl6rMZS",
"EVv8101_rQR",
"_NGsJgaOl39",
"CfUX3bn7RMi",
"6VniyGfe9U2",
"OQvJLdov30",
"eJqJfc4mEX1",
"cA3jdNruCP",
"IzjP4DTMtcL",
"6ODjbVUx3M1"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper makes the observation that when performing cooperative tasks with partners, there are two components to learn: how to perform the task, and how to coordinate with the partner according to conventions. Therefore, it proposes to separate these two components via a modular architecture, which learns a task... | [
7,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2
] | [
"iclr_2021_8Ln-Bq0mZcy",
"iclr_2021_8Ln-Bq0mZcy",
"_JJyQiThuRi",
"OQvJLdov30",
"FcrZHl6rMZS",
"6VniyGfe9U2",
"iclr_2021_8Ln-Bq0mZcy",
"CfUX3bn7RMi",
"zM-Um462ct",
"IzjP4DTMtcL",
"eJqJfc4mEX1",
"GbUX4VGLOgn",
"6ODjbVUx3M1",
"iclr_2021_8Ln-Bq0mZcy",
"iclr_2021_8Ln-Bq0mZcy"
] |
iclr_2021_i80OPhOCVH2 | On the Bottleneck of Graph Neural Networks and its Practical Implications | Since the proposal of the graph neural network (GNN) by Gori et al. (2005) and Scarselli et al. (2008), one of the major problems in training GNNs was their struggle to propagate information between distant nodes in the graph.
We propose a new explanation for this problem: GNNs are susceptible to a bottleneck when aggregating messages across a long path. This bottleneck causes the over-squashing of exponentially growing information into fixed-size vectors.
As a result, GNNs fail to propagate messages originating from distant nodes and perform poorly when the prediction task depends on long-range interaction.
In this paper, we highlight the inherent problem of over-squashing in GNNs:
we demonstrate that the bottleneck hinders popular GNNs from fitting long-range signals in the training data;
we further show that GNNs that absorb incoming edges equally, such as GCN and GIN, are more susceptible to over-squashing than GAT and GGNN;
finally, we show that prior work, which extensively tuned GNN models of long-range problems, suffers from over-squashing, and that breaking the bottleneck improves their state-of-the-art results without any tuning or additional weights.
Our code is available at https://github.com/tech-srl/bottleneck/ . | poster-presentations | The paper identifies the phenomenon of oversquashing in GNNs and relates it to a bottleneck. While this phenomenon has been previously observed, the analysis is new and insightful. The authors conclude that standard message passing may be inefficient in cases where the graphs exhibit an exponentially growing number of neighbors and long-range dependencies, and propose a solution in the form of a fully-adjacent layer. While the paper does not offer much methodologically, it is the observation of the bottleneck that is of importance.
We therefore believe that the criticism raised by some reviewers, that the observation is not novel and that the solution is "too simple", is rather unsubstantiated. The authors have addressed these issues well in their rebuttal. The AC recommends accepting the paper. | val | [
"duk8iorazv",
"4A3-jz7zMrq",
"ynoI9qTWxZO",
"4K822E7SSd",
"Kk2NNR9KUoW",
"uDI0RSbEI3",
"evHLgfLfPvv",
"se06ge-fMW8",
"A9_SbowkGOv",
"Q35Q9n_Ro_g",
"hBfvsRu-mN",
"Fj6eaf58CP",
"q7iHI913Yop",
"bJOwcATcTar",
"x3fwQbHMTz",
"YSRsPCPe-Wp"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for taking the time to review our paper! \nYou raise important points that we think are addressable within the discussion phase. \nPlease see our detailed response below.\n\n> the paper identifies a well-known problem in GNN... \n\nTo the best of our knowledge, although the problem of passing long-range ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
8,
4,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
4,
4
] | [
"x3fwQbHMTz",
"iclr_2021_i80OPhOCVH2",
"uDI0RSbEI3",
"iclr_2021_i80OPhOCVH2",
"evHLgfLfPvv",
"duk8iorazv",
"se06ge-fMW8",
"Fj6eaf58CP",
"q7iHI913Yop",
"YSRsPCPe-Wp",
"bJOwcATcTar",
"iclr_2021_i80OPhOCVH2",
"iclr_2021_i80OPhOCVH2",
"iclr_2021_i80OPhOCVH2",
"iclr_2021_i80OPhOCVH2",
"iclr... |
iclr_2021_42kiJ7n_8xO | The geometry of integration in text classification RNNs | Despite the widespread application of recurrent neural networks (RNNs), a unified understanding of how RNNs solve particular tasks remains elusive. In particular, it is unclear what dynamical patterns arise in trained RNNs, and how those patterns depend on the training dataset or task. This work addresses these questions in the context of text classification, building on earlier work studying the dynamics of binary sentiment-classification networks (Maheswaranathan et al., 2019). We study text-classification tasks beyond the binary case, exploring the dynamics of RNNs trained on both natural and synthetic datasets. These dynamics, which we find to be both interpretable and low-dimensional, share a common mechanism across architectures and datasets: specifically, these text-classification networks use low-dimensional attractor manifolds to accumulate evidence for each class as they process the text. The dimensionality and geometry of the attractor manifold are determined by the structure of the training dataset, with the dimensionality reflecting the number of scalar quantities the network remembers in order to classify. In categorical classification, for example, we show that this dimensionality is one less than the number of classes. Correlations in the dataset, such as those induced by ordering, can further reduce the dimensionality of the attractor manifold; we show how to predict this reduction using simple word-count statistics computed on the training dataset. To the degree that integration of evidence towards a decision is a common computational primitive, this work continues to lay the foundation for using dynamical systems techniques to study the inner workings of RNNs. | poster-presentations | this paper adds onto the line of research in investigating the mechanism by which a recurrent network solves a supervised sequence classification problem, following the recent studies such as Maheswaranathan et al., 2019 and Maheswaranathan & Sussillo (2020). in doing so, this paper hypothesizes and confirms that the internal hidden state of a recurrent net, be it GRU or LSTM, evolves over a planar (approximate) attractor as it reads the input, amounting to integrating the evidence as it processes the input sequence, and demonstrates the existence of these attractors and integration dynamics on three types of problems (classification, ordered classification, and multi-label classification).
there were some potentially misleading or confusing statements throughout the manuscript in the initial version, which were pointed out by the reviewers. the authors however did a commendable job of addressing these concerns by the reviewers to the point that most of them have revised their scores up.
based on the reviewers' assessments, authors' response and their exchange, i strongly believe this work will enrich our understanding of recurrent nets further. | train | [
"ApGsn-ElttW",
"xEPrpkjr0qZ",
"DdqU_i6pFnv",
"D7lzBg5hMl2",
"_ZIgoKSeZ3E",
"Y2vh5rYpKbo",
"3OkojqNJM-L",
"xWmsdY5_jh6",
"u-L9KDi4XH",
"19CDFM0RWG",
"lvSDBkTSq24",
"ppziwaKieZF",
"SOKBYRBzL2c",
"1TVbydG3W8d"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"### Paper Summary\n\nThis paper sheds light on how trained RNNs solve text classification problems by analyzing them from a dynamical systems perspective. It extends recent work where a similar analysis was applied to the simpler setting of binary sentiment classification. When projecting the RNN hidden states to ... | [
7,
7,
5,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
4,
5,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2021_42kiJ7n_8xO",
"iclr_2021_42kiJ7n_8xO",
"iclr_2021_42kiJ7n_8xO",
"ppziwaKieZF",
"iclr_2021_42kiJ7n_8xO",
"19CDFM0RWG",
"iclr_2021_42kiJ7n_8xO",
"u-L9KDi4XH",
"xEPrpkjr0qZ",
"DdqU_i6pFnv",
"1TVbydG3W8d",
"_ZIgoKSeZ3E",
"ApGsn-ElttW",
"iclr_2021_42kiJ7n_8xO"
] |
iclr_2021_jh-rTtvkGeM | Gradient Descent on Neural Networks Typically Occurs at the Edge of Stability | We empirically demonstrate that full-batch gradient descent on neural network training objectives typically operates in a regime we call the Edge of Stability. In this regime, the maximum eigenvalue of the training loss Hessian hovers just above the value 2/(step size), and the training loss behaves non-monotonically over short timescales, yet consistently decreases over long timescales. Since this behavior is inconsistent with several widespread presumptions in the field of optimization, our findings raise questions as to whether these presumptions are relevant to neural network training. We hope that our findings will inspire future efforts aimed at rigorously understanding optimization at the Edge of Stability. | poster-presentations | The paper demonstrates that Gradient Descent generally operates in a regime where the spectral norm of the Hessian is as large as possible given the learning rate.
The paper presents a very thorough empirical demonstration of the central claim, which was appreciated by the reviewers.
A central issue to me in accepting the work was its novelty. Prior work has shown very closely related effects for SGD. In discussions, the reviewers appreciated the novelty of the precise claim about the spectral norm hovering at around $\frac{2}{\eta}$. R4 and R2 also raised the issue that the related work discussion is not sufficient. Please make sure that you discuss related work very carefully in the paper, including a more detailed discussion in the Introduction.
The two key issues raised by R3, who voted for rejection, were (1) that the work studies Gradient Descent (rather than SGD), and (2) the lack of theory. I agree with these concerns. Perhaps the Authors should address (1) by citing more carefully the prior work that shows that a similar phenomenon does seem to happen in training with SGD. As for (2), I agree here with R1, R2 and R4 that the empirical evaluation is a key strength of the paper.
Based on the above, it is my pleasure to recommend the acceptance of the paper. Thank you for submitting your work to ICLR, and please make sure you address all remarks of the reviewers in the camera-ready version. | train | [
"t0ibVX-hdm2",
"gnHf_tIPYny",
"bsTUhRQSgSH",
"Px7-NzJFJsi",
"zevzlDlqWmk",
"axs13Hxpt-r",
"R0djHY5PCnA",
"yiVbwXP8EG",
"g-9w5jatLra",
"paYsnm3KY0",
"02DPH3sJeNM",
"m3TzdRcj5Ha",
"k1bHB0qopm",
"KHKfRvNx0A",
"71dSph-s6EG",
"yEHApvNekzi",
"3ujwFa5ynx_",
"IiUTgoOMqce",
"QwoQ18FDuo",
... | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
... | [
"Thank you for your comments!\n\nWe completely agree that we need to weave the dependence on width into the main text.\n\nA few follow-up points:\n\n> 128 is not a particularly wide network\n\nIn our experience, it's tricky to judge networks as \"wide\" or \"narrow\" in absolute terms --- these judgements have to b... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
4
] | [
"gnHf_tIPYny",
"_YL4p57HZfN",
"zevzlDlqWmk",
"axs13Hxpt-r",
"Px7-NzJFJsi",
"yiVbwXP8EG",
"02DPH3sJeNM",
"g-9w5jatLra",
"paYsnm3KY0",
"K7-PewUyI_",
"yEHApvNekzi",
"t_Jh7YExp5s",
"71dSph-s6EG",
"3ujwFa5ynx_",
"KHKfRvNx0A",
"kaAKJ6gMfb",
"IiUTgoOMqce",
"VGQbjWXRQrn",
"OpSGDsEc9dT",
... |
iclr_2021_SK7A5pdrgov | CausalWorld: A Robotic Manipulation Benchmark for Causal Structure and Transfer Learning | Despite recent successes of reinforcement learning (RL), it remains a challenge for agents to transfer learned skills to related environments. To facilitate research addressing this problem, we propose CausalWorld, a benchmark for causal structure and transfer learning in a robotic manipulation environment. The environment is a simulation of an open-source robotic platform, hence offering the possibility of sim-to-real transfer. Tasks consist of constructing 3D shapes from a set of blocks - inspired by how children learn to build complex structures. The key strength of CausalWorld is that it provides a combinatorial family of such tasks with common causal structure and underlying factors (including, e.g., robot and object masses, colors, sizes). The user (or the agent) may intervene on all causal variables, which allows for fine-grained control over how similar different tasks (or task distributions) are. One can thus easily define training and evaluation distributions of a desired difficulty level, targeting a specific form of generalization (e.g., only changes in appearance or object mass). Further, this common parametrization facilitates defining curricula by interpolating between an initial and a target task. While users may define their own task distributions, we present eight meaningful distributions as concrete benchmarks, ranging from simple to very challenging, all of which require long-horizon planning as well as precise low-level motor control. Finally, we provide baseline results for a subset of these tasks on distinct training curricula and corresponding evaluation protocols, verifying the feasibility of the tasks in this benchmark. | poster-presentations | CausalWorld is a benchmark for robotic manipulation to address transfer and structural learning. The benchmark includes (i) a variety of tasks (picking, pushing, tower, etc) relating to manipulating blocks, (ii) configurable properties for environments (properties of blocks, gravity, etc), (iii) customizable learning settings involving intervention actors, which can change the environment to induce a curriculum.
The reviewers found the paper compelling and with many strengths, including ‘interesting and important ideas’ (R4), ‘simple API with a standardized interface’ for ‘procedural generation of goals’ (R5), ‘strongly motivated and tackles a real and practical problem’ (R3), and ‘benchmark with many good properties’ (R2). By and large, the reviewers agree that the paper presents an important benchmark satisfying several desiderata, which I certainly agree with.
On the other hand, most of the reviewers (3 out of 4) also raised serious concerns, most prominently about the experimental results and the causal inference component. For instance, R5 commented that “all the SOTA algorithms fail,” and it is hard to quantify how agents would perform well in different tasks. R3 pointed out the lack of “qualitative results exploring the relationship between the identified and proposed causal variables,” emphasizing that “the benchmark is well-motivated, but not backed up with strong experimental results.” R2 identified the lack of a clear causal component in the paper while the paper mentions an “opportunity to investigate causality” and an “underlying structural causal model (SCM).” All in all, these are valid concerns.
The authors' rebuttal was quite detailed, and appreciated, but left some important questions unanswered. The first and critical issue is about the causal nature of the simulator. The simulator's name is "causalworld" and its stated goal is to provide "a benchmark for causal structure and transfer learning in a robotic manipulation environment." Also, the first bullet in the list of contributions is: "We propose CausalWorld, a new benchmark comprising a parametrized family of robotic manipulation environments for advancing out-of-distribution generalization and causal structure learning in RL." After reading the paper, I was quite surprised to realize there is no *single* example of a causal model, in any shape or form (e.g., SCM, DAG, Physics) or a structural learning benchmark. In other words, there is a serious, somewhat nontrivial gap between the claimed contributions and what was realized in the paper. One way to address this issue would be to make the causality more explicit in the paper, for example, by sharing the underlying structural causal model, how variables form causal relationships, what causal structures are being learned, and how these learned structures compare with the ground truth. I think these would be reasonable expectations of a simulator that aims to disentangle the causal aspect of the learning process.
The second issue is about the experimental results in terms of generalizability. The authors emphasized on different occasions that "The primary goal of this work is to provide the tools to build and evaluate generalizable agents in a more systematic fashion, rather than building generalizable agents for the tasks specified," or "the experiments is to showcase the flexibility regarding curricula and performance evaluation schemes offered with CausalWorld, rather than solving new tasks or proposing new algorithms." These responses are somewhat not satisfactory given that the goal of the paper is providing tools to build generalizable agents, while the authors seem to suggest they are not committed to actually building such agents. Specifically, the experiments did not demonstrate the simulator as a benchmark but only showcased its flexibility (i.e., offering a large number of degrees of freedom). One suggestion would be to evaluate how algorithms (agents) with varying degrees of "generalizability" power perform across tasks with various difficulty levels. As it currently stands, the tasks are too easy or too hard for the standard, uncategorized algorithms, which makes it difficult to learn any lessons from running something in the simulator.
Lastly, I should mention that the work has great potential to introduce causal concepts and causal reasoning to robotics; there is a natural and compelling educational component here. Still, the complete absence of *any* discussion of causality and the current literature results hurts this connection and the realization of this noble goal. I believe that after reading the paper, the regular causal inference researcher will not be able to understand what assumptions and types of challenges are entailed by this paper and robotics research. On the other hand, the robotics researcher will not be able to understand what a causal model is and which tools currently available in causal reasoning may be able to help solve the practical challenges of robotics. In other words, this is a huge missed opportunity, since what the paper is trying to do in robotics and the results available in causal inference are complementary in nature. I believe readers expect and would benefit from having this connection clearly articulated and realized in a more explicit fashion.
If the issues listed above are addressed, I believe the paper can be a game-changer in understanding and investigating robotics & causality. Given the aforementioned potential and reasons, I recommend the paper's acceptance *under the assumption that* the authors will take the constructive feedback provided in this meta-review into account and revise the manuscript accordingly. | train | [
"SPdPS9f8kyP",
"LFC6eEwTeP2",
"4a0KfkfNir-",
"d_8IO6fpEmf",
"M6ZN6Nx12tI",
"GPVvWuU8F9b",
"p4QFocuya_o",
"WwNMgKHdl9Q",
"zHZxh-MjTd",
"4XSuvYDIsAL",
"BGIPmQiy-TW",
"UXVrXE8R6XG",
"SCbQMONAnMF",
"YHZDS3tVq_p"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Once again, we thank the reviewer for their feedback and valuable comments that will help us improve this paper.\n\n“What confuses me in Figure 4 is that \"full randomization\" should generalize better, but the result only shows that \"full generalization\" doesn't learn. With this result, it is hard to disentangl... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
3
] | [
"LFC6eEwTeP2",
"4a0KfkfNir-",
"4XSuvYDIsAL",
"WwNMgKHdl9Q",
"SCbQMONAnMF",
"YHZDS3tVq_p",
"iclr_2021_SK7A5pdrgov",
"zHZxh-MjTd",
"UXVrXE8R6XG",
"BGIPmQiy-TW",
"iclr_2021_SK7A5pdrgov",
"iclr_2021_SK7A5pdrgov",
"iclr_2021_SK7A5pdrgov",
"iclr_2021_SK7A5pdrgov"
] |
iclr_2021_jrA5GAccy_ | Empirical or Invariant Risk Minimization? A Sample Complexity Perspective | Recently, invariant risk minimization (IRM) was proposed as a promising solution to address out-of-distribution (OOD) generalization. However, it is unclear when IRM should be preferred over the widely-employed empirical risk minimization (ERM) framework. In this work, we analyze both these frameworks from the perspective of sample complexity, thus taking a firm step towards answering this important question. We find that depending on the type of data generation mechanism, the two approaches might have very different finite sample and asymptotic behavior. For example, in the covariate shift setting we see that the two approaches not only arrive at the same asymptotic solution, but also have similar finite sample behavior with no clear winner. For other distribution shifts such as those involving confounders or anti-causal variables, however, the two approaches arrive at different asymptotic solutions where IRM is guaranteed to be close to the desired OOD solutions in the finite sample regime, while ERM is biased even asymptotically. We further investigate how different factors --- the number of environments, complexity of the model, and IRM penalty weight --- impact the sample complexity of IRM in relation to its distance from the OOD solutions. | poster-presentations | The paper considers learning settings with distributional change. It makes a lot of assumptions to obtain sample complexities that justify the use of empirical invariant risk minimization, and falls a bit short by not giving a formal converse for the inadequacy of plain empirical risk minimization, despite making the claim. Nevertheless, the contributions are insightful, and the paper may be worth sharing with the community.
The reviewers' grades were overall positive, though particularly critical, and I doubt the whole paper could be fully double-checked: one could question the ability of the reviewers to perform a deep analysis of a 48-page theoretical paper under the time constraints imposed by a conference model...
"RWH5vyYgShM",
"rW91QK0qBDt",
"HFkHqaBU6MX",
"bParcyRequ2",
"e0UPD2sZQmG",
"g3yGJRFZL-K",
"sKQWGSxtzlx",
"D3qQF5JUe9f",
"uXMNWht8_9n",
"lHNmAbinumC",
"SaEvJ5eJZdn"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author"
] | [
"Summary: The paper investigates the choice of learning paradigms to reach out-of-distribution generalization, namely IRM vs ERM under different scenarios of domain generalization. Technically, generalization bounds and rates are calculated to be able to compare theoretically how each paradigm fares in the differen... | [
7,
6,
7,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
2,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2021_jrA5GAccy_",
"iclr_2021_jrA5GAccy_",
"iclr_2021_jrA5GAccy_",
"iclr_2021_jrA5GAccy_",
"D3qQF5JUe9f",
"RWH5vyYgShM",
"bParcyRequ2",
"rW91QK0qBDt",
"lHNmAbinumC",
"HFkHqaBU6MX",
"sKQWGSxtzlx"
] |
iclr_2021_V5j-jdoDDP | Scaling Symbolic Methods using Gradients for Neural Model Explanation | Symbolic techniques based on Satisfiability Modulo Theory (SMT) solvers have been proposed for analyzing and verifying neural network properties, but their usage has been fairly limited owing to their poor scalability with larger networks. In this work, we propose a technique for combining gradient-based methods with symbolic techniques to scale such analyses and demonstrate its application for model explanation. In particular, we apply this technique to identify minimal regions in an input that are most relevant for a neural network's prediction. Our approach uses gradient information (based on Integrated Gradients) to focus on a subset of neurons in the first layer, which allows our technique to scale to large networks. The corresponding SMT constraints encode the minimal input mask discovery problem such that after masking the input, the activations of the selected neurons are still above a threshold. After solving for the minimal masks, our approach scores the mask regions to generate a relative ordering of the features within the mask. This produces a saliency map which explains "where a model is looking" when making a prediction. We evaluate our technique on three datasets - MNIST, ImageNet, and Beer Reviews, and demonstrate both quantitatively and qualitatively that the regions generated by our approach are sparser and achieve higher saliency scores compared to the gradient-based methods alone. Code and examples are at - https://github.com/google-research/google-research/tree/master/smug_saliency | poster-presentations | This paper considers the task of finding a minimal set of inputs that explain predictions of trained neural models. The authors propose a method that they refer to as "scaling symbolic methods using gradients" (SMUG). This method uses integrated gradients to score first-layer neurons on the degree to which they influence the prediction and then produces and solves an SMT problem (restricted to first-layer activations) that finds the minimal mask that changes these influential neurons.
Reviewers had somewhat mixed perspectives on this submission. All reviewers were broadly in agreement that the paper is clearly written and presents an interesting combination of symbolic (i.e. SMT-based) and gradient-based methods for model explanation. R2 questions the need for sparsity (and therefore the SMT component) in model explanations, and R3 similarly notes that SMUG does not necessarily rely on SMT at all. That said, no reviewers raise major concerns with the quality of exposition, experimental evaluation, or the level of technical contributions in this work. The metareviewer is inclined to say that this work is above the bar for acceptance, and represents a reasonable approach to integrating SMT-based and gradient-based methods for model explanation. | train | [
"6ayVhw8GsB",
"lXeBdmgcGPP",
"nA4vBo38vm6",
"C0OOweE3dlQ",
"gXhResM53ic",
"9eyyM5J5zqx",
"BvFt1Ms8kxz",
"0LAIrjVAJRN",
"ZsR4kM9vp02",
"KRuWUiIK1Qa"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank all reviewers for their thoughtful and helpful comments. We have responded to each reviewer and clarified specific issues. We reiterate some of our key contributions:\n* Our method shows a promising direction to combine symbolic methods with gradients. While previous SMT based methods for model explanatio... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
3
] | [
"iclr_2021_V5j-jdoDDP",
"ZsR4kM9vp02",
"ZsR4kM9vp02",
"0LAIrjVAJRN",
"KRuWUiIK1Qa",
"BvFt1Ms8kxz",
"iclr_2021_V5j-jdoDDP",
"iclr_2021_V5j-jdoDDP",
"iclr_2021_V5j-jdoDDP",
"iclr_2021_V5j-jdoDDP"
] |
iclr_2021_dgd4EJqsbW5 | Control-Aware Representations for Model-based Reinforcement Learning | A major challenge in modern reinforcement learning (RL) is efficient control of dynamical systems from high-dimensional sensory observations. Learning controllable embedding (LCE) is a promising approach that addresses this challenge by embedding the observations into a lower-dimensional latent space, estimating the latent dynamics, and utilizing it to perform control in the latent space. Two important questions in this area are how to learn a representation that is amenable to the control problem at hand, and how to achieve an end-to-end framework for representation learning and control. In this paper, we take a few steps towards addressing these questions. We first formulate an LCE model to learn representations that are suitable to be used by a policy iteration style algorithm in the latent space. We call this model control-aware representation learning (CARL). We derive a loss function and three implementations for CARL. In the offline implementation, we replace the locally-linear control algorithm (e.g., iLQR) used by the existing LCE methods with an RL algorithm, namely model-based soft actor-critic, and show that it results in significant improvement. In online CARL, we interleave representation learning and control, and demonstrate further gain in performance. Finally, we propose value-guided CARL, a variation in which we optimize a weighted version of the CARL loss function, where the weights depend on the TD-error of the current policy. We evaluate the proposed algorithms by extensive experiments on benchmark tasks and compare them with several LCE baselines. | poster-presentations | This paper addresses the question of RL in high-dimensional spaces by learning lower-dimensional representations for control purposes. The work contains both theoretical and empirical results that show the promise of the proposed approach.
While the reviewers had initial concerns, including a problem in a proof and questions around the contributions, after robust responses and discussions this paper is now in good shape.
"o67CDKifq89",
"NfITCzOkMzP",
"hyW1G_q-8aI",
"QmfA7FZjVsT",
"9g6jaCLJag0",
"NEcy-dLyxOr",
"HzCWnu72cQJ",
"2FSGF7DWls4",
"_aGxoqcmf4D",
"c1vu1V6ciu",
"75SPtsgzOsb",
"okBn6BHrzZ"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper aims to address an important question in reinforcement learning: policy learning from high-dimensional sensory observations. The authors propose an algorithm for Learning Controllable Embedding (LCE) based on policy iteration in the latent space. The authors provide a theorem to show how the policy perf... | [
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2021_dgd4EJqsbW5",
"QmfA7FZjVsT",
"iclr_2021_dgd4EJqsbW5",
"NEcy-dLyxOr",
"HzCWnu72cQJ",
"HzCWnu72cQJ",
"_aGxoqcmf4D",
"okBn6BHrzZ",
"hyW1G_q-8aI",
"o67CDKifq89",
"iclr_2021_dgd4EJqsbW5",
"iclr_2021_dgd4EJqsbW5"
] |
iclr_2021_tc5qisoB-C | C-Learning: Learning to Achieve Goals via Recursive Classification | We study the problem of predicting and controlling the future state distribution of an autonomous agent. This problem, which can be viewed as a reframing of goal-conditioned reinforcement learning (RL), is centered around learning a conditional probability density function over future states. Instead of directly estimating this density function, we indirectly estimate this density function by training a classifier to predict whether an observation comes from the future. Via Bayes' rule, predictions from our classifier can be transformed into predictions over future states. Importantly, an off-policy variant of our algorithm allows us to predict the future state distribution of a new policy, without collecting new experience. This variant allows us to optimize functionals of a policy's future state distribution, such as the density of reaching a particular goal state. While conceptually similar to Q-learning, our work lays a principled foundation for goal-conditioned RL as density estimation, providing justification for goal-conditioned methods used in prior work. This foundation makes hypotheses about Q-learning, including the optimal goal-sampling ratio, which we confirm experimentally. Moreover, our proposed method is competitive with prior goal-conditioned RL methods. | poster-presentations | **Overview**: This paper provides a new classification-based method to predict the future probability density of a policy. It provides comparable performance to prior Q-learning-based methods, but without careful hyper-parameter tuning.
**Pro**: The method of using classification to estimate future density is novel. Both theory and experiments appear solid. In the rebuttal phase, the authors convinced all the reviewers by addressing their concerns. The reviewers unanimously lean toward acceptance.
**Con**: The reviewers had many concerns before the rebuttal. But these were addressed by the authors.
**Recommendation**: The C-learning method proposed in this paper is novel and can be potentially useful in practice. Both theory and experiments are solid and convincing. Hence the recommendation is accept. | train | [
"Thjf-hg76E",
"EHYJOuthYr1",
"LVVnrfMuy5l",
"agW-0Wf1a-b",
"u6e6b3xrod9",
"3LUkDQpuX0B",
"iY9NNnLDkES",
"MoLPO07u3RM",
"77Ic_oz1cTQ",
"0w81MfvEqJc",
"6K2MfystF-I",
"UGXMNEVBVeL",
"Eb96jSSfZA",
"X7q8n4pGVow"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose a new algorithm, called C-learning, which tackles goal-conditioned reinforcement learning problems. Specifically, the algorithm converts the future density estimation problem, which goal-conditioned Q learning is inherently performing, to a classification problem. The experiments showed that th... | [
6,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
4
] | [
2,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"iclr_2021_tc5qisoB-C",
"iclr_2021_tc5qisoB-C",
"iY9NNnLDkES",
"77Ic_oz1cTQ",
"6K2MfystF-I",
"0w81MfvEqJc",
"Thjf-hg76E",
"UGXMNEVBVeL",
"EHYJOuthYr1",
"Eb96jSSfZA",
"X7q8n4pGVow",
"iclr_2021_tc5qisoB-C",
"iclr_2021_tc5qisoB-C",
"iclr_2021_tc5qisoB-C"
] |
iclr_2021_guetrIHLFGI | The Deep Bootstrap Framework: Good Online Learners are Good Offline Generalizers | We propose a new framework for reasoning about generalization in deep learning.
The core idea is to couple the Real World, where optimizers take stochastic gradient steps on the empirical loss, to an Ideal World, where optimizers take steps on the population loss. This leads to an alternate decomposition of test error into: (1) the Ideal World test error plus (2) the gap between the two worlds. If the gap (2) is universally small, this reduces the problem of generalization in offline learning to the problem of optimization in online learning.
We then give empirical evidence that this gap between worlds can be small in realistic deep learning settings, in particular supervised image classification. For example, CNNs generalize better than MLPs on image distributions in the Real World, but this is "because" they optimize faster on the population loss in the Ideal World. This suggests our framework is a useful tool for understanding generalization in deep learning, and lays the foundation for future research in this direction. | poster-presentations | The paper proposes a new framework for understanding generalization in deep learning. The main idea is to consider the difference between stochastic optimization on a population risk and optimization on an empirical risk. The classical theory considers the difference between empirical risk and population risk. This basically translates the practical motivation from finding good function classes to finding good optimizers which can re-use the data effectively. Although the paper provides no theoretical result, it provides an interesting empirical study. The paper somewhat demonstrates that SGD on deep networks is somehow good at re-using the same data. I believe this angle is very novel and might lead to future theoretical discoveries. The paper is reviewed by four reviewers; two of them argue for acceptance and two of them argue for rejection. After discussion, this status remained and I carefully read and reviewed the paper. Here are the major issues raised by the reviewers:
- R#1: The paper is missing a theoretical study. The implications for practical deep learning are not clear.
- R#2: Choice of the soft-error is particular to the task and how to go beyond soft-max is not clear.
- R#3: Finds the paper not novel as well as trivial or hard to understand.
- R#4: The choice of soft error is ad-hoc.
I believe the issues raised by R#3 are not justified. First of all, the novelty is very clear and appreciated by other reviewers. Moreover, the paper is rather easy to understand and the results are very far from trivial. However, the other issues raised by other reviewers are valid. Specifically, the soft-error seems to be a limitation of the study. However, the authors responded to this concern and the reviewer increased their score. I believe the theory is lacking, but the paper is simply showing this novel approach and its empirical validity. A theory to explain this phenomenon would be amazing but is not necessary for publication. Similarly, without a theory it is hard to expect any practical implication. Overall, I believe the paper is an interesting and novel one which will likely lead to additional work in the area. Considering that we are still far from a satisfying theory of generalization for deep learning and that the role of optimization is clear, this angle is worth sharing with the community. Hence, I decide to accept. However, I have some concerns which should be addressed in the camera-ready.
- Claims should be revised and the authors should make sure they have enough evidence for them. For example, the authors provide no satisfying evidence for random labels and only very limited evidence for pre-training. I strongly recommend that the authors either remove some of these discussions or present them not as results but as part of the discussion of future research.
- A section about limitations should be added. Specifically, the soft-error choice should be discussed in this limitation section.
- The discussion section should be extended with pointers to the relevant work in the bootstrap literature as well as suggestions for theoreticians. Not providing any theoretical result is always fine, but the authors should explain why it is hard to make theoretical statements and where to search for them.
"0-gcF18uxC",
"b8Cm8qbxO9p",
"DDtx40A0pg3",
"vy13GQ8Pnh1",
"xJ21Na-pLTH",
"Kf0IwULvSU",
"eD6MVKDfdE2",
"pXOTcrxZntt",
"L7EbK7mY4gN",
"l5bucvu6zuV",
"aL1FzVAOrcx",
"oOC3cvQAMHU"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a bootstrap framework to study the generalization problem of deep learning, by decomposing the traditional test error into an ‘Ideal World’ test error plus the gap between. Empirically, it demonstrates that such gap (soft-error) is small in supervised image classification for typical deep learn... | [
5,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
7,
4
] | [
3,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
5,
3
] | [
"iclr_2021_guetrIHLFGI",
"DDtx40A0pg3",
"pXOTcrxZntt",
"eD6MVKDfdE2",
"iclr_2021_guetrIHLFGI",
"0-gcF18uxC",
"aL1FzVAOrcx",
"xJ21Na-pLTH",
"oOC3cvQAMHU",
"L7EbK7mY4gN",
"iclr_2021_guetrIHLFGI",
"iclr_2021_guetrIHLFGI"
] |
iclr_2021_-Hs_otp2RB | Improving VAEs' Robustness to Adversarial Attack | Variational autoencoders (VAEs) have recently been shown to be vulnerable to adversarial attacks, wherein they are fooled into reconstructing a chosen target image. However, how to defend against such attacks remains an open problem. We make significant advances in addressing this issue by introducing methods for producing adversarially robust VAEs. Namely, we first demonstrate that methods proposed to obtain disentangled latent representations produce VAEs that are more robust to these attacks. However, this robustness comes at the cost of reducing the quality of the reconstructions. We ameliorate this by applying disentangling methods to hierarchical VAEs. The resulting models produce high-fidelity autoencoders that are also adversarially robust. We confirm their capabilities on several different datasets and with current state-of-the-art VAE adversarial attacks, and also show that they increase the robustness of downstream tasks to attack. | poster-presentations | This paper presents a hierarchical version of β-TCVAE that promotes disentanglement in the latent space and improves the robustness of VAEs to adversarial attacks, without (much) degradation in the quality of reconstructions. The analysis of the relationship between disentanglement and adversarial robustness is valuable and the method is new. The results look promising. The comments were properly addressed.
"QtVaB52NViJ",
"r40eqCHw2lJ",
"YAFhbD6rch8",
"ytizPyVY-A",
"LrKhASQZFkn",
"eU5k49V837w",
"SKbfsqN9QXz",
"CpaeLqzOJIB",
"W_1LnNKOnoQ",
"dcCbsq0E0W4",
"ZYUOKIR6jz",
"W785xVhjY4l",
"tvqJICECFuF",
"GA26ylAC_rc",
"6OJJhfUxgNN"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper considers the problem of training VAEs which are robust to adversarial attacks. It shows that learning disentangled representations improves the robustness of VAE. However, this hurts the reconstruction accuracy. The paper then shows that using hierarchical VAEs can ensure robustness without sacrificing ... | [
7,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
3,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3
] | [
"iclr_2021_-Hs_otp2RB",
"iclr_2021_-Hs_otp2RB",
"CpaeLqzOJIB",
"LrKhASQZFkn",
"eU5k49V837w",
"SKbfsqN9QXz",
"W_1LnNKOnoQ",
"GA26ylAC_rc",
"dcCbsq0E0W4",
"r40eqCHw2lJ",
"6OJJhfUxgNN",
"QtVaB52NViJ",
"iclr_2021_-Hs_otp2RB",
"iclr_2021_-Hs_otp2RB",
"iclr_2021_-Hs_otp2RB"
] |
iclr_2021_Qm8UNVCFdh | What Can You Learn From Your Muscles? Learning Visual Representation from Human Interactions | Learning effective representations of visual data that generalize to a variety of downstream tasks has been a long quest for computer vision. Most representation learning approaches rely solely on visual data such as images or videos. In this paper, we explore a novel approach, where we use human interaction and attention cues to investigate whether we can learn better representations compared to visual-only representations. For this study, we collect a dataset of human interactions capturing body part movements and gaze in their daily lives. Our experiments show that our "muscly-supervised" representation that encodes interaction and attention cues outperforms a visual-only state-of-the-art method MoCo (He et al., 2020), on a variety of target tasks: scene classification (semantic), action recognition (temporal), depth estimation (geometric), dynamics prediction (physics) and walkable surface estimation (affordance). Our code and dataset are available at: https://github.com/ehsanik/muscleTorch. | poster-presentations | The paper presents an attempt to learn interaction-based representations by taking advantage of body part movements and gaze attention. Video representations are learned by benefiting from additional supervisory signals, which are not the ones commonly used, making the paper more interesting.
R3 expresses a concern that the supervisory signal does not come "for free" and that the paper is misleading. The ACs do agree with R3 that the paper benefits from additional signals and is not a pure self-supervised learning paper, strictly speaking. The authors also agreed to this in their response to R3's comment. R1 also mentioned (after the rebuttal phase) that the proposed approach is not a practical self-supervised learning solution and that it does not perform as effectively as conventional self-supervised learning methods like InfoNCE on MoCo.
Simultaneously, the AC and the majority of the reviewers believe that the paper itself has value as a multi-modal learning paper. We strongly suggest the authors revise the paper to remove the 'self-supervision' claim. As mentioned above, the paper is not a self-supervised learning paper and the authors are asked to correct the details of the paper to reflect this. We also recommend adding a qualitative analysis of each body signal in the final manuscript, as suggested by R4.
It will be great if the authors can consider this as a "conditional accept". In particular, the 'self-supervision' claim in the current version of the paper is misleading and this must be corrected in the final version. Note that this was also pointed out by the Program Chairs. | train | [
"U_W5QKLhvhB",
"pdeulpjH3I",
"gQZu1WQaEJF",
"cKP6KdDusmQ",
"JmPevHd8CTq",
"HfT4A1qCpH9",
"3yD8e2U9rcU",
"nKbMstJujAI"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes to improve upon unsupervised representation learning for various downstream vision tasks by leveraging human motion and attention (gaze) information. The authors collect a large spatio-temporal dataset with gaze and body motion labels for this task. They train a network to jointly predict the v... | [
6,
-1,
-1,
-1,
-1,
9,
8,
4
] | [
5,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"iclr_2021_Qm8UNVCFdh",
"3yD8e2U9rcU",
"U_W5QKLhvhB",
"nKbMstJujAI",
"HfT4A1qCpH9",
"iclr_2021_Qm8UNVCFdh",
"iclr_2021_Qm8UNVCFdh",
"iclr_2021_Qm8UNVCFdh"
] |
iclr_2021_lWaz5a9lcFU | EEC: Learning to Encode and Regenerate Images for Continual Learning | The two main impediments to continual learning are catastrophic forgetting and memory limitations on the storage of data. To cope with these challenges, we propose a novel, cognitively-inspired approach which trains autoencoders with Neural Style Transfer to encode and store images. Reconstructed images from encoded episodes are replayed when training the classifier model on a new task to avoid catastrophic forgetting. The loss function for the reconstructed images is weighted to reduce its effect during classifier training to cope with image degradation. When the system runs out of memory the encoded episodes are converted into centroids and covariance matrices, which are used to generate pseudo-images during classifier training, keeping classifier performance stable with less memory. Our approach increases classification accuracy by 13-17% over state-of-the-art methods on benchmark datasets, while requiring 78% less storage space. | poster-presentations | This paper uses an autoencoder with neural style transfer to generate images from previously seen classes to avoid catastrophic forgetting in continual learning.
While reviewers had some concerns about the paper (experiments on high-resolution images, comparison with FearNet), the authors have addressed all the concerns. R1's concern about the motivation for generation instead of replaying actual images is not critical, since this is not the first work to use generative replay.
"stv-HRT9-s",
"MLRHMurlzr",
"lGqYrEhaz0h",
"QtasK2zWE1C",
"pzU1PTuPOvX",
"LdmjNlmeCPm"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"##########################################################################\n\nSummary:\n\n \nThe paper proposes an approach which trains autoencoders with Neural Style Transfer to encode and store images. The method is applied to the problem of continual learning to overcome the catastrophic forgetting and memory ... | [
6,
-1,
-1,
-1,
4,
4
] | [
2,
-1,
-1,
-1,
5,
5
] | [
"iclr_2021_lWaz5a9lcFU",
"stv-HRT9-s",
"pzU1PTuPOvX",
"LdmjNlmeCPm",
"iclr_2021_lWaz5a9lcFU",
"iclr_2021_lWaz5a9lcFU"
] |
iclr_2021_edJ_HipawCa | Impact of Representation Learning in Linear Bandits | We study how representation learning can improve the efficiency of bandit problems. We study the setting where we play T linear bandits with dimension d concurrently, and these T bandit tasks share a common k (≪ d) dimensional linear representation. For the finite-action setting, we present a new algorithm which achieves $\widetilde{O}(T\sqrt{kN}+\sqrt{dkNT})$ regret, where N is the number of rounds we play for each bandit. When T is sufficiently large, our algorithm significantly outperforms the naive algorithm (playing T bandits independently) that achieves $\widetilde{O}(T\sqrt{dN})$ regret. We also provide an $\Omega(T\sqrt{kN}+\sqrt{dkNT})$ regret lower bound, showing that our algorithm is minimax-optimal up to poly-logarithmic factors. Furthermore, we extend our algorithm to the infinite-action setting and obtain a corresponding regret bound which demonstrates the benefit of representation learning in certain regimes. We also present experiments on synthetic and real-world data to illustrate our theoretical findings and demonstrate the effectiveness of our proposed algorithms. | poster-presentations | The paper studies the representation learning problem in the linear bandit setting, where each bandit "task" shares a common low-dimensional representation. The paper introduces a novel algorithm, it provides theoretical regret guarantees, and it illustrates the effectiveness of the proposed method in a number of experiments.
There is general agreement among the reviewers about the relevance of the problem and the contribution of the paper. The authors properly addressed concerns about the novelty (e.g., the comparison with linear bandits and low-rank structure) and about the underlying assumptions. Although some of them do seem relatively strong (and in some cases stronger than the state-of-the-art in bandits, such as the distribution on the contexts), it is indeed nontrivial to understand whether such assumptions can be easily relaxed in the representation learning context.
The novelty of the algorithm lies more in the specific problem and set of assumptions, but it mostly relies on known principles (e.g., using the method of moments for estimating the underlying representation). In this sense, I see this paper more as a useful addition to the fast-growing landscape of representation learning methods in online learning, rather than a breakthrough. Also, the structure of the algorithm seems very "theoretical" in nature, since the explore-then-commit approach is very rarely a good strategy in practice.
Another issue the authors clarified in their revised submission is the actual improvement obtained in the bounds depending on the parameters T, k, d, N. In this respect, I still would like to encourage the authors to further illustrate the regime where the bound is actually better than for the single-task approach. For instance, they could consider N fixed to a convenient value and produce a plot with x-axis T and y-axis the regret bound and report different curves for varying values of k and d. This would further clarify to the reader when representation learning can *provably* improve over plain single-task learning.
Overall, given the general support from the reviewers and the revised version of the paper, I consider this contribution is significant enough to propose acceptance. As mentioned above, I believe it will serve as a reference for developing further the literature in this domain. | train | [
"CIQ8gnM2ma",
"THoN1ZjAQfD",
"4lh2vQfrG9C",
"xiVmhNJE1pr",
"vPOS0pWDc4k",
"TBM-tw2czHw",
"ZMyumWO8OQe",
"HifzRMY0gzQ",
"yrifL6eNBZG",
"ib6L7Yvh8U7",
"hKgpj03cEco"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies the benefits of learning a low-rank feature extractor in multi-task linear bandits. Specifically, the paper studies the setting where an unknown common linear feature extractor $B \\in R^{d \\times k}$ maps the original $d$-dimensional contexts $x$ to a $k$-dimensional representation. Essentiall... | [
6,
7,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7
] | [
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"iclr_2021_edJ_HipawCa",
"iclr_2021_edJ_HipawCa",
"yrifL6eNBZG",
"iclr_2021_edJ_HipawCa",
"ib6L7Yvh8U7",
"hKgpj03cEco",
"CIQ8gnM2ma",
"THoN1ZjAQfD",
"iclr_2021_edJ_HipawCa",
"iclr_2021_edJ_HipawCa",
"iclr_2021_edJ_HipawCa"
] |
iclr_2021_XjYgR6gbCEc | MODALS: Modality-agnostic Automated Data Augmentation in the Latent Space | Data augmentation is an efficient way to expand a training dataset by creating additional artificial data. While data augmentation is found to be effective in improving the generalization capabilities of models for various machine learning tasks, the underlying augmentation methods are usually manually designed and carefully evaluated for each data modality separately, like image processing functions for image data and word-replacing rules for text data. In this work, we propose an automated data augmentation approach called MODALS (Modality-agnostic Automated Data Augmentation in the Latent Space) to augment data for any modality in a generic way. MODALS exploits automated data augmentation to fine-tune four universal data transformation operations in the latent space to adapt the transform to data of different modalities. Through comprehensive experiments, we demonstrate the effectiveness of MODALS on multiple datasets for text, tabular, time-series and image modalities. | poster-presentations | This paper proposes a unified way of data augmentation using a latent embedding space --- it learns a continuous latent space for transformation, and finds effective directions to traverse in this space for data augmentation. The proposed approach combines existing approaches for data augmentation, e.g., adversarial training, triplet loss, and joint training. The paper also identifies input examples where the model had low performance and creates harder examples that help the model improve its performance. It is evaluated on multiple datasets corresponding to text, table, time-series and image modalities and outperforms SOTA except on image data.
The authors have responded to the reviewers' feedback to provide more detailed experiments with stronger baselines and also ablation studies to show the effectiveness of different components of the approach. The results can be further improved by thorough empirical comparison to other SOTA methods, and by using other loss functions (e.g., center loss, large margin loss and other contrastive losses) as alternatives to the triplet loss proposed in the paper.
Some reviewers have pointed out that the paper is somewhat limited in its novelty, since it combines existing off-the-shelf modules/losses and similar methods have been tried in the past --- the novel contributions of the paper should be clearly highlighted in the revised submission.
"Y3BC9VvhvQ",
"fmFe6-SNqdo",
"iTkh-bPlK9",
"yFOwJWc0-GD",
"rf6s9Seoqqm",
"WD_fZU6wLEK",
"dJBkjGg2HG",
"Lnap3NrF-j0"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper proposes an automatic data augmentation method that is modality agnostic by modifying the data in a latent space (rather than in input space). They design latent space interventions that yield hard examples (which they claim should improve downstream model learning). They apply population based training ... | [
6,
6,
7,
-1,
-1,
-1,
-1,
6
] | [
4,
3,
5,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2021_XjYgR6gbCEc",
"iclr_2021_XjYgR6gbCEc",
"iclr_2021_XjYgR6gbCEc",
"fmFe6-SNqdo",
"Y3BC9VvhvQ",
"Lnap3NrF-j0",
"iTkh-bPlK9",
"iclr_2021_XjYgR6gbCEc"
] |
iclr_2021_3T9iFICe0Y9 | The Recurrent Neural Tangent Kernel | The study of deep neural networks (DNNs) in the infinite-width limit, via the so-called neural tangent kernel (NTK) approach, has provided new insights into the dynamics of learning, generalization, and the impact of initialization. One key DNN architecture remains to be kernelized, namely, the recurrent neural network (RNN). In this paper we introduce and study the Recurrent Neural Tangent Kernel (RNTK), which provides new insights into the behavior of overparametrized RNNs. A key property of the RNTK that should greatly benefit practitioners is its ability to compare inputs of different lengths. To this end, we characterize how the RNTK weights different time steps to form its output under different initialization parameters and nonlinearity choices. Experiments on synthetic data and 56 real-world datasets demonstrate that the RNTK offers significant performance gains over other kernels, including standard NTKs, across a wide array of data sets. | poster-presentations | Reviewers agreed on the value of the theoretical contribution, especially the surprising conclusion that the weight-tied and untied RNTK are identical. The empirical results were updated in response to the reviewers' suggestions. I believe this would be of interest to the ICLR audience.
"8rnTixldWsY",
"gIcUaJP8dND",
"I3QtHBS0blx",
"DHh3gCEavs5",
"6V1l0pBZjUD",
"4OkzS-X4Flk"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewers for their supportive feedback and are delighted that you found our proposed sensitivity analysis useful for studying infinite width RNNs. Below we address each of your comments. \n\n**The proposed method is restricted to the small data setting**: Indeed, a downside shared by all kernel meth... | [
-1,
-1,
-1,
7,
6,
6
] | [
-1,
-1,
-1,
4,
4,
3
] | [
"6V1l0pBZjUD",
"DHh3gCEavs5",
"4OkzS-X4Flk",
"iclr_2021_3T9iFICe0Y9",
"iclr_2021_3T9iFICe0Y9",
"iclr_2021_3T9iFICe0Y9"
] |
iclr_2021_MBpHUFrcG2x | Projected Latent Markov Chain Monte Carlo: Conditional Sampling of Normalizing Flows | We introduce Projected Latent Markov Chain Monte Carlo (PL-MCMC), a technique for sampling from the exact conditional distributions learned by normalizing flows. As a conditional sampling method, PL-MCMC enables Monte Carlo Expectation Maximization (MC-EM) training of normalizing flows from incomplete data. Through experimental tests applying normalizing flows to missing data tasks for a variety of data sets, we demonstrate the efficacy of PL-MCMC for conditional sampling from normalizing flows. | poster-presentations | This work combines normalizing flows with conditional sampling. While there are connections to other works, the paper seems novel and applicable, and has nice experimental results. The authors did a good job clarifying the reviewers' questions, and have addressed their major concerns. We appreciate the additional analyses added to the paper.
"7ka9z27ytVg",
"oWQ1rg9bfsJ",
"RUUf0B8wHPP",
"lzJsN4cuvQ",
"mB9pCKG5_kG",
"y_YRoMR2zpN"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"Summary\n\nThis paper proposes a stochastic expectation--maximisation (EM) algorithm. The main idea is that the target distribution is specified as a deterministic mapping, a.k.a. a normalising flow, from some simple \"base\" distribution.\n\n\nStrengths\n\nThe algorithm appears to be formally correct (in the sens... | [
6,
7,
-1,
-1,
-1,
6
] | [
3,
4,
-1,
-1,
-1,
4
] | [
"iclr_2021_MBpHUFrcG2x",
"iclr_2021_MBpHUFrcG2x",
"7ka9z27ytVg",
"y_YRoMR2zpN",
"oWQ1rg9bfsJ",
"iclr_2021_MBpHUFrcG2x"
] |
iclr_2021_NjF772F4ZZR | Learning the Pareto Front with Hypernetworks | Multi-objective optimization (MOO) problems are prevalent in machine learning. These problems have a set of optimal solutions, called the Pareto front, where each point on the front represents a different trade-off between possibly conflicting objectives. Recent MOO methods can target a specific desired ray in loss space; however, most approaches still face two grave limitations: (i) A separate model has to be trained for each point on the front; and (ii) The exact trade-off must be known before the optimization process. Here, we tackle the problem of learning the entire Pareto front, with the capability of selecting a desired operating point on the front after training. We call this new setup Pareto-Front Learning (PFL).
We describe an approach to PFL implemented using HyperNetworks, which we term Pareto HyperNetworks (PHNs). PHN learns the entire Pareto front simultaneously using a single hypernetwork, which receives as input a desired preference vector and returns a Pareto-optimal model whose loss vector is in the desired ray. The unified model is runtime efficient compared to training multiple models and generalizes to new operating points not used during training. We evaluate our method on a wide set of problems, from multi-task regression and classification to fairness. PHNs learn the entire Pareto front at roughly the same time as learning a single point on the front and at the same time reach a better solution set. PFL opens the door to new applications where models are selected based on preferences that are only available at run time. | poster-presentations | The paper proposes a hyper-net method for multi-objective optimization, which trains a neural network that maps a preference vector to the corresponding Pareto solution. The proposed idea is interesting and useful, although the evaluation of the work is not overwhelmingly convincing. The writing of the work can be further improved.
Also, the basic idea of the work is almost the same as a concurrent work "Lin et al 2020. controllable pareto multi-task learning", which is also submitted to this conference. The paper cited that paper briefly, "... The proposed method is conceptually similar to our approach...", which is too vague and brief. We urge the authors to provide a thorough discussion of the detailed differences and similarities between the works, including empirical comparisons when necessary.
"K5tA3-pFp-",
"mTLHGIO13Ge",
"M0ohlA8T8X",
"MWyCGHBdyAN",
"CzNbFJX2ZPX",
"qWCWtzd0qOb",
"gMbzwHeghW",
"_FuQdofSBVs",
"Td-cCzuIyMw",
"hQluav46dvD"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper proposes a method for multi-objective optimization. The key idea is to learn the entire Pareto front at once by training a hypernetwork that takes preference vector as an inputs and outputs network parameters, which corresponds to a point on the Pareto set with the desired trade-off specified by the pref... | [
6,
6,
7,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
3,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2021_NjF772F4ZZR",
"iclr_2021_NjF772F4ZZR",
"iclr_2021_NjF772F4ZZR",
"mTLHGIO13Ge",
"hQluav46dvD",
"M0ohlA8T8X",
"_FuQdofSBVs",
"K5tA3-pFp-",
"iclr_2021_NjF772F4ZZR",
"iclr_2021_NjF772F4ZZR"
] |
iclr_2021_YLewtnvKgR7 | Estimating and Evaluating Regression Predictive Uncertainty in Deep Object Detectors | Predictive uncertainty estimation is an essential next step for the reliable deployment of deep object detectors in safety-critical tasks. In this work, we focus on estimating predictive distributions for bounding box regression output with variance networks. We show that in the context of object detection, training variance networks with negative log likelihood (NLL) can lead to high entropy predictive distributions regardless of the correctness of the output mean. We propose to use the energy score as a non-local proper scoring rule and find that when used for training, the energy score leads to better calibrated and lower entropy predictive distributions than NLL. We also address the widespread use of non-proper scoring metrics for evaluating predictive distributions from deep object detectors by proposing an alternate evaluation approach founded on proper scoring rules. Using the proposed evaluation tools, we show that although variance networks can be used to produce high quality predictive distributions, ad-hoc approaches used by seminal object detectors for choosing regression targets during training do not provide wide enough data support for reliable variance learning. We hope that our work helps shift evaluation in probabilistic object detection to better align with predictive uncertainty evaluation in other machine learning domains. Code for all models, evaluation, and datasets is available at: https://github.com/asharakeh/probdet.git. | poster-presentations | The initial reviews were mixed (2 positive, 2 negative). The main concerns were about presentation issues (unclear contribution or main point, unclear analysis of figures, missing motivation for selecting object detectors, etc.). On the other hand, reviewers appreciated the well-formulated paper, analysis and recommendations from the experiments.
The author response addressed the presentation issues and added further motivation and clarifications. In the end, all reviewers recommended acceptance.
"IaB3dspX9Q",
"wCZDOocCMV",
"pqlF9cDasl",
"0srJQNooB38",
"pzxn5OzEKcZ",
"78Uxps63CFV",
"4FvYtGctrnQ",
"-byXOBpQQSf",
"s6jhHhcIZHw",
"927zFaqbF5M",
"8mQWToKAii",
"algJiWi46Ca",
"QaZcLxgiPJs"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper explores the predictive uncertainty estimation problem in object detection. They observe that the commonly used NLL loss leads to high entropy predictive distributions but regardless of the correctness of the output mean. Instead, they use energy score as a non-local proper scoring rule. They also propo... | [
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
9
] | [
2,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5
] | [
"iclr_2021_YLewtnvKgR7",
"4FvYtGctrnQ",
"iclr_2021_YLewtnvKgR7",
"78Uxps63CFV",
"IaB3dspX9Q",
"-byXOBpQQSf",
"927zFaqbF5M",
"pqlF9cDasl",
"algJiWi46Ca",
"QaZcLxgiPJs",
"pzxn5OzEKcZ",
"iclr_2021_YLewtnvKgR7",
"iclr_2021_YLewtnvKgR7"
] |
iclr_2021_Y9McSeEaqUh | Predicting Classification Accuracy When Adding New Unobserved Classes | Multiclass classifiers are often designed and evaluated only on a sample from the classes on which they will eventually be applied. Hence, their final accuracy remains unknown. In this work we study how a classifier’s performance over the initial class sample can be used to extrapolate its expected accuracy on a larger, unobserved set of classes. For this, we define a measure of separation between correct and incorrect classes that is independent of the number of classes: the "reversed ROC" (rROC), which is obtained by replacing the roles of classes and data-points in the common ROC. We show that the classification accuracy is a function of the rROC in multiclass classifiers, for which the learned representation of data from the initial class sample remains unchanged when new classes are added. Using these results we formulate a robust neural-network-based algorithm, "CleaneX", which learns to estimate the accuracy of such classifiers on arbitrarily large sets of classes. Unlike previous methods, our method uses both the observed accuracies of the classifier and densities of classification scores, and therefore achieves remarkably better predictions than current state-of-the-art methods on both simulations and real datasets of object detection, face recognition, and brain decoding. | poster-presentations | The paper is very clear. It provides a good overview of the problem, making it easy to follow even for researchers outside the area.
This work provides a novel approach for extrapolating the expected accuracy on a larger set of classes from a training set with a smaller number of classes, offering a creative, simple, and elegant solution based on the reversed ROC. Such an approach will be useful for extreme classification settings. In real-world settings, classifiers are often trained on a pilot set of data and then deployed where the set of classes is much larger. It is useful to have a mechanism to estimate how the classification performance will change with a larger number of classes.
The reviewers all agree that this work provides a novel contribution to predicting classification accuracy. The authors have satisfactorily addressed the reviewers’ comments and provided sufficient clarification to the questions. We also appreciate the edits that the authors have made.
| train | [
"N7fVz0OHN0",
"-ofh1dwcW8x",
"jQkSZiNVk4",
"vRQGyC2RNj",
"UhsGBUih664",
"N4ZRLWLW-Ld",
"JteZXHKkZaN"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors show a relationship between classification accuracy and reverse ROC in multiclass classifiers, when there are new classes not seen in the training data. They propose a method called CleaneX that learns to estimate the accuracy of multiclass classifiers on arbitrarily large sets of classes. \n\nMajor Co... | [
6,
-1,
-1,
-1,
-1,
6,
6
] | [
4,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2021_Y9McSeEaqUh",
"iclr_2021_Y9McSeEaqUh",
"N4ZRLWLW-Ld",
"N7fVz0OHN0",
"JteZXHKkZaN",
"iclr_2021_Y9McSeEaqUh",
"iclr_2021_Y9McSeEaqUh"
] |
iclr_2021_POWv6hDd9XH | BRECQ: Pushing the Limit of Post-Training Quantization by Block Reconstruction | We study the challenging task of neural network quantization without end-to-end retraining, called Post-training Quantization (PTQ). PTQ usually requires a small subset of training data but produces less powerful quantized models than Quantization-Aware Training (QAT). In this work, we propose a novel PTQ framework, dubbed BRECQ, which pushes the limits of bitwidth in PTQ down to INT2 for the first time. BRECQ leverages the basic building blocks in neural networks and reconstructs them one-by-one. In a comprehensive theoretical study of the second-order error, we show that BRECQ achieves a good balance between cross-layer dependency and generalization error. To further employ the power of quantization, the mixed precision technique is incorporated in our framework by approximating the inter-layer and intra-layer sensitivity. Extensive experiments on various handcrafted and searched neural architectures are conducted for both image classification and object detection tasks. And for the first time we prove that, without bells and whistles, PTQ can attain 4-bit ResNet and MobileNetV2 comparable with QAT and enjoy 240 times faster production of quantized models. Codes are available at https://github.com/yhhhli/BRECQ. | poster-presentations | This paper proposes a new method for post-training quantization, achieving very good results. After the author's response, all the reviewers were positive. There were some issues regarding clarity, and about explaining why the methods work better than just optimizing the loss, but I think the reviewers were eventually satisfied. Following some info after the author's response phase, I'll just ask the authors to verify their published code works with publicly available PyTorch packages, so their method could be easily used. | train | [
"2YRPVayIn3p",
"6Ls9AazoGvv",
"QeeBQa6GKm",
"RP5GfbHbl2",
"98N-WHCgj_i",
"4BGWo6ALMYv",
"uIRK_CKvQQh",
"t5sQGu_o-gB",
"ZfJSFAzi75H",
"P_FYDuSYqiX",
"oOg-qWXmFoe",
"T2_S8rdLTK8"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes BRECQ which is a new Post Training Quantization (PTQ) method. The goal of the paper is to push the limit of PTQ to low bit precision (INT2). They try to address this by considering both inter and intra-layer sensitivity to find the best update to the model parameters so that the output from a b... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
1
] | [
"iclr_2021_POWv6hDd9XH",
"QeeBQa6GKm",
"98N-WHCgj_i",
"P_FYDuSYqiX",
"2YRPVayIn3p",
"2YRPVayIn3p",
"oOg-qWXmFoe",
"T2_S8rdLTK8",
"iclr_2021_POWv6hDd9XH",
"iclr_2021_POWv6hDd9XH",
"iclr_2021_POWv6hDd9XH",
"iclr_2021_POWv6hDd9XH"
] |
iclr_2021_ixpSxO9flk3 | No MCMC for me: Amortized sampling for fast and stable training of energy-based models | Energy-Based Models (EBMs) present a flexible and appealing way to represent uncertainty. Despite recent advances, training EBMs on high-dimensional data remains a challenging problem as the state-of-the-art approaches are costly, unstable, and require considerable tuning and domain expertise to apply successfully. In this work, we present a simple method for training EBMs at scale which uses an entropy-regularized generator to amortize the MCMC sampling typically used in EBM training. We improve upon prior MCMC-based entropy regularization methods with a fast variational approximation. We demonstrate the effectiveness of our approach by using it to train tractable likelihood models. Next, we apply our estimator to the recently proposed Joint Energy Model (JEM), where we match the original performance with faster and stable training. This allows us to extend JEM models to semi-supervised classification on tabular data from a variety of continuous domains. | poster-presentations | The authors proposed to train an energy-based model with hierarchical
variational approximations. The entropy can be tricky in hierarchical variational approximations. The authors suggest using the auxiliary samples to guide importance sampling to compute the gradient of the entropy. They evaluate their approach on a slew of models. The idea is straightforward and could potentially be applied to other hierarchical variational models outside of the energy-based model setting. The authors were responsive and clarified many aggressive questions. I'd ask the authors to clean up two things:
- Equation 8 would be easier to follow if it kept the expectation from Equation 6; as written, z_0 seems to materialize out of thin air.
- A more detailed discussion of when the proposal is good and what could be missed when relying on the generated z to center the proposal | test | [
"KJEb6ZD1cHv",
"d5eBVKdgkuH",
"sXFEfnVd9-v",
"TLkjVmO7rXD",
"W9hqQNeNfEH",
"LsmWWBSAoH9",
"IDPXKRxmg9x",
"7qveH-NCGs",
"WhDI2jbyfrd",
"iEQndg5hWi5",
"WjuK_QrW2F0",
"4cFsIrg_mWp",
"AItsMU1Q5h3",
"yHxeXYAnbs4",
"AoxFnFdiLl-",
"ZRvafFPkR_L",
"djYzxDhtRQj",
"JAWjwnUAVx6",
"MJUZnAmjNH... | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"public",
"author",
"author",
"public",
"author",
"author",
"author",
"author",
"official_reviewer",
"public",
... | [
"This paper proposes an improved algorithm to train EBM-based models, called Variational Entropy Regularized Approximate maximum likelihood. The basic idea is to formulate the intractable partition function as an optimization problem with an additional entropy term. To estimate the gradient of the entropy term, the... | [
4,
7,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2021_ixpSxO9flk3",
"iclr_2021_ixpSxO9flk3",
"TLkjVmO7rXD",
"W9hqQNeNfEH",
"LsmWWBSAoH9",
"Bcm_1YymGY2",
"iclr_2021_ixpSxO9flk3",
"d5eBVKdgkuH",
"IDPXKRxmg9x",
"KJEb6ZD1cHv",
"4cFsIrg_mWp",
"iclr_2021_ixpSxO9flk3",
"C4VuDibwoD",
"AoxFnFdiLl-",
"iclr_2021_ixpSxO9flk3",
"KJEb6ZD1cHv... |
iclr_2021_jLoC4ez43PZ | GraphCodeBERT: Pre-training Code Representations with Data Flow | Pre-trained models for programming language have achieved dramatic empirical improvements on a variety of code-related tasks such as code search, code completion, code summarization, etc. However, existing pre-trained models regard a code snippet as a sequence of tokens, while ignoring the inherent structure of code, which provides crucial code semantics and would enhance the code understanding process. We present GraphCodeBERT, a pre-trained model for programming language that considers the inherent structure of code. Instead of taking syntactic-level structure of code like abstract syntax tree (AST), we use data flow in the pre-training stage, which is a semantic-level structure of code that encodes the relation of "where-the-value-comes-from" between variables. Such a semantic-level structure is neat and does not bring an unnecessarily deep hierarchy of AST, the property of which makes the model more efficient. We develop GraphCodeBERT based on Transformer. In addition to using the task of masked language modeling, we introduce two structure-aware pre-training tasks. One is to predict code structure edges, and the other is to align representations between source code and code structure. We implement the model in an efficient way with a graph-guided masked attention function to incorporate the code structure. We evaluate our model on four tasks, including code search, clone detection, code translation, and code refinement. Results show that code structure and newly introduced pre-training tasks can improve GraphCodeBERT and achieves state-of-the-art performance on the four downstream tasks. We further show that the model prefers structure-level attentions over token-level attentions in the task of code search. | poster-presentations | This paper proposes a simple extension to BERT-like pre-training for source code models, which allows incorporation of data flow information. This is a new way of incorporating code structural information into models, and it appears practical and effective. Reviewers are all in favor of accepting the paper. | train | [
"O-8pm3aJHqe",
"oSpWBGN1sa",
"L2cC6JOSgvW",
"1dOf2lr8ueO",
"ekQ5hQEgDPs",
"2KLdo-CCc1p",
"EtePPHYyFEI",
"bj8YqWK2Wf2",
"HQq2rKwSkFs"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"### Respond to comments:\n\n1. We use the byte-pair encoding (BPE) method [1] to tokenize variable names, e.g. “x1” and “max_value” will be tokenized to [‘Ġx’,’1’] and [‘Ġmax’, ‘_’, ‘value’] where ‘Ġ’ is the special token to represent the beginning sub-token of variable names.\n\n [1] Rico Sennrich, Barry Hadd... | [
-1,
-1,
-1,
-1,
-1,
6,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
3,
5,
3,
3
] | [
"EtePPHYyFEI",
"HQq2rKwSkFs",
"iclr_2021_jLoC4ez43PZ",
"2KLdo-CCc1p",
"bj8YqWK2Wf2",
"iclr_2021_jLoC4ez43PZ",
"iclr_2021_jLoC4ez43PZ",
"iclr_2021_jLoC4ez43PZ",
"iclr_2021_jLoC4ez43PZ"
] |
iclr_2021_iaO86DUuKi | Conservative Safety Critics for Exploration | Safe exploration presents a major challenge in reinforcement learning (RL): when active data collection requires deploying partially trained policies, we must ensure that these policies avoid catastrophically unsafe regions, while still enabling trial and error learning. In this paper, we target the problem of safe exploration in RL, by learning a conservative safety estimate of environment states through a critic, and provably upper bound the likelihood of catastrophic failures at every training iteration. We theoretically characterize the tradeoff between safety and policy improvement, show that the safety constraints are satisfied with high probability during training, derive provable convergence guarantees for our approach which is no worse asymptotically then standard RL, and empirically demonstrate the efficacy of the proposed approach on a suite of challenging navigation, manipulation, and locomotion tasks. Our results demonstrate that the proposed approach can achieve competitive task performance, while incurring significantly lower catastrophic failure rates during training as compared to prior methods. Videos are at this URL https://sites.google.com/view/conservative-safety-critics/ | poster-presentations | Summary:
This paper introduces a different, interesting definition of safety in RL. The paper does a nice job of showing success with empirical results and providing bounds. I think it provides a nice contribution to the field.
Discussion:
The reviewers agree this paper should be accepted. The initial points brought up against the paper have been successfully addressed or mitigated. | train | [
"7DFZHSfcklP",
"1BvX86TvZ0",
"OUch0Lo3nPc",
"oAW7FBIr86",
"tcKVVjHCzxK",
"c69c96bSs7t",
"auERtJbNjUl",
"LqMWeV5v8Yu",
"ePd-XSQ7FZn",
"DrqQt-PLqGp",
"twxIgdf-rBV",
"IljEtrRDBjr",
"b-Sw0_VZQR5",
"y1BJosqTfG",
"rsCLmqtC4cm",
"uabc92i34mH"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper would like to address the problem of ``\"safe exploration\" with a conservative estimation of the environment. Although the problem seems reasonable, I have the following several concerns on this paper:\n\n- Will the safety constraints be revealed to the agent? Standard RL assumes the reward is not reve... | [
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
3,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3
] | [
"iclr_2021_iaO86DUuKi",
"iclr_2021_iaO86DUuKi",
"oAW7FBIr86",
"tcKVVjHCzxK",
"ePd-XSQ7FZn",
"1BvX86TvZ0",
"7DFZHSfcklP",
"1BvX86TvZ0",
"1BvX86TvZ0",
"rsCLmqtC4cm",
"uabc92i34mH",
"7DFZHSfcklP",
"7DFZHSfcklP",
"7DFZHSfcklP",
"iclr_2021_iaO86DUuKi",
"iclr_2021_iaO86DUuKi"
] |
iclr_2021_uKhGRvM8QNH | Improve Object Detection with Feature-based Knowledge Distillation: Towards Accurate and Efficient Detectors | Knowledge distillation, in which a student model is trained to mimic a teacher model, has been proved as an effective technique for model compression and model accuracy boosting. However, most knowledge distillation methods, designed for image classification, have failed on more challenging tasks, such as object detection. In this paper, we suggest that the failure of knowledge distillation on object detection is mainly caused by two reasons: (1) the imbalance between pixels of foreground and background and (2) lack of distillation on the relation between different pixels. Observing the above reasons, we propose attention-guided distillation and non-local distillation to address the two problems, respectively. Attention-guided distillation is proposed to find the crucial pixels of foreground objects with attention mechanism and then make the students take more effort to learn their features. Non-local distillation is proposed to enable students to learn not only the feature of an individual pixel but also the relation between different pixels captured by non-local modules. Experiments show that our methods achieve excellent AP improvements on both one-stage and two-stage, both anchor-based and anchor-free detectors. For example, Faster RCNN (ResNet101 backbone) with our distillation achieves 43.9 AP on COCO2017, which is 4.1 higher than the baseline. Codes have been released on Github. | poster-presentations | After the rebuttal stage, all reviewers lean positive (in final scores and/or in comments during the discussion phase). The AC found no reason to disagree. The benefit of the proposed method is demonstrated in many diverse settings, and the authors argue novelty in that no prior work addresses both fg/bg imbalance and relation distillation. | train | [
"kbedPNPDFwa",
"_B-C27rmvO1",
"AfwSl1Z3sS",
"woshgYbs-FA",
"fBIfBEmWHxm",
"ob0v3MR2Dj"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"Pros:\n\n- The different attention techniques seem to consistently improve object detectors across different models. \n- The ablation studies are important in showing the advantage and impact of each proposed module. \n- Please clarify if the student models start from random weights or are initialized after the te... | [
6,
6,
-1,
-1,
-1,
7
] | [
4,
4,
-1,
-1,
-1,
5
] | [
"iclr_2021_uKhGRvM8QNH",
"iclr_2021_uKhGRvM8QNH",
"kbedPNPDFwa",
"_B-C27rmvO1",
"ob0v3MR2Dj",
"iclr_2021_uKhGRvM8QNH"
] |
iclr_2021_whE31dn74cL | A Temporal Kernel Approach for Deep Learning with Continuous-time Information | Sequential deep learning models such as RNN, causal CNN and attention mechanism do not readily consume continuous-time information. Discretizing the temporal data, as we show, causes inconsistency even for simple continuous-time processes. Current approaches often handle time in a heuristic manner to be consistent with the existing deep learning architectures and implementations. In this paper, we provide a principled way to characterize continuous-time systems using deep learning tools. Notably, the proposed approach applies to all the major deep learning architectures and requires little modification to the implementation. The critical insight is to represent the continuous-time system by composing neural networks with a temporal kernel, where we gain our intuition from the recent advancements in understanding deep learning with Gaussian process and neural tangent kernel. To represent the temporal kernel, we introduce the random feature approach and convert the kernel learning problem to spectral density estimation under reparameterization. We further prove the convergence and consistency results even when the temporal kernel is non-stationary, and the spectral density is misspecified. The simulations and real-data experiments demonstrate the empirical effectiveness of our temporal kernel approach in a broad range of settings. | poster-presentations | This paper presents a novel approach for integrating time into deep neural network models based on the Gaussian process limit view of a neural network model. Specifically, the approach augments an a-temporal neural network designed to process a single time point with a temporal kernel that relates data points across time. The composition of the a-temporal neural network kernel with the temporal kernel is accomplished efficiently using a random features representation of the temporal kernel. The authors propose to represent the temporal kernel via its spectral decomposition, which makes the approach quite flexible. Learning leverages re-parameterization. While random features have been used to approximate temporal kernels in prior work [1], the approach in this paper is significantly more general in that it can be composed with any a-temporal deep architecture and the authors show results for RNNs, CNNs, and attention-based models. The predictive performance of the approach also appears to be consistently better than baselines and it works particularly well on the challenging case of irregularly sampled data.
In terms of weaknesses, the reviewers had a number of questions about the paper. The authors updated the paper to include more recent models such as ODE-RNNs. This material is currently presented in the appendices and needs to be moved into the main paper. Several of the reviewers also had technical questions that are in fact addressed in the manuscript; however, the authors rely heavily on the appendices to present many important details, and the paper is currently over 30 pages long. The frequent references to the appendix for additional details make the paper a challenging read. The authors have already done some work to address clarity by adding a new figure, but should prioritize moving additional key details into the main body of the paper to improve readability.
[1] http://auai.org/uai2015/proceedings/papers/41.pdf | train | [
"ZKTMBwzvLav",
"LksBrZh5pD1",
"9ksVwZW1qCy",
"Lsjkk5joWYZ",
"m_S6cffD5Es",
"0IEeBqvnAL",
"6wT2Q-vIPSD",
"9YipEv-204i",
"IC8nQwTunHp"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"##### Post-rebuttal update\n\nI've read the rebuttal and updated my score.\n\n---------------\n\nThis paper proposes a deep learning model for incorporating temporal information by composing the NN-GP kernel and a temporal stationary kernel through a product. The temporal stationary kernel is represented using its... | [
6,
7,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
3,
3,
-1,
-1,
-1,
-1,
-1,
2,
2
] | [
"iclr_2021_whE31dn74cL",
"iclr_2021_whE31dn74cL",
"0IEeBqvnAL",
"IC8nQwTunHp",
"9YipEv-204i",
"LksBrZh5pD1",
"ZKTMBwzvLav",
"iclr_2021_whE31dn74cL",
"iclr_2021_whE31dn74cL"
] |
iclr_2021_Srmggo3b3X6 | For self-supervised learning, Rationality implies generalization, provably | We prove a new upper bound on the generalization gap of classifiers that are obtained by first using self-supervision to learn a representation r of the training data, and then fitting a simple (e.g., linear) classifier g to the labels. Specifically, we show that (under the assumptions described below) the generalization gap of such classifiers tends to zero if C(g)≪n, where C(g) is an appropriately-defined measure of the simple classifier g's complexity, and n is the number of training samples. We stress that our bound is independent of the complexity of the representation r.
We do not make any structural or conditional-independence assumptions on the representation-learning task, which can use the same training dataset that is later used for classification. Rather, we assume that the training procedure satisfies certain natural noise-robustness (adding a small amount of label noise causes a small degradation in performance) and rationality (getting the wrong label is not better than getting no label at all) conditions that widely hold across many standard architectures.
We also conduct an extensive empirical study of the generalization gap and the quantities used in our assumptions for a variety of self-supervision based algorithms, including SimCLR, AMDIM and BigBiGAN, on the CIFAR-10 and ImageNet datasets. We show that, unlike standard supervised classifiers, these algorithms display a small generalization gap, and the bounds we prove on this gap are often non-vacuous. | poster-presentations | The paper offers a new take on generalization, motivated by the empirical success of self-supervised learning. Two reviewers found the contribution novel and interesting, and recommended acceptance (with one reviewer championing it). Two reviewers remain skeptical about the value of the paper, and the authors are encouraged to add a discussion about the points made in these reviews.
I agree with the positive reviewers and would like to recommend acceptance. | train | [
"RELWFkTuRLr",
"WOmE58coiZ-",
"hsh1OzdktLA",
"duycoU77MjM",
"ybQd4nZvxw5",
"v3uxCNzhtD",
"yCty4nVSs7",
"MdtYCLpPF8a",
"OaKprR-2VC",
"chMnpXRQjss",
"LNl_1HjhMUD"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The present paper aims to understand the generalization capability of self-supervised learning algorithms that fine-tune a simple linear classifier to the labels. Analyzing generalization in this case is challenging due to a data re-use problem: the same training data that is used for self-supervised learning is a... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"iclr_2021_Srmggo3b3X6",
"hsh1OzdktLA",
"duycoU77MjM",
"OaKprR-2VC",
"chMnpXRQjss",
"RELWFkTuRLr",
"LNl_1HjhMUD",
"iclr_2021_Srmggo3b3X6",
"iclr_2021_Srmggo3b3X6",
"iclr_2021_Srmggo3b3X6",
"iclr_2021_Srmggo3b3X6"
] |
iclr_2021_Wi5KUNlqWty | How to Find Your Friendly Neighborhood: Graph Attention Design with Self-Supervision | Attention mechanism in graph neural networks is designed to assign larger weights to important neighbor nodes for better representation. However, what graph attention learns is not understood well, particularly when graphs are noisy. In this paper, we propose a self-supervised graph attention network (SuperGAT), an improved graph attention model for noisy graphs. Specifically, we exploit two attention forms compatible with a self-supervised task to predict edges, whose presence and absence contain the inherent information about the importance of the relationships between nodes. By encoding edges, SuperGAT learns more expressive attention in distinguishing mislinked neighbors. We find two graph characteristics influence the effectiveness of attention forms and self-supervision: homophily and average degree. Thus, our recipe provides guidance on which attention design to use when those two graph characteristics are known. Our experiment on 17 real-world datasets demonstrates that our recipe generalizes across 15 datasets of them, and our models designed by recipe show improved performance over baselines. | poster-presentations | Two reviewers are very positive about this paper and recommend acceptance, one indicates rejection and one is on the fence. Although all referees appreciate the extensive experiments and analysis presented in the paper, their main concerns are related to the limited superiority of the method wrt state of the art [R1], seemingly arbitrary choices and questionable assumptions [R4]. The rebuttal adequately addresses R1's concerns by highlighting statistical significance of the results, and partially covers R4's concerns. Although the proposed approach may be perceived as incremental [R1, R2, R3, R4], the authors argue that introducing self-supervision to graph attention is not trivial, and emphasize their findings on how/when this is beneficial. Moreover, R2 and R3 acknowledge that the contribution of the paper holds promise, is worth exploring, and may be useful to the research community. Most reviewers are satisfied with the answers in the rebuttal. After discussion, three referees lean towards acceptance and the fourth reviewer does not oppose the decision. I agree with their assessment and therefore recommend acceptance. Please do include your comments regarding the choice of average degree and homophily in the final version of paper. | train | [
"TX4JV7-EeLC",
"nYsyAOUSMI6",
"iMkhL_qmEBe",
"efk4WS9qL7",
"cLmMGAi42rT",
"4HdyYiYniKR",
"XV-2nCSD0zx",
"bxd5kTltVEU",
"GUGjDIJ7MXm",
"oGOSyw_tZGG"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"********Summary\nIn this paper, they introduced self-supervised graph attention network (SuperGAT), which is claimed to perform well in noisy graphs. They used information in the edges as an indicator of importance of relations in the graph, then they learn the relational importance using self-supervised attention... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8,
4
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"iclr_2021_Wi5KUNlqWty",
"iclr_2021_Wi5KUNlqWty",
"iclr_2021_Wi5KUNlqWty",
"oGOSyw_tZGG",
"TX4JV7-EeLC",
"GUGjDIJ7MXm",
"bxd5kTltVEU",
"iclr_2021_Wi5KUNlqWty",
"iclr_2021_Wi5KUNlqWty",
"iclr_2021_Wi5KUNlqWty"
] |
iclr_2021_DEa4JdMWRHp | Interpretable Models for Granger Causality Using Self-explaining Neural Networks | Exploratory analysis of time series data can yield a better understanding of complex dynamical systems. Granger causality is a practical framework for analysing interactions in sequential data, applied in a wide range of domains. In this paper, we propose a novel framework for inferring multivariate Granger causality under nonlinear dynamics based on an extension of self-explaining neural networks. This framework is more interpretable than other neural-network-based techniques for inferring Granger causality, since in addition to relational inference, it also allows detecting signs of Granger-causal effects and inspecting their variability over time. In comprehensive experiments on simulated data, we show that our framework performs on par with several powerful baseline methods at inferring Granger causality and that it achieves better performance at inferring interaction signs. The results suggest that our framework is a viable and more interpretable alternative to sparse-input neural networks for inferring Granger causality. | poster-presentations | The proposed approach is interesting and is differentiated enough from the recent body of work on Neural Network Granger causal modeling as it offers a mechanism for detecting signs of causality.
The authors have satisfactorily addressed the points raised in the reviews. In particular, the relationship with prior work and the novelty of the contributions are now clearly articulated. The added discussion on the superiority of TCDF on simulated fMRI experiments is insightful. Though prediction error is only a proxy for the task at hand, the readers will appreciate the added evaluation.
The proposed approach to stability evaluation leveraging the time-reversal trick is novel and particularly pertinent, and could motivate some interesting follow-up work on this topic. It is also important that the authors have characterized the computational advantage of the approach. | test | [
"wXFx_7a0zZt",
"3KTLLIX3Ud_",
"438GP39Li8o",
"9C_ZDPTY0n",
"v4l93UBN_yX",
"NHF61pmZDjF",
"ZwQSo_1tTg-",
"RwfUDpBs0W",
"2Gkzc8Cp-H9",
"h2zsAi3jtqv",
"khz1rBADi01",
"GL25BZf1vqH",
"cre2OujuVuo",
"bEiOLmRcWok"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"# Introduction\n\nThe introduction is solid. My comment here is that there ought to be more discussion on sign detection. You leave even the definition until section 2.1, where a lay reader may struggle to understand what you are talking about if they are merely skimming your paper (as many do). Hence, for readabi... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
8,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"iclr_2021_DEa4JdMWRHp",
"438GP39Li8o",
"khz1rBADi01",
"ZwQSo_1tTg-",
"bEiOLmRcWok",
"iclr_2021_DEa4JdMWRHp",
"RwfUDpBs0W",
"wXFx_7a0zZt",
"GL25BZf1vqH",
"cre2OujuVuo",
"v4l93UBN_yX",
"iclr_2021_DEa4JdMWRHp",
"iclr_2021_DEa4JdMWRHp",
"iclr_2021_DEa4JdMWRHp"
] |
iclr_2021_-QxT4mJdijq | Meta-learning Symmetries by Reparameterization | Many successful deep learning architectures are equivariant to certain transformations in order to conserve parameters and improve generalization: most famously, convolution layers are equivariant to shifts of the input. This approach only works when practitioners know the symmetries of the task and can manually construct an architecture with the corresponding equivariances. Our goal is an approach for learning equivariances from data, without needing to design custom task-specific architectures. We present a method for learning and encoding equivariances into networks by learning corresponding parameter sharing patterns from data. Our method can provably represent equivariance-inducing parameter sharing for any finite group of symmetry transformations. Our experiments suggest that it can automatically learn to encode equivariances to common transformations used in image processing tasks. | poster-presentations | The paper proposes an approach to meta-learning symmetries. While several approaches have recently emerged with similar goals, and sometimes greater convenience and empirical performance, the proposed approach has some interesting characteristics, such as changing properties of the architecture to extrapolate these symmetries. There was quite a spread of opinions about the paper, the empirical results were not strong, and updates to the paper focused on helpful text additions, but did not substantively improve the evaluation or experiments. Notwithstanding, the paper is conceptually interesting, there are no major flaws, and there is sufficient support for it. | train | [
"QeYEbyGmlh7",
"cmJiSMaf1ig",
"KqBcgaD4iNq",
"1EPoTonKzTR",
"lai5OwCgJhL",
"JAEVlBEtxsc",
"5vFLQLQhYbp",
"I7DBClaAfvC",
"LJ-ldIPj5Ui",
"TEcMu9lvXcD",
"QbagW2O08yI",
"noutxvzC6hP"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"In this paper, the authors propose MSR, a parametrization of convolutional kernels that allows for meta-learning symmetries shared between several tasks. Each kernel is represented as a product of a structure matrix and a vector of the kernel weights. The kernel weights are updated during the inner loop. The stru... | [
9,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2021_-QxT4mJdijq",
"iclr_2021_-QxT4mJdijq",
"1EPoTonKzTR",
"lai5OwCgJhL",
"JAEVlBEtxsc",
"I7DBClaAfvC",
"cmJiSMaf1ig",
"QbagW2O08yI",
"noutxvzC6hP",
"QeYEbyGmlh7",
"iclr_2021_-QxT4mJdijq",
"iclr_2021_-QxT4mJdijq"
] |
iclr_2021_eIHYL6fpbkA | Removing Undesirable Feature Contributions Using Out-of-Distribution Data | Several data augmentation methods deploy unlabeled-in-distribution (UID) data to bridge the gap between the training and inference of neural networks. However, these methods have clear limitations in terms of availability of UID data and dependence of algorithms on pseudo-labels. Herein, we propose a data augmentation method to improve generalization in both adversarial and standard learning by using out-of-distribution (OOD) data that are devoid of the abovementioned issues. We show how to improve generalization theoretically using OOD data in each learning scenario and complement our theoretical analysis with experiments on CIFAR-10, CIFAR-100, and a subset of ImageNet. The results indicate that undesirable features are shared even among image data that seem to have little correlation from a human point of view. We also present the advantages of the proposed method through comparison with other data augmentation methods, which can be used in the absence of UID data. Furthermore, we demonstrate that the proposed method can further improve the existing state-of-the-art adversarial training. | poster-presentations | This paper studies the effect of using unlabelled out-of-distribution (OOD) data in the training procedure to improve robust (and standard) accuracies. The main algorithmic contribution is a data-augmentation based robust training algorithm to train a loss which is carefully designed to benefit from the additional OOD data. What's also interesting is that the OOD data is fed with random labels to the training procedure. As demonstrated in the theoretical results, this way of feeding OOD data helps to remove the dependency to non-robust features and hence improves robustness.
As pointed out by all the reviewers (and I agree), the idea of using unlabelled OOD data during training is novel and interesting, and the paper also shows how this can be done algorithmically. The numerical results also confirm the effectiveness of the proposed methods. | train | [
"Wnj4fkb9zxo",
"PaYewOuqTdI",
"qRDQ5cQ3QqF",
"kmpGkSmPBXb",
"e4SD6dxcQiT",
"18gEqUccMDk",
"qdqWjsY6J0_",
"fMx6_9mW71K",
"QRenmeVMn7q",
"fzyC6jlF_jE",
"GGnQB44chM7",
"DgcQLZepC6v"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a new data augmentation method that utilizes out-of-distribution data for enhancing generalizability for both supervised and adversarial learning. While most of existing data augmentation methods explore auxiliary unlabeled in-distribution data, this paper tries to leverage out-of-distribution ... | [
6,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
"iclr_2021_eIHYL6fpbkA",
"iclr_2021_eIHYL6fpbkA",
"18gEqUccMDk",
"e4SD6dxcQiT",
"QRenmeVMn7q",
"PaYewOuqTdI",
"Wnj4fkb9zxo",
"DgcQLZepC6v",
"fzyC6jlF_jE",
"GGnQB44chM7",
"iclr_2021_eIHYL6fpbkA",
"iclr_2021_eIHYL6fpbkA"
] |
iclr_2021_a2gqxKDvYys | Mind the Gap when Conditioning Amortised Inference in Sequential Latent-Variable Models | Amortised inference enables scalable learning of sequential latent-variable models (LVMs) with the evidence lower bound (ELBO). In this setting, variational posteriors are often only partially conditioned. While the true posteriors depend, e.g., on the entire sequence of observations, approximate posteriors are only informed by past observations. This mimics the Bayesian filter---a mixture of smoothing posteriors. Yet, we show that the ELBO objective forces partially-conditioned amortised posteriors to approximate products of smoothing posteriors instead. Consequently, the learned generative model is compromised. We demonstrate these theoretical findings in three scenarios: traffic flow, handwritten digits, and aerial vehicle dynamics. Using fully-conditioned approximate posteriors, performance improves in terms of generative modelling and multi-step prediction. | poster-presentations | The paper studies how suboptimal conditioning sets create
suboptimal variational approximations in variational inference with amortization in state space models.
While the point made about the role of the conditioning set is not a new one, it is carried further and more clearly in this paper than in previous works. Addressing a couple of issues would make the paper stronger:
- Really pinning down in the experiments for which models/data the "full" approach adds value would provide concrete guidance to the community.
- Notation choices in the paper are rough. For example, Appendix A.2 reads like a type mismatch, since the w on the left is a function of z but is also equal to a function of z and C.
- Adding a more detailed description of the complement of C in the main text | train | [
"n-xGnha8jyy",
"z2EveTG5Cjy",
"4SiYSFzX3r0",
"UfYS87An3Gd",
"g6Gai5ASnYq",
"PTVlliGXV3",
"PnNNT1t20PG",
"ZOeV3kdLIq5",
"jYWYmqr_Usk",
"3aW-a1yPE6x",
"fd6aa0I-ns",
"QO_JhhZfI9W",
"YkO3GmSn5Q3"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"I should add: I remain in favor of acceptance.",
"Regarding DKF: I think it does more than 'hint'- it plainly explains how to factorize the posterior. It may not do so in the experiments but the algorithmic contribution of a work can be and often is more than what accompanying experiments suggest.\nI think the r... | [
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"z2EveTG5Cjy",
"3aW-a1yPE6x",
"iclr_2021_a2gqxKDvYys",
"jYWYmqr_Usk",
"ZOeV3kdLIq5",
"iclr_2021_a2gqxKDvYys",
"fd6aa0I-ns",
"QO_JhhZfI9W",
"4SiYSFzX3r0",
"YkO3GmSn5Q3",
"iclr_2021_a2gqxKDvYys",
"iclr_2021_a2gqxKDvYys",
"iclr_2021_a2gqxKDvYys"
] |
iclr_2021_0IO5VdnSAaH | On the Universality of the Double Descent Peak in Ridgeless Regression | We prove a non-asymptotic distribution-independent lower bound for the expected mean squared generalization error caused by label noise in ridgeless linear regression. Our lower bound generalizes a similar known result to the overparameterized (interpolating) regime. In contrast to most previous works, our analysis applies to a broad class of input distributions with almost surely full-rank feature matrices, which allows us to cover various types of deterministic or random feature maps. Our lower bound is asymptotically sharp and implies that in the presence of label noise, ridgeless linear regression does not perform well around the interpolation threshold for any of these feature maps. We analyze the imposed assumptions in detail and provide a theory for analytic (random) feature maps. Using this theory, we can show that our assumptions are satisfied for input distributions with a (Lebesgue) density and feature maps given by random deep neural networks with analytic activation functions like sigmoid, tanh, softplus or GELU. As further examples, we show that feature maps from random Fourier features and polynomial kernels also satisfy our assumptions. We complement our theory with further experimental and analytic results. | poster-presentations | This paper shows that the double descent phenomenon of ridgeless regression appears under considerably general settings of the input distributions by showing a lower bound of the excess risk. The analysis covers various types of input distributions including deterministic and random feature maps and its asymptotic sharpness is also shown.
One reviewer raised a concern about its novelty compared with existing work, but the authors properly clarified the novelty in the rebuttal and the updated version of the manuscript. Although there were some other minor concerns, the reviewers all agree that this paper gives a valuable theoretical result supporting the universality of the double descent phenomenon. I also concur with this assessment. I think this is a solid theoretical paper giving an informative result as a piece of research on double descent. Thus, I would recommend acceptance of this paper. | train | [
"SDvSLJGlwB9",
"UoAyPlNID5v",
"k2NnJt7blcQ",
"9rpNt62CnMv",
"Rlv1KstfLZf",
"sineSyi6v1X",
"XMMqP6uQBss",
"BsZt3HrISIV",
"kHYxLVmN-d-"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper studies the phenomenon of double descent for ridgeless regression. They show that when the label noise in the regression problem is lower bounded, the test error for regression must peak at the interpolation threshold (n=p) before descending again in the over-parameterized regime and that this holds with... | [
7,
-1,
-1,
-1,
-1,
-1,
6,
6,
7
] | [
3,
-1,
-1,
-1,
-1,
-1,
3,
2,
3
] | [
"iclr_2021_0IO5VdnSAaH",
"XMMqP6uQBss",
"BsZt3HrISIV",
"kHYxLVmN-d-",
"SDvSLJGlwB9",
"iclr_2021_0IO5VdnSAaH",
"iclr_2021_0IO5VdnSAaH",
"iclr_2021_0IO5VdnSAaH",
"iclr_2021_0IO5VdnSAaH"
] |
iclr_2021_DNl5s5BXeBn | Fair Mixup: Fairness via Interpolation | Training classifiers under fairness constraints such as group fairness, regularizes the disparities of predictions between the groups. Nevertheless, even though the constraints are satisfied during training, they might not generalize at evaluation time. To improve the generalizability of fair classifiers, we propose fair mixup, a new data augmentation strategy for imposing the fairness constraint. In particular, we show that fairness can be achieved by regularizing the models on paths of interpolated samples between the groups. We use mixup, a powerful data augmentation strategy to generate these interpolates. We analyze fair mixup and empirically show that it ensures a better generalization for both accuracy and fairness measurement in tabular, vision, and language benchmarks. | poster-presentations | # Paper Summary
The goal of this paper is to improve generalization of fairness metrics by borrowing ideas from "mixup", which attempts to improve generalization in the non-fairness setting by introducing convex combinations of training examples as virtual examples.
They adapt this idea by interpolating between protected *groups*, and adding a regularizer that forces the classifier to vary smoothly along this interpolation path. To this end, they show that, for a particular interpolation function, the (empirical) disparity in the fairness metric is upper bounded by their proposed regularizer (which depends on both the fairness metric and the interpolation function). They consider two fairness metrics (disparate impact and equalized odds) and two interpolation functions (convex combinations in the feature space, or in a latent space).
As Reviewer 4 points out, the above is not a complete explanation for why their regularizer works: they've only really shown that it upper bounds the empirical disparity in the fairness metric (and we could have regularized this empirical disparity directly, and indeed they do so, as a baseline, in their experiments). Presumably the intuition is that their regularizer is improving generalization by (implicitly) depending on virtual examples, but this isn't made explicit.
In a "theoretical analysis" section, they give closed form solutions using classification loss, along with L2 regularization and either (i) a regularizer penalizing the true disparity of impact or (ii) their proposed regularizer (which upper bounds the former). Both reviewer 4 and I seem to doubt if this adds much insight (the other reviewers didn't discuss this section).
They close with experiments on Adult, CelebA, and Jigsaw Toxicity, all of which show dramatic performance gains using their regularizer. However, they only compare to one external baseline (adversarial debiasing).
# Pros
1. Reviewers agreed that the paper was well-written
1. The derivation of their regularizer is somewhat complex, but is described step-by-step, and very clearly
1. Adapting mixup to the problem of improving fairness generalization seems natural and intuitive, but this intuition is maybe given short shrift in the later sections
1. Experiments show impressive results
# Cons
1. Reviewer 1 notes that having the expected value of the classification function be equal for both protected groups does not imply fairness, since the classification function would presumably be thresholded to make hard classification decisions
1. Reviewer 4 points out that they do not actually explain why their regularizer will improve generalization better than the "usual" disparity regularizer. Instead, they only show that it upper-bounds the empirical disparity in the fairness metric. Presumably, the intuition is that their mixup regularizer is doing something like adding "virtual samples"
1. I would like to see a more detailed explanation of how their regularizer is implemented, in the main text (they only say that it "can be easily optimized by computing the Jacobian of f on mixup samples")
1. Reviewers 1 and 2 would like more external baselines (there is only one at the moment, "adversarial robustness"), with reviewer 1 suggesting early stopping. The authors added a new early stopping experiment on CelebA to the appendix, but it would be nice to have this baseline included in all experiments in the main text
# Conclusion
Three of the four reviewers recommended acceptance, with the "reject" reviewer scoring it "5: weak reject". This reviewer had three main criticisms: (i) matching expected classification functions is not the same as matching classification *decisions*, (ii) fairness problems might not have a generalization problem to begin with, and (iii) the experiments don't include enough external baselines. I disagree with the second point, but agree with the other two. I think the third is the most critical, since the first could be solved in many cases by e.g. sampling instead of making hard deterministic decisions.
Overall, my opinion is that this is a borderline paper, but that it falls on the "accept" side of the boundary. The idea is intuitive, the exposition is clear, the derivation is quite interesting, and the experimental results are (aside from not having enough baselines) impressive. | train | [
"vbP0DQITaTE",
"fRZfRCSPjzV",
"VBSHruufJe",
"QPliKJw9tA1",
"Gdw1FlnHVe",
"J2QcRCD1sLz",
"FIlPQvcvRS7",
"TXuuoFI2fkG",
"OW_vurCvmQ5",
"WHLna0jP1ho",
"yhRNG1NDk68",
"74Jmt-gLXj",
"IF-ebW-xth",
"LJX4XNAV8vS"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes \"fair mixup\" for training fair classifiers. Inspired by the mixup algorithm, which was presented to improve the generalization performance in Zhang et al., 2018b, fair mixup pick two samples from two different sensitive groups. Instead of regularizing the gap (e.g., \\delta DP), the authors... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"iclr_2021_DNl5s5BXeBn",
"VBSHruufJe",
"TXuuoFI2fkG",
"J2QcRCD1sLz",
"FIlPQvcvRS7",
"WHLna0jP1ho",
"74Jmt-gLXj",
"IF-ebW-xth",
"LJX4XNAV8vS",
"vbP0DQITaTE",
"iclr_2021_DNl5s5BXeBn",
"iclr_2021_DNl5s5BXeBn",
"iclr_2021_DNl5s5BXeBn",
"iclr_2021_DNl5s5BXeBn"
] |
iclr_2021_-bdp_8Itjwp | Self-supervised Learning from a Multi-view Perspective | As a subset of unsupervised representation learning, self-supervised representation learning adopts self-defined signals as supervision and uses the learned representation for downstream tasks, such as object detection and image captioning. Many proposed approaches for self-supervised learning follow naturally a multi-view perspective, where the input (e.g., original images) and the self-supervised signals (e.g., augmented images) can be seen as two redundant views of the data. Building from this multi-view perspective, this paper provides an information-theoretical framework to better understand the properties that encourage successful self-supervised learning. Specifically, we demonstrate that self-supervised learned representations can extract task-relevant information and discard task-irrelevant information. Our theoretical framework paves the way to a larger space of self-supervised learning objective design. In particular, we propose a composite objective that bridges the gap between prior contrastive and predictive learning objectives, and introduce an additional objective term to discard task-irrelevant information. To verify our analysis, we conduct controlled experiments to evaluate the impact of the composite objectives. We also explore our framework's empirical generalization beyond the multi-view perspective, where the cross-view redundancy may not be clearly observed. | poster-presentations | This paper received borderline reviews, but all lean toward acceptance.
The reviews highlighted strengths of the paper, with reviewers noting that they liked the main idea and its mathematical treatment:
* R3: "I liked the abstraction proposed by authors and particularly liked the way authors set up the Definition 1 and analysis afterwards"
* R3 post-discussion: "I recommend accept because authors have a solid theory which would be useful for the self-supervised learning community."
* R4: "This work presents a very detailed theoretical analysis for self-supervised learning objectives. The idea of inverse predictive learning for filtering task irrelevant information is interesting."
* R2: "I like the idea of discarding the redundant task-irrelevant information to improve the self-supervised learning"
However, there was a consensus among reviewers that the experimental validation was weak, both in terms of not showing enough improvement on enough examples and in terms of concerns about the effect of certain hyperparameters:
* R2: "lack of persuasive experiment results to prove the effectiveness of the proposed method. In fig.3, the improvements on two dataset are marginally, which can not convince me. The \lambda (λ_IP) in proposed objective function seems not robust to different datasets, which makes me doubt about the generalization of this method."
* R3: "Ratings can be improved further if authors can relate experimental setup more to the theory which I find slightly disconnected"
* R3 post-discussion: "All reviewers have concerns about lack of solid experimental evidence [...] I can not improve my score further because of weak experimental evidence."
* R1: "The experiments are conducted in a controlled way [...] Traditional uncontrolled experiments [...] are suggested."
* R4: "The variation in the performance shown in Figure 3 is very marginal. [...] Figure 5 a shows some results on Omniglot, but the improvement shown there is very marginal. [...]"
* R4: "weights required for inverse predictive learning in the loss formulation is not trivial. [...] Is there a simple way to determine this weights without exhaustive search on target dataset?"
* R4: "However, it is not clear from the experimental results if this is really effective."
The authors' revisions aim to improve the discussion of the $\lambda_\text{IP}$ parameter.
Given these experimental limitations, my recommendation is for acceptance but with a low confidence score. | train | [
"7dHAGcdt8br",
"zLPNnduMq65",
"fWW7uf4W7pc",
"35TPgdvqUeY",
"htAdHgL90rV",
"Ec3S1bM8MSm",
"EIKDfLpMnsh",
"6basoTDBQss",
"KB0SWOiO6A",
"z3cRCEyQc8e"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank all reviewers for the thoughtful feedback. We have addressed the concerns from the reviewers below and provided the suggested modifications in the revised manuscript and highlight them in red.",
"[Remarks on the formulations of inverse predictive learning and contrastive learning]\n\nWe are happy to tal... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
4,
4
] | [
"iclr_2021_-bdp_8Itjwp",
"EIKDfLpMnsh",
"z3cRCEyQc8e",
"6basoTDBQss",
"6basoTDBQss",
"KB0SWOiO6A",
"iclr_2021_-bdp_8Itjwp",
"iclr_2021_-bdp_8Itjwp",
"iclr_2021_-bdp_8Itjwp",
"iclr_2021_-bdp_8Itjwp"
] |
iclr_2021_IMPA6MndSXU | Integrating Categorical Semantics into Unsupervised Domain Translation | While unsupervised domain translation (UDT) has seen a lot of success recently, we argue that mediating its translation via categorical semantic features could broaden its applicability. In particular, we demonstrate that categorical semantics improves the translation between perceptually different domains sharing multiple object categories. We propose a method to learn, in an unsupervised manner, categorical semantic features (such as object labels) that are invariant of the source and target domains. We show that conditioning the style encoder of unsupervised domain translation methods on the learned categorical semantics leads to a translation preserving the digits on MNIST↔SVHN and to a more realistic stylization on Sketches→Reals. | poster-presentations | This paper studies the problem of unsupervised domain translation. Here translation does not refer to language translation. Instead, it refers to the idea of transferring high-level semantic features. Specifically, the authors look at digit style transfer (between MNIST/postal address numbers and SVHN/street view house numbers) and Sketches to Reals. The visuals look very convincing and the empirical results are strong, too. There is one weaker review, but the authors address the concerns in their response and the reviewer unfortunately did not respond despite prompting. | test | [
"t5uy6ymx2k8",
"0QTBpZviYlm",
"4kw1n-dg1nw",
"T6VR86BBpQE",
"yKhzWcAILMD",
"QImrjRyj1T_",
"pYAr9vBRnwG",
"IP43esCmYOR",
"cLkTyXHyIK1"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for their review and appreciate the constructive comments.\n\nOur framework for learning domain invariant categorical semantics (Section 3.1) is agnostic to the instantiation of each constituent. Moreover, we believe that such instantiation should depend on the particular problem that a pract... | [
-1,
-1,
-1,
-1,
-1,
7,
4,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
2,
4,
4,
3
] | [
"QImrjRyj1T_",
"pYAr9vBRnwG",
"pYAr9vBRnwG",
"IP43esCmYOR",
"cLkTyXHyIK1",
"iclr_2021_IMPA6MndSXU",
"iclr_2021_IMPA6MndSXU",
"iclr_2021_IMPA6MndSXU",
"iclr_2021_IMPA6MndSXU"
] |
iclr_2021_aYuZO9DIdnn | The Unreasonable Effectiveness of Patches in Deep Convolutional Kernels Methods | A recent line of work showed that various forms of convolutional kernel methods can be competitive with standard supervised deep convolutional networks on datasets like CIFAR-10, obtaining accuracies in the range of 87-90% while being more amenable to theoretical analysis. In this work, we highlight the importance of a data-dependent feature extraction step that is key to obtaining good performance in convolutional kernel methods. This step typically corresponds to a whitened dictionary of patches, and gives rise to data-driven convolutional kernel methods. We extensively study its effect, demonstrating that it is the key ingredient for the high performance of these methods. Specifically, we show that one of the simplest instances of such kernel methods, based on a single layer of image patches followed by a linear classifier, already obtains classification accuracies on CIFAR-10 in the same range as previous, more sophisticated convolutional kernel methods. We scale this method to the challenging ImageNet dataset, showing such a simple approach can exceed all existing non-learned representation methods. This is a new baseline for object recognition without representation learning methods, which initiates the investigation of convolutional kernel models on ImageNet. We conduct experiments to analyze the dictionary we used; our ablations show that it exhibits low-dimensional properties. | poster-presentations | This paper studies patch-based convolutional kernels for image classification and finds that making the kernel data-dependent is necessary for designing competitive kernels. The proposed simple method shows results comparable to those of end-to-end deep architectures on the CIFAR-10 and ImageNet datasets.
All reviewers feel that the paper is interesting and important and that the performance is impressive. During the rebuttal, the authors addressed most of the questions and concerns raised by the reviewers. In particular, the authors clarified the motivation, discussed the model size of the proposed method (requested by R1), added precise details about the spectrum definition and intrinsic dimension (requested by R4), and took the suggestions from all reviewers to improve their paper.
After rebuttal, all reviewers agree on accepting the paper. After checking the discussions between the authors and reviewers, I am convinced that the original concerns of the reviewers are addressed. Hence, I recommend that this paper be accepted.
| test | [
"dhElVqc8Jnq",
"cJpDuuI2QT",
"e_7cBeOczFr",
"A17frgaW-PZ",
"9K5Y-jeEAfz",
"9E7vzzQgn6b",
"EV1T25xtL3S",
"0W_3RvXjrua",
"4X5LfHyzCEz"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a powerful non-learning Kernal based baseline for ImageNet classification. The proposed non-learning Kernal based baseline (which can be interpretable to a vector quantization) shows comparable results (88.5) with AlexNet (89.1) in CIFAR-10 top-1 accuracy. The ImageNet result (39.4) shows that ... | [
6,
6,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
3,
2,
-1,
-1,
-1,
-1,
-1,
5,
2
] | [
"iclr_2021_aYuZO9DIdnn",
"iclr_2021_aYuZO9DIdnn",
"iclr_2021_aYuZO9DIdnn",
"4X5LfHyzCEz",
"cJpDuuI2QT",
"dhElVqc8Jnq",
"0W_3RvXjrua",
"iclr_2021_aYuZO9DIdnn",
"iclr_2021_aYuZO9DIdnn"
] |
iclr_2021_MmCRswl1UYl | Open Question Answering over Tables and Text | In open question answering (QA), the answer to a question is produced by retrieving and then analyzing documents that might contain answers to the question. Most open QA systems have considered only retrieving information from unstructured text. Here we consider for the first time open QA over {\em both} tabular and textual data and present a new large-scale dataset \emph{Open Table-and-Text Question Answering} (OTT-QA) to evaluate performance on this task. Most questions in OTT-QA require multi-hop inference across tabular data and unstructured text, and the evidence required to answer a question can be distributed in different ways over these two types of input, making evidence retrieval challenging---our baseline model using an iterative retriever and BERT-based reader achieves an exact match score less than 10\%. We then propose two novel techniques to address the challenge of retrieving and aggregating evidence for OTT-QA. The first technique is to use ``early fusion'' to group multiple highly relevant tabular and textual units into a fused block, which provides more context for the retriever to search for. The second technique is to use a cross-block reader to model the cross-dependency between multiple retrieved evidence with global-local sparse attention. Combining these two techniques improves the score significantly, to above 27\%. | poster-presentations | This paper presents a new dataset for open domain QA where the evidence required for answering a question is gathered from both structured data as well as unstructured data. The authors first show that a standard iterative retriever with a BERT based reader performs poorly on this task. They then propose fused retrieval (grouping relevant tabular and textual elements) followed by a cross-block reader which improves performance.
R4 has raised strong objections about the artificiality of the dataset. I agree with that and it is unfortunate that the authors did not adequately address the reviewer's concern but instead digressed a bit. As suggested by R4, the authors should tone down their claims about the nature of the dataset. The authors should also simplify the presentation of the dataset as suggested by R2 and not make it unnecessarily complex for the reader.
However, overall, based on reviewer feedback, the authors have made significant changes to the paper. In particular they have added more baselines, ablation studies and error analysis which makes the paper much more informative.
I am okay with this paper getting accepted with the assumption that the authors will make the changes suggested above.
| train | [
"peL9FAtu89r",
"3HCymMP9uf",
"Zqpo_TUdq_R",
"y1LrnCGw1Am",
"ygvrrv2pP0",
"tb1fxiSytxG",
"_L6lfQlYfWX",
"w-8kKf-rwc1",
"WxVGhraAqrG",
"EEw0skkUGpV",
"kFngTXTHjD"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a new setting of open-domain question answering. Usually, we only retrieve question-related text from the web or Wikipedia for answering questions. The authors build up a new dataset which need to retrieve both text and the corresponding table to answer open-domain questions. This setting is mo... | [
7,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2021_MmCRswl1UYl",
"iclr_2021_MmCRswl1UYl",
"iclr_2021_MmCRswl1UYl",
"ygvrrv2pP0",
"3HCymMP9uf",
"peL9FAtu89r",
"kFngTXTHjD",
"WxVGhraAqrG",
"EEw0skkUGpV",
"iclr_2021_MmCRswl1UYl",
"iclr_2021_MmCRswl1UYl"
] |
iclr_2021_9uvhpyQwzM_ | Evaluation of Similarity-based Explanations | Explaining the predictions made by complex machine learning models helps users to understand and accept the predicted outputs with confidence. One promising way is to use similarity-based explanation that provides similar instances as evidence to support model predictions. Several relevance metrics are used for this purpose. In this study, we investigated relevance metrics that can provide reasonable explanations to users. Specifically, we adopted three tests to evaluate whether the relevance metrics satisfy the minimal requirements for similarity-based explanation. Our experiments revealed that the cosine similarity of the gradients of the loss performs best, which would be a recommended choice in practice. In addition, we showed that some metrics perform poorly in our tests and analyzed the reasons for their failure. We expect our insights to help practitioners in selecting appropriate relevance metrics and also to aid further research on designing better relevance metrics for explanations. | poster-presentations | This paper performs an empirical comparison of similarity-based attribution methods, which aim to "explain" model predictions via training samples. To this end, the authors propose a handful of metrics intended to measure the acceptability of such methods. While one reviewer took issue with the proposed criteria, the general consensus amongst reviewers is that this provides at least a start for measuring and comparing instance-attribution methods.
In sum, this is a worthwhile contribution to the interpretability literature that provides measures for comparing and contrasting explanation-by-training-example methods. | train | [
"D_KrIrp4jk",
"bX5DMTb2TaT",
"oX-wyCSz_5",
"G9-jfaR6eq",
"XbQQi2X9Bl",
"6Bd_J8w5G-",
"j4u0LLphjrK",
"TBj9uhHfBfv",
"6fXmxnb2wCH",
"a6oHf3PMlO"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your understanding of our work. Let us add just one more point about the narrowness of our scope you pointed out. \nExplanation using similar examples is one important class of explanation methods, as some literature has considered. For example, [Ref1:Sec6], [Ref2:Sec3.2.4], and [Ref3:Sec1.1] raise s... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
7,
6,
5
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"oX-wyCSz_5",
"iclr_2021_9uvhpyQwzM_",
"G9-jfaR6eq",
"bX5DMTb2TaT",
"TBj9uhHfBfv",
"6fXmxnb2wCH",
"a6oHf3PMlO",
"iclr_2021_9uvhpyQwzM_",
"iclr_2021_9uvhpyQwzM_",
"iclr_2021_9uvhpyQwzM_"
] |
iclr_2021_wXgk_iCiYGo | A Diffusion Theory For Deep Learning Dynamics: Stochastic Gradient Descent Exponentially Favors Flat Minima | Stochastic Gradient Descent (SGD) and its variants are mainstream methods for training deep networks in practice. SGD is known to find a flat minimum that often generalizes well. However, it is mathematically unclear how deep learning can select a flat minimum among so many minima. To answer the question quantitatively, we develop a density diffusion theory to reveal how minima selection quantitatively depends on the minima sharpness and the hyperparameters. To the best of our knowledge, we are the first to theoretically and empirically prove that, benefiting from the Hessian-dependent covariance of stochastic gradient noise, SGD favors flat minima exponentially more than sharp minima, while Gradient Descent (GD) with injected white noise favors flat minima only polynomially more than sharp minima. We also reveal that, in terms of the ratio of the batch size and learning rate, either a small learning rate or large-batch training requires exponentially many iterations to escape from minima. Thus, large-batch training cannot search for flat minima efficiently in a realistic computational time. | poster-presentations | The paper analyzes the behavior of SGD using diffusion theory. It focuses on the problem of escaping from a minimum (the Kramers escape problem) and derives the escape time of continuous-time SGD and Langevin dynamics. The analysis is done under various assumptions which, although they might not always hold in practice, do not seem completely unreasonable and have been used in prior work. Overall, this is a valuable contribution which is connected to some active research questions regarding the flatness of minima found by SGD (with potential connections to generalization). I would advise the authors to improve the quality of the writing and address other problems raised by the reviewers. I think this would help the paper maximize its impact.
"gD63KPyyEuY",
"mpJCvpL2QgG",
"4RZJ7vlWLU",
"PQkcP6yiYMo",
"Bcd1lPXtTLE",
"Ik6igW91eL1",
"uVn8nVrX1W",
"J4FJkpDV5Kh",
"oNKioZnB7NX",
"pQ-FamlsN5"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper develops a density diffusion theory (DDT) to reveal how minima selection quantitatively depends on the minima sharpness and the hyperparameters. In particular, this paper theoretically and empirically prove that SGD favors flat minima exponentially more than sharp minima, while gradient descent (GD) wit... | [
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2021_wXgk_iCiYGo",
"iclr_2021_wXgk_iCiYGo",
"mpJCvpL2QgG",
"gD63KPyyEuY",
"pQ-FamlsN5",
"oNKioZnB7NX",
"iclr_2021_wXgk_iCiYGo",
"mpJCvpL2QgG",
"iclr_2021_wXgk_iCiYGo",
"iclr_2021_wXgk_iCiYGo"
] |
iclr_2021_fgd7we_uZa6 | How Much Over-parameterization Is Sufficient to Learn Deep ReLU Networks? | A recent line of research on deep learning focuses on the extremely over-parameterized setting, and shows that when the network width is larger than a high-degree polynomial of the training sample size n and the inverse of the target error ϵ^{-1}, deep neural networks learned by (stochastic) gradient descent enjoy nice optimization and generalization guarantees. Very recently, it was shown that under certain margin assumptions on the training data, a polylogarithmic width condition suffices for two-layer ReLU networks to converge and generalize (Ji and Telgarsky, 2020). However, whether deep neural networks can be learned with such a mild over-parameterization is still an open question. In this work, we answer this question affirmatively and establish sharper learning guarantees for deep ReLU networks trained by (stochastic) gradient descent. Specifically, under certain assumptions made in previous work, our optimization and generalization guarantees hold with network width polylogarithmic in n and ϵ^{-1}. Our results push the study of over-parameterized deep neural networks towards more practical settings. | poster-presentations | The paper studies the convergence rate and generalization of deep ReLU networks trained with gradient descent and SGD in the NTK regime. Although the analysis technique is not really novel and heavily relies on past results, the paper is easy to follow and does provide some nice improvements compared to prior work (e.g., it requires less overparametrization, and the NTRF function class is allowed to misclassify a fraction of the training data). Some of the results are very incremental; e.g., the generalization bound for GD seems to simply combine existing bounds on the Rademacher complexity from Bartlett et al. 2017 and from Cao et al. 2019. Nevertheless, the paper does have the potential to yield further improvements in the field and I therefore recommend acceptance as a poster.
"AEfxCwRJqb_",
"t77AbPgxpr",
"IOJ9sEPqMKN",
"gHxjWk-Fwz_",
"_VPvW09CBcf",
"U_CYJ2lDHBU",
"rXXUPkE3x_u",
"ybGEC5t5ys2",
"2tM2Kvbvs4H",
"x0rMFWIIMkj",
"2_xEAqlEuUc"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for your further question. We would like to clarify that we don’t require the output of the neural network to be constantly normalized after training. Since we are using logistic loss, the neural network after training will have a large margin on the training data, i.e., for each training data point, the ou... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
2,
2,
4
] | [
"t77AbPgxpr",
"gHxjWk-Fwz_",
"iclr_2021_fgd7we_uZa6",
"2tM2Kvbvs4H",
"ybGEC5t5ys2",
"x0rMFWIIMkj",
"2_xEAqlEuUc",
"iclr_2021_fgd7we_uZa6",
"iclr_2021_fgd7we_uZa6",
"iclr_2021_fgd7we_uZa6",
"iclr_2021_fgd7we_uZa6"
] |
iclr_2021_YHdeAO61l6T | Auction Learning as a Two-Player Game | Designing an incentive compatible auction that maximizes expected revenue is a central problem in Auction Design. While theoretical approaches to the problem have hit some limits, a recent research direction initiated by Duetting et al. (2019) consists in building neural network architectures to find optimal auctions. We propose two conceptual deviations from their approach which result in enhanced performance. First, we use recent results in theoretical auction design to introduce a time-independent Lagrangian. This not only circumvents the need for an expensive hyper-parameter search (as in prior work), but also provides a single metric to compare the performance of two auctions (absent from prior work). Second, the optimization procedure in previous work uses an inner maximization loop to compute optimal misreports. We amortize this process through the introduction of an additional neural network. We demonstrate the effectiveness of our approach by learning competitive or strictly improved auctions compared to prior work. Both results together further imply a novel formulation of Auction Design as a two-player game with stationary utility functions. | poster-presentations | There is a lot of agreement on this paper, also reflected in the ratings. There were some initial technical comments on the approach not being IC and interpretable, on missing links to other works, and on the technical descriptions of the network and experiments. The authors, however, cleared up many of these issues with their responses, providing good arguments in favor of their work. In general, reviewers agree the paper would be an interesting addition to ICLR.
"ex7GTAQSQOI",
"ya2PlycZc3k",
"IwQE5oOPUbq",
"kir3V3ZcLVk",
"5ZE-oAsONLm",
"uf8XlFUooiu",
"yqJ-xm5DdMh",
"fLrMlFTnmy",
"Mi-z7GafN8",
"af6xSG-e5S6",
"8OA9Ly_RxSe",
"b-Vzhp9l15w",
"Dksfqwb99CG",
"If2H_AYKZgp",
"CY-biY4Lk-l"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for getting back to us and reconsidering your evaluation!\n\nIt is possible to consider larger settings of the same order of magnitude, but substantially larger settings (100 bidders with 100 objects for example) are computationally prohibitive in terms of compute and GPU memory for current methods (ours... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
7
] | [
-1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
3
] | [
"IwQE5oOPUbq",
"iclr_2021_YHdeAO61l6T",
"8OA9Ly_RxSe",
"5ZE-oAsONLm",
"uf8XlFUooiu",
"Dksfqwb99CG",
"CY-biY4Lk-l",
"Mi-z7GafN8",
"b-Vzhp9l15w",
"If2H_AYKZgp",
"ya2PlycZc3k",
"iclr_2021_YHdeAO61l6T",
"iclr_2021_YHdeAO61l6T",
"iclr_2021_YHdeAO61l6T",
"iclr_2021_YHdeAO61l6T"
] |
iclr_2021_sCZbhBvqQaU | Robust Reinforcement Learning on State Observations with Learned Optimal Adversary | We study the robustness of reinforcement learning (RL) with adversarially perturbed state observations, which aligns with the setting of many adversarial attacks on deep reinforcement learning (DRL) and is also important for rolling out real-world RL agents under unpredictable sensing noise. With a fixed agent policy, we demonstrate that an optimal adversary to perturb state observations can be found, which is guaranteed to obtain the worst-case agent reward. For DRL settings, this leads to a novel empirical adversarial attack on RL agents via a learned adversary that is much stronger than previous ones. To enhance the robustness of an agent, we propose a framework of alternating training with learned adversaries (ATLA), which trains an adversary online together with the agent using policy gradient following the optimal adversarial attack framework. Additionally, inspired by the analysis of state-adversarial Markov decision process (SA-MDP), we show that past states and actions (history) can be useful for learning a robust agent, and we empirically find an LSTM-based policy can be more robust under adversaries. Empirical evaluations on a few continuous control environments show that ATLA achieves state-of-the-art performance under strong adversaries. Our code is available at https://github.com/huanzhang12/ATLA_robust_RL. | poster-presentations | The paper describes a new technique to train an adversarial MDP to perturb the observations provided by the environment. This adversarial MDP is then used to train an RL agent to be more robust. Since the adversarial agent essentially defines an observation distribution for the environment, the RL agent needs to optimize a POMDP. This is nice work that was unanimously praised by the reviewers. It produces stronger adversaries and more robust RL agents than previous work. This represents an important contribution to the state of the art of robust RL.
"vEFm4Yx8QQ",
"20tv9w68FUI",
"Nurk0Mo-DDs",
"a0PZbyAcnxv",
"XNEMnR2FrZh",
"jp5_qVB-b5u",
"tWzIi0FsQjt",
"zjW8gy6u_Rj",
"oaVa5eMhyic",
"f7IPgfJ8BQ8"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Summary:\nThis paper proposes to improve the robustness of a reinforcement learning agent by alternatively training an agent and an adversary who perturbs the state observations. The learning of an “optimal” adversary for a fixed policy is based on the theory of SA-MDP in prior work. The learning of an optimal pol... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
2
] | [
"iclr_2021_sCZbhBvqQaU",
"vEFm4Yx8QQ",
"vEFm4Yx8QQ",
"vEFm4Yx8QQ",
"f7IPgfJ8BQ8",
"oaVa5eMhyic",
"zjW8gy6u_Rj",
"iclr_2021_sCZbhBvqQaU",
"iclr_2021_sCZbhBvqQaU",
"iclr_2021_sCZbhBvqQaU"
] |
iclr_2021_-6vS_4Kfz0 | Optimizing Memory Placement using Evolutionary Graph Reinforcement Learning | For deep neural network accelerators, memory movement is energetically expensive and can bound computation. Therefore, optimal mapping of tensors to memory hierarchies is critical to performance. The growing complexity of neural networks calls for automated memory mapping instead of manual heuristic approaches; yet the search space of neural network computational graphs has previously been prohibitively large. We introduce Evolutionary Graph Reinforcement Learning (EGRL), a method designed for large search spaces that combines graph neural networks, reinforcement learning, and evolutionary search. A set of fast, stateless policies guides the evolutionary search to improve its sample-efficiency. We train and validate our approach directly on the Intel NNP-I chip for inference. EGRL outperforms policy-gradient, evolutionary search and dynamic programming baselines on BERT, ResNet-101 and ResNet-50. We additionally achieve a 28-78% speed-up compared to the native NNP-I compiler on all three workloads. | poster-presentations | Most of the reviewers agree that this paper presents interesting ideas for an important problem. The paper could be further improved by including a thorough discussion of related works (e.g. Placeto) and constructing proxy baselines that reflect these approaches.
The meta-reviewer decided to accept the paper given the positive aspects, and encourages the authors to further improve the paper per the review comments.
Thank you for submitting the paper to ICLR.
| train | [
"xrGMPNaZ3iE",
"0dMM1sAcune",
"g07Ws9sL6kr",
"PQRjqcyQ2h",
"CvfwPhx8Jky",
"ckZy3Nu4wDw",
"OG0PshTN1wX",
"sRbFFceOyWg",
"Pv8809vdKdU",
"OB0PdkdJdp",
"AH4texsP3xW",
"ftoNpMG-2pw",
"7EpA3WMOSNi",
"g_FuxBPM3U",
"dCCxpALWJ4",
"XlRlf2Pb_M"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Overall, I feel the revision seems fine to me. As a last comment, I would like to see the reference to the AutoTVM, Chameleon that addresses the code optimization side of the work as I mentioned in the original review. Overall, I am satisfied with the authors' response, hence increased my score from 6 to 7.",
"W... | [
-1,
-1,
6,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5
] | [
-1,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
"ckZy3Nu4wDw",
"PQRjqcyQ2h",
"iclr_2021_-6vS_4Kfz0",
"OG0PshTN1wX",
"iclr_2021_-6vS_4Kfz0",
"AH4texsP3xW",
"g07Ws9sL6kr",
"g07Ws9sL6kr",
"dCCxpALWJ4",
"XlRlf2Pb_M",
"ftoNpMG-2pw",
"7EpA3WMOSNi",
"g_FuxBPM3U",
"CvfwPhx8Jky",
"iclr_2021_-6vS_4Kfz0",
"iclr_2021_-6vS_4Kfz0"
] |
iclr_2021_TK_6nNb_C7q | Hierarchical Autoregressive Modeling for Neural Video Compression | Recent work by Marino et al. (2020) showed improved performance in sequential density estimation by combining masked autoregressive flows with hierarchical latent variable models. We draw a connection between such autoregressive generative models and the task of lossy video compression. Specifically, we view recent neural video compression methods (Lu et al., 2019; Yang et al., 2020b; Agustsson et al., 2020) as instances of a generalized stochastic temporal autoregressive transform, and propose avenues for enhancement based on this insight. Comprehensive evaluations on large-scale video data show improved rate-distortion performance over both state-of-the-art neural and conventional video compression methods. | poster-presentations | All reviewers recommend acceptance. The authors have addressed several of the reviewers' concerns in their comments, conducted additional experiments, and updated the manuscript accordingly.
A concern was raised regarding the size of the dataset introduced and used by the authors for this work. However, I agree with the authors that it doesn't necessarily make sense to compare this to datasets designed for training video classification and/or generation models; in the compression setting, the quality of individual data points matters much more than their quantity, as the authors argue.
Reviewer 2 was curious about the potential of a pre-trained optical flow module. I believe the authors have convincingly argued that end-to-end learning is likely to be more effective and practical (and indeed, there is plenty of evidence for this in other ML contexts where training data is not scarce). I agree that a direct comparison in the paper would have been interesting, but this would constitute a significant investment of time and effort on the authors' part (as they also point out, training such a module separately could actually be more difficult), and I think it would be unreasonable to make this a condition for acceptance. | train | [
"7-Z1EqFgza0",
"3qK18Rk2P6G",
"cjP_wsA1Q6J",
"qKujB2dXf_a",
"_jisFjKm9ph",
"ythIy6iwAI",
"9zBXREFLfZz",
"E6QWNNab2J5",
"E53JULIrOUO",
"GZ5TYqNMAe"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"#### Summary\nIn this paper, the authors provide a new interpretation of existing video compression models. Their perspective is that a video decoder is a stochastic temporal autoregressive model with latent variables. The introduced latent variables could be either used for providing more expressive power for 1) ... | [
7,
6,
7,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
3,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2021_TK_6nNb_C7q",
"iclr_2021_TK_6nNb_C7q",
"iclr_2021_TK_6nNb_C7q",
"9zBXREFLfZz",
"iclr_2021_TK_6nNb_C7q",
"3qK18Rk2P6G",
"cjP_wsA1Q6J",
"7-Z1EqFgza0",
"GZ5TYqNMAe",
"iclr_2021_TK_6nNb_C7q"
] |
iclr_2021_71zCSP_HuBN | Individually Fair Rankings | We develop an algorithm to train individually fair learning-to-rank (LTR) models. The proposed approach ensures items from minority groups appear alongside similar items from majority groups. This notion of fair ranking is based on the definition of individual fairness from supervised learning and is more nuanced than prior fair LTR approaches that simply ensure the ranking model provides underrepresented items with a basic level of exposure. The crux of our method is an optimal transport-based regularizer that enforces individual fairness and an efficient algorithm for optimizing the regularizer. We show that our approach leads to certifiably individually fair LTR models and demonstrate the efficacy of our method on ranking tasks subject to demographic biases. | poster-presentations | The paper focuses on individually fair ranking and proposes an approach for it based on optimal transport. The reviewers are in general positive about the paper; however, there are a couple of concerns that I believe should be addressed before publication.
First, I find the treatment of the term "counterfactual" misleading in the paper. Counterfactual fairness has been proposed in the literature as a causal notion of individual fairness. However, as far as I can see, there is no such causal treatment of counterfactuals in the paper. Thus, I suggest that the authors reconsider their use of the term, as it may trigger confusion. Second, I also agree with R1 that it is unfair that SenSTIR is the only algorithm to use the same kind of "counterfactual" data as that used for the evaluation.
| train | [
"T5fyUFKLny",
"SNro9Gkxnae",
"DoDqSF9QsXx",
"nzlIPrYK3Hy",
"P7TvKKf3LM2",
"R1vkwnd66d",
"lHxjEhp8LZM",
"_ckRsDkn29J",
"bXsbyBuf9An",
"bhxdTs20MFB"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper address the problem of fair ranking. The authors use a notion of individual fairness, meaning that two similar inputs should receive similar outputs. The presented method uses a transport based regularizer to reach fairness.\nThe authors present a new Algorithm SenSTIR and test it on a synthetic and two ... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
7
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
2
] | [
"iclr_2021_71zCSP_HuBN",
"iclr_2021_71zCSP_HuBN",
"_ckRsDkn29J",
"P7TvKKf3LM2",
"bXsbyBuf9An",
"T5fyUFKLny",
"bhxdTs20MFB",
"iclr_2021_71zCSP_HuBN",
"iclr_2021_71zCSP_HuBN",
"iclr_2021_71zCSP_HuBN"
] |
iclr_2021_pAbm1qfheGk | Learning Neural Generative Dynamics for Molecular Conformation Generation | We study how to generate molecule conformations (i.e., 3D structures) from a molecular graph. Traditional methods, such as molecular dynamics, sample conformations via computationally expensive simulations. Recently, machine learning methods have shown great potential by training on a large collection of conformation data. Challenges arise from the limited model capacity for capturing complex distributions of conformations and the difficulty in modeling long-range dependencies between atoms. Inspired by the recent progress in deep generative models, in this paper, we propose a novel probabilistic framework to generate valid and diverse conformations given a molecular graph. We propose a method combining the advantages of both flow-based and energy-based models, enjoying: (1) a high model capacity to estimate the multimodal conformation distribution; (2) explicitly capturing the complex long-range dependencies between atoms in the observation space. Extensive experiments demonstrate the superior performance of the proposed method on several benchmarks, including conformation generation and distance modeling tasks, with a significant improvement over existing generative models for molecular conformation sampling. | poster-presentations | The paper combines flow-based and energy-based models to generate molecular conformations given a molecular graph.
For this, a continuous flow model is used to map the graph-based molecular representation into a distribution over conformations.
An energy-based model (EBM) is used to further help the model capture long-range atomic interactions. The proposed method is compared with strong baselines: CVGAE, GraphDG, and RDKit.
The authors addressed most of the reviewers' concerns in the rebuttal.
All the reviewers agree on acceptance. | test | [
"PeQs67uJBHa",
"qJJSA7lulX-",
"bCwwKaf4vH-",
"cF4hoy-JvMH",
"jmYvdtUD7te",
"AWPNGlczfPx",
"Il56hIlxdEE",
"NVYvFeg_CUA",
"OQopXt-Z89F",
"TgmbFYL30tU",
"LnQoFGQTj5y"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer"
] | [
"The authors of this manuscript proposed a generative dynamics system for the modelling and generation of 3D conformations of molecules. Specifically, there are three components: (1) conditional graph continuous flow (CGCF) to transform random noise to distances, (2)a closed-form distribution p(R|d, G), and (3) an... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2021_pAbm1qfheGk",
"LnQoFGQTj5y",
"OQopXt-Z89F",
"PeQs67uJBHa",
"TgmbFYL30tU",
"TgmbFYL30tU",
"LnQoFGQTj5y",
"iclr_2021_pAbm1qfheGk",
"iclr_2021_pAbm1qfheGk",
"iclr_2021_pAbm1qfheGk",
"iclr_2021_pAbm1qfheGk"
] |
iclr_2021_hr-3PMvDpil | Efficient Certified Defenses Against Patch Attacks on Image Classifiers | Adversarial patches pose a realistic threat model for physical-world attacks on autonomous systems via their perception component. Autonomous systems in safety-critical domains such as automated driving should thus contain a fail-safe fallback component that combines certifiable robustness against patches with efficient inference while maintaining high performance on clean inputs. We propose BagCert, a novel combination of model architecture and certification procedure that allows efficient certification. We derive a loss that enables end-to-end optimization of certified robustness against patches of different sizes and locations. On CIFAR10, BagCert certifies 10,000 examples in 43 seconds on a single GPU and obtains 86% clean and 60% certified accuracy against 5x5 patches. | poster-presentations | The paper develops a novel provable defense against patch-based adversarial attacks on image classification systems by combining a novel architecture and certification procedure. The theoretical and experimental contributions are convincing and clearly advance the state of the art in provable defenses against adversarial perturbations.
The questions raised by the reviewers were addressed convincingly by the authors during the rebuttal phase, leading to unanimous consensus amongst reviewers towards acceptance. I recommend acceptance. | train | [
"0UiH8JN5s2k",
"J6C5snKgEwR",
"7teoKF66Xpz",
"dBGmH7mHeVP",
"7f1rImWOwuC",
"RQcub27um4C",
"8kcqdHhSeGN",
"wCjfEX7E1Kv",
"4P_QTkuPr00",
"MwVKmHxYUdW",
"Ev-AhjOKGA4"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"I would like to thank the authors for providing additional explanations, and appreciate the authors' hard work. After reading the other reviews, the authors response and the updated manuscript, I'm satisfied with the revised content. ",
"We have uploaded a revised version of our submission based on the comments... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
3
] | [
"7teoKF66Xpz",
"iclr_2021_hr-3PMvDpil",
"wCjfEX7E1Kv",
"Ev-AhjOKGA4",
"Ev-AhjOKGA4",
"4P_QTkuPr00",
"MwVKmHxYUdW",
"iclr_2021_hr-3PMvDpil",
"iclr_2021_hr-3PMvDpil",
"iclr_2021_hr-3PMvDpil",
"iclr_2021_hr-3PMvDpil"
] |