Dataset schema:
- paper_id: string (length 19–21)
- paper_title: string (length 8–170)
- paper_abstract: string (length 8–5.01k)
- paper_acceptance: string (18 classes)
- meta_review: string (length 29–10k)
- label: string (3 classes)
- review_ids: list
- review_writers: list
- review_contents: list
- review_ratings: list
- review_confidences: list
- review_reply_tos: list
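The per-review lists above are index-aligned: entry i of review_ratings and review_confidences scores the comment with id review_ids[i], and a value of -1 marks comments (e.g. author responses) that carry no score. A minimal sketch of pulling out only the scored reviews, using a record trimmed down from the first row below (field subset chosen for illustration):

```python
# Sketch: extract scored reviews from one record of this dataset.
# The record is a trimmed version of the first row in this dump;
# -1 in ratings/confidences marks unscored comments (author responses etc.).
record = {
    "paper_id": "iclr_2021_7WwYBADS3E_",
    "review_ids": ["WJjKK8AK3i", "wD1seHegagq", "MBNgUrZ_wf"],
    "review_writers": ["official_reviewer", "author", "official_reviewer"],
    "review_ratings": [4, -1, 4],
    "review_confidences": [4, -1, 4],
}

def scored_reviews(rec):
    """Return (review_id, rating, confidence) for entries that carry a real score."""
    return [
        (rid, rating, conf)
        for rid, rating, conf in zip(
            rec["review_ids"], rec["review_ratings"], rec["review_confidences"]
        )
        if rating != -1
    ]

print(scored_reviews(record))
```

Only the two official-reviewer comments survive the filter; the author response at index 1 is dropped.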
iclr_2021_7WwYBADS3E_
Learning Lagrangian Fluid Dynamics with Graph Neural Networks
We present a data-driven model for fluid simulation under a Lagrangian representation. Our model uses graphs to describe the fluid field, where physical quantities are encoded as node and edge features. Instead of directly predicting the acceleration or position correction given the current state, we decompose the simula...
withdrawn-rejected-submissions
The consensus recommendation is that the paper is not ready for publication at this time.
train
[ "WJjKK8AK3i", "wD1seHegagq", "oQ_7xxwJFB", "_IBn3_WFEqH", "ChxCF623jHA", "GhuzwMMhrZG", "MBNgUrZ_wf", "ZNwMUHawj8S", "NJ43REVk7CR", "nWfAa1iqjtg", "SzYC-_SgCdD", "SGWpP6K-ek9", "IqNghfCgNus", "3162qUy6Ivd", "pKSIpJdDov", "MePdS4nmc-l" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "### Summary\nThe paper presents a method for learning Lagrangian fluid dynamics from MPS data. By injecting domain knowledge, and separately training subcomponents of the solver, it achieves low error rates on e.g. divergence.\n\n### Recommendation\nThere are a lot of different approaches for learning physical dyn...
[ 4, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, 5 ]
[ 4, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_7WwYBADS3E_", "oQ_7xxwJFB", "IqNghfCgNus", "SGWpP6K-ek9", "GhuzwMMhrZG", "_IBn3_WFEqH", "iclr_2021_7WwYBADS3E_", "iclr_2021_7WwYBADS3E_", "pKSIpJdDov", "SzYC-_SgCdD", "WJjKK8AK3i", "3162qUy6Ivd", "MePdS4nmc-l", "MBNgUrZ_wf", "iclr_2021_7WwYBADS3E_", "iclr_2021_7WwYBADS3E_" ]
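review_reply_tos parallels review_ids: each entry names the comment being replied to, or the paper id itself for a top-level post. A sketch of rebuilding the discussion tree, using a subset of the reply links from the record above (the grouping logic is an illustration, not part of the dataset):

```python
from collections import defaultdict

# Sketch: rebuild a discussion tree from the parallel id/reply-to lists.
# Ids are taken from the first record in this dump; a top-level comment
# replies directly to the paper id.
paper_id = "iclr_2021_7WwYBADS3E_"
review_ids = ["WJjKK8AK3i", "wD1seHegagq", "SzYC-_SgCdD"]
review_reply_tos = ["iclr_2021_7WwYBADS3E_", "oQ_7xxwJFB", "WJjKK8AK3i"]

# Map each parent id to the comments that reply to it.
children = defaultdict(list)
for cid, parent in zip(review_ids, review_reply_tos):
    children[parent].append(cid)

top_level = children[paper_id]              # direct responses to the submission
replies_to_first = children[top_level[0]]   # thread under the first review
print(top_level, replies_to_first)
```

Note that a reply_to may point at a comment id that is not in this subset (here "oQ_7xxwJFB"), so a full reconstruction should tolerate dangling parents.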
iclr_2021_qoTcTS9-IZ-
Dynamically Stable Infinite-Width Limits of Neural Classifiers
Recent research has focused on two different approaches to studying neural network training in the limit of infinite width: (1) the mean-field (MF) approximation and (2) the constant neural tangent kernel (NTK) approximation. These two approaches have different scaling of hyperparameters with the width of a network layer and as a ...
withdrawn-rejected-submissions
This paper proposes a general framework to study the limit behavior of neural models with respect to the scaling of hyperparameters in terms of network width, which covers existing mean-field (MF) and neural tangent kernel (NTK) limits, as well as other new limit models that were not discovered before. While the review...
train
[ "9t_VGS5ys3G", "3Mno_f9gMj", "rs41M8-oYIm", "j368WZS_im", "jc2rFiwzczz", "ID6DQrQEV_d", "fKDXBkoHxd3", "m_gPDhnIo3Y", "MC8j2v4eTa", "y4hk0iI3jTR", "ElcQaHj0tZc", "_ZC206F-Kb", "f36_I7ZdNTV" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper analyzes joint scalings of the parameter initialization and the learning rate, with respect to the limit of infinite width, in the context of two-layer neural networks with stochastic gradient descent and binary logistic loss. It proposes some “dynamically stable” conditions and identifies a range of sca...
[ 3, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5 ]
[ 5, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2021_qoTcTS9-IZ-", "iclr_2021_qoTcTS9-IZ-", "jc2rFiwzczz", "iclr_2021_qoTcTS9-IZ-", "9t_VGS5ys3G", "9t_VGS5ys3G", "9t_VGS5ys3G", "9t_VGS5ys3G", "_ZC206F-Kb", "f36_I7ZdNTV", "3Mno_f9gMj", "iclr_2021_qoTcTS9-IZ-", "iclr_2021_qoTcTS9-IZ-" ]
iclr_2021_uFkGzn9RId8
The act of remembering: A study in partially observable reinforcement learning
Partial observability remains a major challenge for reinforcement learning (RL). In fully observable environments it is sufficient for RL agents to learn memoryless policies. However, some form of memory is necessary when RL agents are faced with partial observability. In this paper we study a lightweight approach: we ...
withdrawn-rejected-submissions
This paper presents a refreshing insight into the classical idea of using external memory for reinforcement learning agents that learn and act in partially observable environments. The authors investigate a number of different memory architectures (Ok, OAk, Kk) and provide an insightful discussion on why we want to r...
val
[ "5WGYZuVKmvc", "ISJXOZ6cqR1", "m4tix5dluYG", "8_WqDURMhH-", "bgTGqH8GPHb", "lUr-UdT72K", "laNvRIY16bA", "n3iLeIp22e", "nPspvV0M9dI", "1Z5odIYOJPZ", "uQ-wloBINuY", "LSBm1UX0ISh", "L1J86AFsebT" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper focuses on reinforcement learning in partially-observable environments, and revisits the approach that consists of extending the agent with an external memory. The main contribution of the paper is the proposal (and evaluation) of adding an action to the agent, that allows it to push its current observa...
[ 7, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, 7, 6 ]
[ 4, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2021_uFkGzn9RId8", "n3iLeIp22e", "bgTGqH8GPHb", "iclr_2021_uFkGzn9RId8", "nPspvV0M9dI", "laNvRIY16bA", "8_WqDURMhH-", "5WGYZuVKmvc", "LSBm1UX0ISh", "L1J86AFsebT", "iclr_2021_uFkGzn9RId8", "iclr_2021_uFkGzn9RId8", "iclr_2021_uFkGzn9RId8" ]
iclr_2021_JywMsiz_NtO
Enforcing Predictive Invariance across Structured Biomedical Domains
Many biochemical applications such as molecular property prediction require models to generalize beyond their training domains (environments). Moreover, natural environments in these tasks are structured, defined by complex descriptors such as molecular scaffolds or protein families. Therefore, most environments are ei...
withdrawn-rejected-submissions
The paper has some interesting points in extending IRM to regret minimization and to structured environments. I can see the writing has been improved in the revision. The main criticism concerns the experiments, which could be improved in several respects. The reviews have been quite detailed and helpful.
train
[ "Z9k1Hy_Uu8", "bJmI93A2d8u", "VktEFQtDdGN", "ttKrMzcs7Qx", "tiWV9JeCTth", "ogTBvd4LwuB", "jG007KzK5mq", "zQQzNXoJtdE", "e1gN9nglljD" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We have improved the presentation of section 3 with better explanation of notations.\n\nQ1: The author argues that their methods work in both standard and structured settings in the introduction section, but only structured prediction is done.\n\nIn Table 1, we also report the performance of RGM under the standard...
[ -1, -1, -1, -1, -1, 6, 4, 5, 5 ]
[ -1, -1, -1, -1, -1, 4, 3, 4, 3 ]
[ "ogTBvd4LwuB", "jG007KzK5mq", "zQQzNXoJtdE", "e1gN9nglljD", "iclr_2021_JywMsiz_NtO", "iclr_2021_JywMsiz_NtO", "iclr_2021_JywMsiz_NtO", "iclr_2021_JywMsiz_NtO", "iclr_2021_JywMsiz_NtO" ]
iclr_2021_vrCiOrqgl3B
Outlier Robust Optimal Transport
Optimal transport (OT) provides a way of measuring distances between distributions that depends on the geometry of the sample space. In light of recent advances in solving the OT problem, OT distances are widely used as loss functions in minimum distance estimation. Despite its prevalence and advantages, however, OT is...
withdrawn-rejected-submissions
The paper proposes a novel approach to detecting outliers using optimal transport. The authors prove a very interesting relation between outlier-robust OT and solving OT with a thresholded loss. Numerical experiments show that the proposed approach indeed works for outlier detection. The paper had mixed reviews and the ...
val
[ "8AWoydPV_lb", "Sw_25JMQt7r", "Bv-pHxTCQs", "sPVQpynwhv", "MJVsWa26RA1", "jZYsxXh9MYL", "yXZPvbDPQ1", "H4HeikdtvCE", "1LuCbmrFU0z", "GVgAIgNPPHJ", "1r1Cby51vg7", "A-6sV5mnL1d", "MbX5tBwchtP", "HLW7-8aFEf0" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "SUMMARY\n#######\n\nThe present paper proposes a way to robustify Optimal Transport (OT) with respect to outliers.\n\nAssuming that one of the distributions on which OT is computed is $\\epsilon$ corrupted (the second distribution being a parametrized distribution one wants to make close to the first one), authors...
[ 4, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5 ]
[ 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5 ]
[ "iclr_2021_vrCiOrqgl3B", "yXZPvbDPQ1", "iclr_2021_vrCiOrqgl3B", "MJVsWa26RA1", "jZYsxXh9MYL", "1LuCbmrFU0z", "MbX5tBwchtP", "HLW7-8aFEf0", "Bv-pHxTCQs", "1r1Cby51vg7", "8AWoydPV_lb", "iclr_2021_vrCiOrqgl3B", "iclr_2021_vrCiOrqgl3B", "iclr_2021_vrCiOrqgl3B" ]
iclr_2021_ADwLLmSda3
Neural Nonnegative CP Decomposition for Hierarchical Tensor Analysis
There is a significant demand for topic modeling on large-scale data with complex multi-modal structure in applications such as multi-layer network analysis, temporal document classification, and video data analysis; frequently this multi-modal data has latent hierarchical structure. We propose a new hierarchical nonne...
withdrawn-rejected-submissions
The paper presents a hierarchical version of NMF for the CP decomposition of tensors. The idea is similar to Cichocki et al. 2007 and extends Gao et al. 2019; in Cichocki et al. the method was presented for the standard linear formulation with regularisation terms. The extension here doesn't use the standard ALS algorithm but rather p...
train
[ "Vn_oLejGtUy", "O3jR1sF5KvZ", "i5anZM6ZAHx", "t89JQQWj5mg", "Mc_hS8ckfri", "K6ETrXC4LI1", "-cdahy05q9w", "QEhAb6CGh98", "UnzjBDwuh5a", "v4AVLiYSJXi", "QNyHwqLn7t" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "SUMMARY:\n\nThis paper presents a hierarchical nonnegative CP tensor decomposition method. It also proposes a training method that leverages forward and backward propagation. The method is tested on both synthetic and real datasets. These experiments illustrate how the method can be used to discover topics and how...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4 ]
[ "iclr_2021_ADwLLmSda3", "t89JQQWj5mg", "UnzjBDwuh5a", "K6ETrXC4LI1", "v4AVLiYSJXi", "Vn_oLejGtUy", "QNyHwqLn7t", "QNyHwqLn7t", "iclr_2021_ADwLLmSda3", "iclr_2021_ADwLLmSda3", "iclr_2021_ADwLLmSda3" ]
iclr_2021_bzVsk7bnGdh
"Hey, that's not an ODE": Faster ODE Adjoints with 12 Lines of Code
Neural differential equations may be trained by backpropagating gradients via the adjoint method, which is another differential equation typically solved using an adaptive-step-size numerical differential equation solver. A proposed step is accepted if its error, \emph{relative to some norm}, is sufficiently small; els...
withdrawn-rejected-submissions
The paper focuses on NeuralODE and shows that, for the implementation popular in the ML community, one of the equations is not an ODE and can be replaced by an integral. This is implemented using a "seminorm" (just assigning zero weight to the last equation).\nPros:\n- Well written\n- Useful to replace the "standard" implem...
train
[ "Xyu_zl7F8G", "3JTEobPOfw6", "Jg5mWp0xDV", "hWLDVmfBTEz", "oZPfAya-fE", "-ItJx8ccTTU", "k5YfgHI5a3V", "MiqWkYwQhVb", "D05sA15auld", "hGh0yvK5aJ", "5W4nyI2Qzxr", "3If4ranM5Cb" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "public", "official_reviewer" ]
[ "Summarizing the paper claims\n------------------------------------------\nThe paper addresses the problem of reducing the number of function evaluations (NFEs) during neural ODE training with adaptive solver and the adjoint method. Namely, the authors claim that for the variety of applications, NFEs at backward p...
[ 5, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ 4, 5, 3, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2021_bzVsk7bnGdh", "iclr_2021_bzVsk7bnGdh", "iclr_2021_bzVsk7bnGdh", "oZPfAya-fE", "D05sA15auld", "3If4ranM5Cb", "3JTEobPOfw6", "Jg5mWp0xDV", "Xyu_zl7F8G", "5W4nyI2Qzxr", "iclr_2021_bzVsk7bnGdh", "iclr_2021_bzVsk7bnGdh" ]
iclr_2021_mnj-9lYJgu
DEEP ADAPTIVE SEMANTIC LOGIC (DASL): COMPILING DECLARATIVE KNOWLEDGE INTO DEEP NEURAL NETWORKS
We introduce Deep Adaptive Semantic Logic (DASL), a novel framework for automating the generation of deep neural networks that incorporates user-provided formal knowledge to improve learning from data. We provide formal semantics that demonstrate that our knowledge representation captures all of first order logic and t...
withdrawn-rejected-submissions
The reviewers and AC appreciate the improvements made to the paper and thank the authors for engaging with the reviewer questions. There are now quite a few neuro-symbolic approaches, and they are all rather similar. This places a larger burden on the authors to have a thorough and systematic experimental comparison an...
train
[ "fzBfbUAN43q", "uphm0LlmGo8", "fBhs5y5HGQi", "Ukj4fE0cXi7", "NUtY_n7oLs1", "fkoGkGOPpd1", "gd5qHRmIMLS", "IECaaJv_Px", "L0sNcrkk_q", "fUctSyLSl8", "EiTQDKOizoK", "WD8PenZRfix", "gCdgw9Fj-rU", "joLrAW-yx9", "2WmWEi1TQtC", "_YW35qRncE4", "MF_VS1hjQXY", "jlaIPuRFEfY" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper integrates a neural encoding of first-order logic with deep learning architectures, supplementing training data with declarative knowledge. The approach is experimentally evaluated on MNIST and visual predicate detection, demonstrating a reduction in data requirements. \n\nThe paper deals with an import...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 3 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 5 ]
[ "iclr_2021_mnj-9lYJgu", "fBhs5y5HGQi", "gCdgw9Fj-rU", "NUtY_n7oLs1", "WD8PenZRfix", "2WmWEi1TQtC", "IECaaJv_Px", "fUctSyLSl8", "iclr_2021_mnj-9lYJgu", "EiTQDKOizoK", "WD8PenZRfix", "fzBfbUAN43q", "jlaIPuRFEfY", "MF_VS1hjQXY", "_YW35qRncE4", "iclr_2021_mnj-9lYJgu", "iclr_2021_mnj-9lYJ...
iclr_2021_T3RyQtRHebj
Slot Machines: Discovering Winning Combinations of Random Weights in Neural Networks
In contrast to traditional weight optimization in a continuous space, we demonstrate the existence of effective random networks whose weights are never updated. By selecting a weight among a fixed set of random values for each individual connection, our method uncovers combinations of random weights that match the perf...
withdrawn-rejected-submissions
The idea behind this paper is to develop a training algorithm that chooses among a fixed set of weights for each true weight in a neural network. The algorithm achieves reasonable performance, though it is difficult to quantify as either good or surprising. A perhaps interesting point is that additional fin...
train
[ "rEl_LNjrzSe", "2PjiHeEhDmd", "jAIYNm12iVn", "jnHeHdZoHCO", "KXQhVumopp", "hokBmto1dL2", "mZXe27rVIbN", "9qj1b8IZ5FD", "M6hHJw3aDh-", "KltxCaP2GMv", "4ariJ2gt7Ww", "VLkB9b5fFxA" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author" ]
[ "### Summary\n\nThe paper investigates a type of neural network in which one of K possible fixed weights is chosen in each neuronal connection. The weights themselves are fixed and random, but the scores that determine which of the weights is chosen are updated through back-propagation using a straight-through esti...
[ 5, 7, 6, -1, -1, -1, 4, -1, -1, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1, 2, -1, -1, -1, -1, -1 ]
[ "iclr_2021_T3RyQtRHebj", "iclr_2021_T3RyQtRHebj", "iclr_2021_T3RyQtRHebj", "9qj1b8IZ5FD", "rEl_LNjrzSe", "mZXe27rVIbN", "iclr_2021_T3RyQtRHebj", "hokBmto1dL2", "KXQhVumopp", "4ariJ2gt7Ww", "2PjiHeEhDmd", "jAIYNm12iVn" ]
iclr_2021_WDVD4lUCTzU
Universal Sentence Representations Learning with Conditional Masked Language Model
This paper presents a novel training method, Conditional Masked Language Modeling (CMLM), to effectively learn sentence representations on large scale unlabeled corpora. CMLM integrates sentence representation learning into MLM training by conditioning on the encoded vectors of adjacent sentences. Our English CMLM mode...
withdrawn-rejected-submissions
This paper proposes a Conditional Masked Language Modeling (CMLM) method to enhance MLM by conditioning on contextual information. All of the reviewers think the results are good. However, the reviewers also think the intuition and experiments are not so convincing. The responses and revisions still did not satis...
train
[ "FdjrveSXho", "I_H2kz-72M", "RUu_AFkp7jK", "W0yDIwgcCIn", "1jVUti2kkB4", "geo0tdb3NW-", "vpFoFpu2UF", "BqWIXVyTp3", "0vG2tl203W", "4bJBmzhP0EO", "O1lJjJzcIbi" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper presents Conditional Masked Language Modeling (CMLM), which integrates sentence representation learning into MLM training by conditioning on the encoded vectors of adjacent sentences. It is shown that the English CMLM model achieves strong performance on SentEval, and outperforms models learned using (s...
[ 6, 4, -1, -1, 5, 7, -1, -1, -1, -1, -1 ]
[ 4, 4, -1, -1, 4, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2021_WDVD4lUCTzU", "iclr_2021_WDVD4lUCTzU", "I_H2kz-72M", "1jVUti2kkB4", "iclr_2021_WDVD4lUCTzU", "iclr_2021_WDVD4lUCTzU", "FdjrveSXho", "1jVUti2kkB4", "geo0tdb3NW-", "geo0tdb3NW-", "iclr_2021_WDVD4lUCTzU" ]
iclr_2021_ku4sJKvnbwV
Model-Based Reinforcement Learning via Latent-Space Collocation
The ability to construct and execute long-term plans enables intelligent agents to solve complex multi-step tasks and prevents myopic behavior that only seeks short-term reward. Recent work has achieved significant progress on building agents that can predict and plan from raw visual observations. However, existing vi...
withdrawn-rejected-submissions
This work applies collocation, a well-known trajectory optimization technique, to the problem of planning in learned visual latent spaces. Evaluations show that collocation-based optimization outperforms shooting via CEM (PlaNet) and shooting via gradient descent.\nPros:\n- I agree with the reviewers that this idea mak...
val
[ "BKUpMJe2Fz", "PCH864P5UCH", "gjuDWLVveZ", "Kc6HQnhkNp7", "PaPsLydRIdG", "MGZVo9L26H4", "_dmQX8GmnMX", "JEdLQUx1yU9", "mDsl81nfwJj", "YvuUhpNUl89", "HKXPP76_xLY", "70TVoTS5cm6", "gFNczQyK3P_", "upVecSzaLT", "kG1vHs4Z2bS", "xaBZSScVJKm", "EPK-7AsjYz2", "8IuMBdisFb8", "GiK0Ken4PBK"...
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_re...
[ "#### Summary:\nIn this paper, the authors propose to replace commonly-used shooting-based methods for action sequence planning in learned latent-space dynamics models by a collocation-based method. They argue that shooting-based methods exhibit problematic behavior especially for sparse-reward and long-horizon tas...
[ 4, 6, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 4, 4, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_ku4sJKvnbwV", "iclr_2021_ku4sJKvnbwV", "iclr_2021_ku4sJKvnbwV", "MGZVo9L26H4", "iclr_2021_ku4sJKvnbwV", "_dmQX8GmnMX", "JEdLQUx1yU9", "mDsl81nfwJj", "70TVoTS5cm6", "HKXPP76_xLY", "kG1vHs4Z2bS", "gFNczQyK3P_", "upVecSzaLT", "8IuMBdisFb8", "PCH864P5UCH", "GiK0Ken4PBK", "BKUp...
iclr_2021_cy0jU8F60Hy
ACT: Asymptotic Conditional Transport
We propose conditional transport (CT) as a new divergence to measure the difference between two probability distributions. The CT divergence consists of the expected cost of a forward CT, which constructs a navigator to stochastically transport a data point of one distribution to the other distribution, and that of a b...
withdrawn-rejected-submissions
The paper proposes a new measure of difference between two distributions using conditional transport. The paper considers an important problem. However, some major concerns remain after the discussion among the reviewers. In particular, the paper focuses on the evaluation on a toy dataset. It is unclear whether the cl...
test
[ "v6Ke_-8VcY", "3IJo46wxawY", "B-bWkkRxXR", "fHhoAnAt739", "8cXuASVu2oI", "7oSeFZbZgkW", "Gu0eymHoptB", "VOsmFLSsipk", "6ZkuHaucD1j", "91ad1PlH0RQ", "3ZUZyS3pj23", "aINGQJ-tFA", "cjR6FfFsTX7" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a new transport-based divergence between distributions (CT) and a variant for empirical distributions (ACT). The new divergence is claimed to be more suitable for learning deep generative models than existing divergences like KL, JS (as in the vanilla GAN) and Wasserstein (as used in WGAN and it...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_cy0jU8F60Hy", "v6Ke_-8VcY", "aINGQJ-tFA", "iclr_2021_cy0jU8F60Hy", "3IJo46wxawY", "Gu0eymHoptB", "cjR6FfFsTX7", "6ZkuHaucD1j", "91ad1PlH0RQ", "3ZUZyS3pj23", "cjR6FfFsTX7", "iclr_2021_cy0jU8F60Hy", "iclr_2021_cy0jU8F60Hy" ]
iclr_2021_I6NRcao1w-X
Robust Reinforcement Learning using Adversarial Populations
Reinforcement Learning (RL) is an effective tool for controller design but can struggle with issues of robustness, failing catastrophically when the underlying system dynamics are perturbed. The Robust RL formulation tackles this by adding worst-case adversarial noise to the dynamics and constructing the noise distribu...
withdrawn-rejected-submissions
The paper studies reinforcement learning in the presence of (adversarial) perturbations in the underlying system dynamics. The main (novel) observation is that agents trained against a single policy may overfit to that policy and hence will lack robustness to new/unseen policies. The paper proposes a population-based...
train
[ "5wc1ZUqVmM", "4Sxhv4hLdA4", "QhiKuJ-3g5", "9BpeBbEExeK", "oJTo2y13-30", "w1P3IlRf0mX", "584Irr-P0Li", "vWhPaPbY8OF", "ofSjCEwNT3p", "Y_KVkMV00e3", "3V1w9WdTcA2", "qBMXqL73SFy", "Z7C1EW5zyP2", "ltkTgMokWXJ", "rY8B3gLhFbN", "13vwhyz4HtM", "DhQXct3DIFZ", "OCTwp4ylrkl" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary: This paper proposes to improve robustness in reinforcement learning via a population of diverse adversaries, where previous works mainly focus on the use a single adversary to mitigate the problem that the trained policy could be highly exploitable by the adversary. Specifically, at each iteration, it ran...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 4, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "iclr_2021_I6NRcao1w-X", "QhiKuJ-3g5", "9BpeBbEExeK", "oJTo2y13-30", "3V1w9WdTcA2", "Y_KVkMV00e3", "iclr_2021_I6NRcao1w-X", "rY8B3gLhFbN", "Y_KVkMV00e3", "13vwhyz4HtM", "ltkTgMokWXJ", "OCTwp4ylrkl", "qBMXqL73SFy", "DhQXct3DIFZ", "5wc1ZUqVmM", "iclr_2021_I6NRcao1w-X", "iclr_2021_I6NRc...
iclr_2021_jnRqf0CzBK
Hierarchical Probabilistic Model for Blind Source Separation via Legendre Transformation
We present a novel blind source separation (BSS) method, called information geometric blind source separation (IGBSS). Our formulation is based on the log-linear model equipped with a hierarchically structured sample space, which has theoretical guarantees to uniquely recover a set of source signals by minimizing the K...
withdrawn-rejected-submissions
The focus of the submission is blind source separation (BSS). The authors propose a log-linear model based formulation to tackle the task and to relax assumptions/restrictions (linear mixing, non-convex objective, ...) present in previous techniques. They use the maximum likelihood approach [Eq. (3)] with natural gradie...
test
[ "f-HH1jyx3hZ", "LtnF0MNDOf-", "CCdkkzVvjp", "dNGDZwYWXF1", "WrsaN4uUz-Q", "8NT26nJWo8b", "kcONaD8ysNn", "D6zdz3qRhku", "1Q-KtA0an5f" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "Additional comments: I have read the authors' response and the other reviews. While my initial concerns have been mostly addressed, there are still concerns from the other reviews and I have revised my score accordingly.\n\nIn addition, I am not completely satisfied with the authors' response concerning higher-ord...
[ 4, 3, 4, -1, -1, -1, -1, -1, 2 ]
[ 3, 4, 4, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2021_jnRqf0CzBK", "iclr_2021_jnRqf0CzBK", "iclr_2021_jnRqf0CzBK", "1Q-KtA0an5f", "f-HH1jyx3hZ", "CCdkkzVvjp", "D6zdz3qRhku", "LtnF0MNDOf-", "iclr_2021_jnRqf0CzBK" ]
iclr_2021_JCz05AtXO3y
Structural Landmarking and Interaction Modelling: on Resolution Dilemmas in Graph Classification
Graph neural networks are a promising architecture for learning and inference with graph-structured data. However, generating informative graph-level features has long been a challenge. The current practice of graph pooling typically summarizes a graph by squeezing it into a single vector. This may lead to significant loss...
withdrawn-rejected-submissions
The proposed approach seems to have elements of novelty; it is well presented and reasonably motivated by the authors. In addition, empirical results seem promising. However, although the rebuttal helped to clarify some of the pending issues, there are concerns about the fact that the raised issue about "resolution di...
train
[ "iNSvPiMTZ3T", "41uKgglah9f", "fFl3FKino9P", "EQzi_ra-YKQ", "eYhoYLBs_F2", "_q4DeZ7Qk8C", "58Rkr8TTTc7", "MmhB9AVDAN1", "bgT389zonRm", "WKlGckq2I6f", "cK_4v1WfYvQ" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "### Summary\n\nThe proposed SLIM algorithm organizes graph neural networks around substructures surrounding \"landmarks\" in the graph. In addition to presenting the three steps of the SLIM algorithm (sub-structure embedding, sub-structure landmarking, and \"identity-preserving\" graph pooling), the authors compa...
[ 6, -1, -1, 6, -1, -1, -1, -1, -1, 6, 6 ]
[ 2, -1, -1, 4, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2021_JCz05AtXO3y", "fFl3FKino9P", "_q4DeZ7Qk8C", "iclr_2021_JCz05AtXO3y", "EQzi_ra-YKQ", "EQzi_ra-YKQ", "cK_4v1WfYvQ", "iNSvPiMTZ3T", "WKlGckq2I6f", "iclr_2021_JCz05AtXO3y", "iclr_2021_JCz05AtXO3y" ]
iclr_2021_7TBP8k7TLFA
Universal Approximation Theorem for Equivariant Maps by Group CNNs
Group symmetry is inherent in a wide variety of data distributions. Data processing that preserves symmetry is described as an equivariant map and often effective in achieving high performance. Convolutional neural networks (CNNs) have been known as models with equivariance and shown to approximate equivariant maps for...
withdrawn-rejected-submissions
This work studies the question of universal approximation with neural networks under general symmetries. For this purpose, the authors first leverage existing universal approximation results with shallow fully connected networks defined on infinite-dimensional input spaces, that are then upgraded to provide Universal A...
train
[ "R4hVaLHE2Yq", "kxROIr6WRkv", "V8G0XBHUgd", "Oq1KRgU6jW5", "PRXDimWQrK", "TdJw9N7QJzT", "z5L-dI10H1", "1LZfLKI6YKX", "zk9i62blat2" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper proves a universal approximation theorem for equivariant maps by group convolutional networks in an extremely general setting. The proof applies to discrete and continuous settings, including infinite dimensional ones. \n\nThe general idea of the proof is as follows. First, it is shown that an equivarian...
[ 7, 5, -1, -1, -1, -1, -1, -1, 5 ]
[ 3, 3, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2021_7TBP8k7TLFA", "iclr_2021_7TBP8k7TLFA", "kxROIr6WRkv", "zk9i62blat2", "zk9i62blat2", "R4hVaLHE2Yq", "zk9i62blat2", "iclr_2021_7TBP8k7TLFA", "iclr_2021_7TBP8k7TLFA" ]
iclr_2021_clyAUUnldg
AdaDGS: An adaptive black-box optimization method with a nonlocal directional Gaussian smoothing gradient
The local gradient points to the direction of the steepest slope in an infinitesimal neighborhood. An optimizer guided by the local gradient is often trapped in local optima when the loss landscape is multi-modal. A directional Gaussian smoothing (DGS) approach was recently proposed in (Zhang et al., 2020) and used to ...
withdrawn-rejected-submissions
The main goal of this paper is to develop a new adaptive strategy that removes the need for hyperparameter fine-tuning, which hinders the performance of the DGS method (Zhang et al., 2020). This paper applies a line search over the step-size parameter of DGS to reduce tuning. A heuristic update rule of the smooth parameter in ...
test
[ "iCcFxA8OKTp", "8FowPOU-Yn", "Q7f9MDKWF3P", "QzoDj8t_sfX", "HUbYYLZ8wz", "CYhx3o2WXl8", "o3FGam6lpsA", "Zn_SkoeQQ2V", "byDpyNNKKz" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the constructive comments and suggestions. We address the reviewer’s concerns below. \n\n1.\t**How the tuning heuristic are tied to DGS:** We agree that line search is applicable to a wide variety of optimization algorithms. However, using line search is only beneficial if the search dire...
[ -1, -1, -1, -1, -1, 5, 3, 4, 4 ]
[ -1, -1, -1, -1, -1, 3, 5, 4, 4 ]
[ "CYhx3o2WXl8", "o3FGam6lpsA", "Zn_SkoeQQ2V", "byDpyNNKKz", "iclr_2021_clyAUUnldg", "iclr_2021_clyAUUnldg", "iclr_2021_clyAUUnldg", "iclr_2021_clyAUUnldg", "iclr_2021_clyAUUnldg" ]
iclr_2021_3rRgu7OGgBI
Bi-tuning of Pre-trained Representations
It is common within the deep learning community to first pre-train a deep neural network from a large-scale dataset and then fine-tune the pre-trained model to a specific downstream task. Recently, both supervised and unsupervised pre-training approaches to learning representations have achieved remarkable advances, wh...
withdrawn-rejected-submissions
The authors propose an alternative fine-tuning procedure by introducing a projection head and two new losses to be combined with the vanilla cross-entropy loss. The authors introduce and jointly optimize the standard cross-entropy loss, the contrastive cross-entropy loss for the classifier head, and the categorical contrast...
train
[ "oE5AeIRldAh", "hIEUS4ek8ra", "b3VD3CHe-F", "7I8U3rWjnnz", "jMgpdKUb8O", "IoGKtJZWdxl", "J3dkiG6Kr96", "Tb26MnxH27n", "PMQYOdoBgH", "6ICVy8Qh6gJ" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your detailed response and for revising the paper. I have read your response and will take it into account in the discussion period.", "Thanks for your insightful review. We will address your comments through both feedback and revision.\n\n**Q: Novelty and naive combination of the losses:**\n\n- Th...
[ -1, -1, -1, -1, -1, -1, 4, 4, 5, 8 ]
[ -1, -1, -1, -1, -1, -1, 5, 4, 3, 4 ]
[ "b3VD3CHe-F", "J3dkiG6Kr96", "PMQYOdoBgH", "Tb26MnxH27n", "PMQYOdoBgH", "6ICVy8Qh6gJ", "iclr_2021_3rRgu7OGgBI", "iclr_2021_3rRgu7OGgBI", "iclr_2021_3rRgu7OGgBI", "iclr_2021_3rRgu7OGgBI" ]
iclr_2021_aa0705s2Qc
Measuring Visual Generalization in Continuous Control from Pixels
Self-supervised learning and data augmentation have significantly reduced the performance gap between state and image-based reinforcement learning agents in continuous control tasks. However, it is still unclear whether current techniques can face the variety of visual conditions required by real-world environments. We...
withdrawn-rejected-submissions
The paper proposes a modification to the DeepMind Control Suite to measure generalization with respect to visual variation. The authors run baseline experiments against their new benchmark and discover, unsurprisingly, that RL agents learning from visual observations overfit to spurious details of the observations. Re...
val
[ "l1EYKS4p_yp", "vmPm1AySLGX", "mwKOuQYI5Sv", "4IqABPzp-3s", "l1DWUQL4DB4", "C3mQqt_ZIL-", "0_MxeXMdeR", "IHCeoJcDP1M", "nv3IM_jTTO" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper presents a new dataset for continuous control tasks based on the DeemMind control suite, but with varying backgrounds, colors and lighting/camera conditions. Instead of the fixed appearance in the original suite the frames from this dataset contain a lot of varying structure. The purpose of the dataset ...
[ 6, 6, -1, -1, -1, -1, -1, 5, 6 ]
[ 3, 4, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2021_aa0705s2Qc", "iclr_2021_aa0705s2Qc", "iclr_2021_aa0705s2Qc", "nv3IM_jTTO", "vmPm1AySLGX", "l1EYKS4p_yp", "IHCeoJcDP1M", "iclr_2021_aa0705s2Qc", "iclr_2021_aa0705s2Qc" ]
iclr_2021_PP4KyAaBoBK
Human Perception-based Evaluation Criterion for Ultra-high Resolution Cell Membrane Segmentation
Computer vision technology is widely used in biological and medical data analysis and understanding. However, there are still two major bottlenecks in the field of cell membrane segmentation, which seriously hinder further research: lack of sufficient high-quality data and lack of suitable evaluation criteria. In order...
withdrawn-rejected-submissions
This paper focuses on a segmentation of cell imagery (as opposed to the more commonly studied domain of "natural images"). Among its contributions are a novel metric for evaluation of results and a novel dataset. These are acknowledged by the reviewers as strengths. Multiple issues raised in the initial reviews were ad...
train
[ "OVHCJC8hAL8", "rFuS2-DNzFa", "bCaoWfycn4G", "WJHZhfY98U", "7cDgs3_vOye", "6QoFilAlKlu", "EXaEY5tQxPF", "S7g7fgp8cf", "BN8x7-702w2", "Ia6N9sYjIG4", "_IsfITHAAu3", "-uRV39aK_vr", "VQr9qshpfBD", "z8TWdEDZWCg", "WyjEnv_cQXH", "CpHgIMCSkSp", "zytwIea6ZPN", "vWOUO82-Rh", "54rx81T41no"...
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer"...
[ "Summary:\n\nThis work presents two contributions towards cell membrane segmentation. First, it introduces a new labelled database for this purpose. The authors claim that this is the largest labelled database of high resolution Electron-Microscopy images for this purpose. Second, the work tackles the issue that th...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 7, 4 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 4 ]
[ "iclr_2021_PP4KyAaBoBK", "iclr_2021_PP4KyAaBoBK", "efYk86zOz33", "efYk86zOz33", "WyjEnv_cQXH", "OVHCJC8hAL8", "WyjEnv_cQXH", "WyjEnv_cQXH", "WyjEnv_cQXH", "WyjEnv_cQXH", "WyjEnv_cQXH", "WyjEnv_cQXH", "OVHCJC8hAL8", "OVHCJC8hAL8", "OVHCJC8hAL8", "l8gAcrXvpBn", "l8gAcrXvpBn", "rFuS2-...
iclr_2021_MmcywoW7PbJ
Learn Goal-Conditioned Policy with Intrinsic Motivation for Deep Reinforcement Learning
It is of significance for an agent to learn a widely applicable and general-purpose policy that can achieve diverse goals including images and text descriptions. Considering such perceptually-specific goals, the frontier of deep reinforcement learning research is to learn a goal-conditioned policy without hand-crafted...
withdrawn-rejected-submissions
This work extends previous work on unsupervised learning of goal-conditioned policies: an abstract skill policy, which drives exploration of the state space, is used to propose goals as well as derive rewards for a goal conditioned policy. Reviewers agreed the approach was novel and interesting. All reviewers raised ...
train
[ "htP0G5-KPo8", "pCNcqICiHhO", "DkwCSvOg6yF", "w5i-6aximb1", "216ybEPg3VK", "DjT0Q-VIoet", "xpW559FqwqU", "0DfMC4omevt", "r4whiPhwuqD", "STUK3Pdj2mB", "jCz0x99FUvI", "xhdFtfJ7IPK" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper proposes a method for combining intrinsic motivation on a state space with goal-conditioned reinforcement learning (GCRL), where goals are defined in some “perceptual space,” such as text or images, which describe the current state. The authors assume access to a renderer that maps states to perceptual ...
[ 6, 6, -1, -1, -1, 7, -1, -1, -1, -1, -1, 5 ]
[ 4, 3, -1, -1, -1, 4, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2021_MmcywoW7PbJ", "iclr_2021_MmcywoW7PbJ", "STUK3Pdj2mB", "iclr_2021_MmcywoW7PbJ", "xpW559FqwqU", "iclr_2021_MmcywoW7PbJ", "r4whiPhwuqD", "htP0G5-KPo8", "DjT0Q-VIoet", "pCNcqICiHhO", "xhdFtfJ7IPK", "iclr_2021_MmcywoW7PbJ" ]
iclr_2021_P84ryxVG6tR
REPAINT: Knowledge Transfer in Deep Actor-Critic Reinforcement Learning
Accelerating the learning processes for complex tasks by leveraging previously learned tasks has been one of the most challenging problems in reinforcement learning, especially when the similarity between source and target tasks is low or unknown. In this work, we propose a REPresentation-And-INstance Transfer algorith...
withdrawn-rejected-submissions
This is a borderline paper with the reviewers split in their recommendations. The decision is therefore not easy. The work is promising, but a key concern is that the contribution appears incremental: the paper proposes to alternate between kickstarting, which is itself not entirely new as an idea, and a simple insta...
train
[ "I0GM-sP200R", "u-6X2pooksL", "8jIQcayUYQ-", "SLfwLJmv0Uo", "6j7U-nF2Eyc", "qFq7S2bKBMi", "bQx1P9D-cHZ", "fmOQxWJWdg", "UzYOk2JMxrV", "klf4sFyOsT", "qqGD10wblp", "kpSB9fmpTfi", "-D6onUNQjet", "fWUQyK9ZT7", "dqJNvnyYz_j", "Rgm_UCAatAW", "lAGJrDcGV6S" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Below I list the strengths and weaknesses of the paper in my opinion. Overall I vote to accept the paper for now, but my final decision will depend on the authors' clarifications to my questions below.\n\nStrengths:\n - The algorithm is a simple combination of ideas each of which seems important according to the r...
[ 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2021_P84ryxVG6tR", "8jIQcayUYQ-", "lAGJrDcGV6S", "I0GM-sP200R", "I0GM-sP200R", "Rgm_UCAatAW", "lAGJrDcGV6S", "iclr_2021_P84ryxVG6tR", "iclr_2021_P84ryxVG6tR", "iclr_2021_P84ryxVG6tR", "klf4sFyOsT", "-D6onUNQjet", "dqJNvnyYz_j", "6j7U-nF2Eyc", "iclr_2021_P84ryxVG6tR", "iclr_2021_P...
iclr_2021_Hrtbm8u0RXu
Provable Memorization via Deep Neural Networks using Sub-linear Parameters
It is known that Θ(N) parameters are sufficient for neural networks to memorize arbitrary N input-label pairs. By exploiting depth, we show that Θ(N^{2/3}) parameters suffice to memorize N pairs, under a mild condition on the separation of input points. In particular, deeper networks (even with width 3) are shown to memor...
withdrawn-rejected-submissions
The AC, the reviewers, and the authors had many discussions about the results in the paper during the discussion period. Below is a brief summary. 1. The paper shows that with $O(N^{2/3})$ parameters, a feedforward neural network can memorize $N$ inputs with arbitrary labels if the inputs satisfy some mild assumptions....
train
[ "-46mkP3lD1X", "Vl32bwa6hCw", "ip5E2vlOsQ", "FbkOhhZCJva", "EjBGlorQEjB", "5zQUELrynUn", "8gMBW412DNW", "4we_TlLDQAp" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "==== Summary ====\n\nThe paper studies the memorization capacity of deep networks as a function of the number of parameters. Many prior works have shown that to memorize $N$ examples $O(N)$ parameters are sufficient and that to memorize any set of $N$ examples $\\Omega(N)$ parameters are necessary. This work shows...
[ 7, 6, -1, -1, -1, -1, -1, 5 ]
[ 4, 3, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_Hrtbm8u0RXu", "iclr_2021_Hrtbm8u0RXu", "iclr_2021_Hrtbm8u0RXu", "EjBGlorQEjB", "4we_TlLDQAp", "Vl32bwa6hCw", "-46mkP3lD1X", "iclr_2021_Hrtbm8u0RXu" ]
iclr_2021_bMzj6hXL2VJ
Ordering-Based Causal Discovery with Reinforcement Learning
It is a long-standing question to discover causal relations among a set of variables in many empirical sciences. Recently, Reinforcement Learning (RL) has achieved promising results in causal discovery. However, searching the space of directed graphs directly and enforcing acyclicity by implicit penalties tend to be in...
withdrawn-rejected-submissions
In this paper, the authors propose an RL-based method for learning DAGs based on searching over causal orders instead of graphs. Order search for learning DAGs is a well-studied problem, and it is well-known that this can relieve some of the burden of searching through the space of DAGs. Several reviewers raised legiti...
train
[ "JtOFbuwVEig", "NHEEi5CnxH9", "pngJllAGVcB", "SZjrJEV9paX", "8Wbbz98Vtn", "oe-YswK-who", "JnIBSE1tWg", "L0z2e5IYdIx", "7IDoCO4oqL" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes an RL-based method to learn the causal ordering of variables. Specifically, it formulates the ordering search problem as a Markov decision process, and then uses different reward designs to optimize the ordering generating model. Compared to [1], which uses RL to search in the DAG space, the pr...
[ 5, 5, -1, -1, -1, -1, -1, 5, 5 ]
[ 4, 3, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2021_bMzj6hXL2VJ", "iclr_2021_bMzj6hXL2VJ", "7IDoCO4oqL", "NHEEi5CnxH9", "L0z2e5IYdIx", "JtOFbuwVEig", "iclr_2021_bMzj6hXL2VJ", "iclr_2021_bMzj6hXL2VJ", "iclr_2021_bMzj6hXL2VJ" ]
iclr_2021_c3MWGN_cTf
Policy Optimization in Zero-Sum Markov Games: Fictitious Self-Play Provably Attains Nash Equilibria
Fictitious Self-Play (FSP) has achieved significant empirical success in solving extensive-form games. However, from a theoretical perspective, it remains unknown whether FSP is guaranteed to converge to Nash equilibria in Markov games. As an initial attempt, we propose an FSP algorithm for two-player zero...
withdrawn-rejected-submissions
The paper shows that a form of Fictitious Self-Play converges to Nash equilibria in Markov games. Understanding the theoretical properties of Fictitious Self-Play is important; however, the paper in its current form is not ready for publication. The paper needs a more thorough discussion of related work, the assump...
train
[ "Br9DS7Idyx-", "jywN-FDGUmx", "DO6IHoztMLs", "OMMQI_ORRqe", "hp2crz0ucd", "NkM-HfdPPnE", "PNvNQ1S0c5D", "y6DIzfMGfcE", "4S2IT363AP4", "w_DSeRvAgtZ" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\nWe appreciate all the reviewers for their valuable feedback.\n\nWe'd like to highlight our contribution, address some common concerns raised by the reviewers and list the changes made to our revised submission.\n\nContribution\n\nWe propose an FSP-type algorithm, which we call smooth FSP, for two-player zero-su...
[ -1, -1, -1, -1, -1, -1, 6, 5, 8, 5 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 3, 2 ]
[ "iclr_2021_c3MWGN_cTf", "PNvNQ1S0c5D", "OMMQI_ORRqe", "y6DIzfMGfcE", "w_DSeRvAgtZ", "4S2IT363AP4", "iclr_2021_c3MWGN_cTf", "iclr_2021_c3MWGN_cTf", "iclr_2021_c3MWGN_cTf", "iclr_2021_c3MWGN_cTf" ]
iclr_2021_nCY83KxoehA
Automated Concatenation of Embeddings for Structured Prediction
Pretrained contextualized embeddings are powerful word representations for structured prediction tasks. Recent work found that better word representations can be obtained by concatenating different types of embeddings. However, the selection of embeddings to form the best concatenated representation usually varies depe...
withdrawn-rejected-submissions
The paper proposes a method for using multiple word embeddings in structured prediction tasks. The reviewers shared the concern that the method seems rather specific to this use case and that the empirical improvements do not justify the complexity of the approach. They also questioned the definition of the method as "arch...
train
[ "QzpGk_O_TP4", "aycUbSu26Y", "n3PLzndLxs", "MzXdCaU-6T8", "blxJzPwEoWp", "ToV5VebKh12", "chMnEt5Zu_t", "5kup9Wg2jCB", "p-VWUFC-njL", "6FJjIP-PykY", "NyAPJqvXtyH", "nQ-wfd0Xx7a" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper explores a way of learning how to automatically construct a concatenated set of embeddings for structured prediction tasks in NLP. The paper's model takes up to L embeddings concatenated together and feeds them into standard models (BiLSTM-CRFs or the BiLSTM-Biaffine technique of Dozat and Manning) to t...
[ 4, 6, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2021_nCY83KxoehA", "iclr_2021_nCY83KxoehA", "MzXdCaU-6T8", "blxJzPwEoWp", "iclr_2021_nCY83KxoehA", "iclr_2021_nCY83KxoehA", "aycUbSu26Y", "nQ-wfd0Xx7a", "QzpGk_O_TP4", "NyAPJqvXtyH", "iclr_2021_nCY83KxoehA", "iclr_2021_nCY83KxoehA" ]
iclr_2021_f0sNwNeqqxx
Practical Locally Private Federated Learning with Communication Efficiency
Federated learning (FL) is a technique that trains machine learning models from decentralized data sources. We study FL under local differential privacy constraints, which provides strong protection against sensitive data disclosures via obfuscating the data before leaving the client. We identify two major concerns in ...
withdrawn-rejected-submissions
This paper studies differentially private, communication-efficient training methods for federated learning. While the problem studied in this paper is well-motivated and interesting, the reviewers raised several concerns about the paper. Despite the authors' reconstruction protection explanation, the concern over large...
train
[ "-g5UcFoRR2w", "JEheugrE6O2", "bC6lRAmhKmJ", "M86Xh8ycyw-", "pJUl0TTdQe8", "tC7dMDeRKV1", "2H1XWBRVl-y", "g095W6xXt7", "819q7DEAbn" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for your comments. Below we address your specific points:\n\nOn choosing large epsilons: We provide a detailed introduction to the reconstruction attack and protection guarantees in the revision, and discussed the choice in the experimental section.\n\nStructures and writing: We've adjusted a...
[ -1, -1, -1, -1, -1, 5, 6, 3, 5 ]
[ -1, -1, -1, -1, -1, 2, 3, 4, 4 ]
[ "tC7dMDeRKV1", "2H1XWBRVl-y", "g095W6xXt7", "819q7DEAbn", "iclr_2021_f0sNwNeqqxx", "iclr_2021_f0sNwNeqqxx", "iclr_2021_f0sNwNeqqxx", "iclr_2021_f0sNwNeqqxx", "iclr_2021_f0sNwNeqqxx" ]
iclr_2021_j5d9qacxdZa
Energy-Based Models for Continual Learning
We motivate Energy-Based Models (EBMs) as a promising model class for continual learning problems. Instead of tackling continual learning via the use of external memory, growing models, or regularization, EBMs have a natural way to support a dynamically-growing number of tasks and classes and less interference with old...
withdrawn-rejected-submissions
There were opinions on both sides of this paper from the reviewers. Reviewers were excited by the novel application of energy-based models (EBMs) to continual learning and the resulting performance gains, but were concerned by the more direct application of EBMs (which has been explored in other work, and here adapted...
train
[ "vfi-ebed8xR", "hIiRQTDtMlH", "3dStbOjNRc2", "Kvjx6vnlJZZ", "u_Eb_XWUFGx", "GUSjLzwVgJx", "99DmF0MoOPW", "ygYIPalcupQ", "B1RUu7zAN8b", "UOI4M6M-65", "QzeltLz-bzH", "qZ74kfSktaL", "g3L9ZPZ84I1" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear AnonReviewer2,\n\nThank you very much for your thorough and insightful review. We spent a large amount of work answering the questions initially requested. We would appreciate it if you could take a look at the revised version and re-evaluate our work.\n\nMany thanks!\n\nPaper Authors", "Dear Reviewer,\n\nT...
[ -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, 4, 6, 6 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "QzeltLz-bzH", "3dStbOjNRc2", "iclr_2021_j5d9qacxdZa", "iclr_2021_j5d9qacxdZa", "iclr_2021_j5d9qacxdZa", "qZ74kfSktaL", "ygYIPalcupQ", "QzeltLz-bzH", "3dStbOjNRc2", "g3L9ZPZ84I1", "iclr_2021_j5d9qacxdZa", "iclr_2021_j5d9qacxdZa", "iclr_2021_j5d9qacxdZa" ]
iclr_2021_Atpv9GUhRt6
Learning from multiscale wavelet superpixels using GNN with spatially heterogeneous pooling
Neural networks have become the standard for image classification tasks. On one hand, convolutional neural networks (CNNs) achieve state-of-the-art performance by learning from a regular grid representation of images. On the other hand, graph neural networks (GNNs) have shown promise in learning image classification fr...
withdrawn-rejected-submissions
This paper received mixed reviews. One reviewer is positive, while the remaining three reviewers are either negative or feel that the paper is below the threshold for acceptance. The ideas presented in the paper are interesting and novel - this was acknowledged by three of the reviewers, even those who did not recomm...
train
[ "sjz4oTE2tKz", "Ln_Qm5Q7-4L", "QLkJYcIa12u", "PqviE3VCiYV", "PfPXYPdJmj_", "JkTCVP39ucb", "ixmuJV47EU", "aVOg4c9fnfg", "fop7RntH1M3" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "## Part 2/2\n\n> **There are two primary weaknesses: First, SLIC is a poor baseline and many other superpixel methods have been developed since 2010 - \"Superpixel Segmentation using Linear Spectral Clustering\" or \"Robust superpixels using color and contour features along linear path\" or even the recent ``Simpl...
[ -1, -1, -1, -1, -1, 2, 5, 7, 5 ]
[ -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "Ln_Qm5Q7-4L", "JkTCVP39ucb", "ixmuJV47EU", "fop7RntH1M3", "aVOg4c9fnfg", "iclr_2021_Atpv9GUhRt6", "iclr_2021_Atpv9GUhRt6", "iclr_2021_Atpv9GUhRt6", "iclr_2021_Atpv9GUhRt6" ]
iclr_2021_-5W5OBfFlwX
Regret Bounds and Reinforcement Learning Exploration of EXP-based Algorithms
EXP-based algorithms are often used for exploration in multi-armed bandit. We revisit the EXP3.P algorithm and establish both the lower and upper bounds of regret in the Gaussian multi-armed bandit setting, as well as a more general distribution option. The analyses do not require bounded rewards compared to classical ...
withdrawn-rejected-submissions
This paper considers a classical multi-armed bandit problem (and then a more general RL setting) and proves some upper and lower bounds in cases that were not explicitly studied in the literature. However, those results are very incremental and do not justify (maybe yet, going beyond the sub-Gaussian case could be interest...
train
[ "52NbQDyoZJp", "yH5T7jSyXd", "oRW-HRfImn0", "mY9SU72M6-b", "CM_0qtnWh6B", "mr6VACkYmZ9" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear reviewer,\n\nThank you very much for providing these very detailed comments. We highly value your feedback and appreciate your time and effort. Below are responses and our perspectives regarding the comments. \n\nRegarding the unbounded rewards, reward in our case can be arbitrarily large for every time step ...
[ -1, -1, -1, 4, 4, 4 ]
[ -1, -1, -1, 3, 4, 3 ]
[ "mY9SU72M6-b", "CM_0qtnWh6B", "mr6VACkYmZ9", "iclr_2021_-5W5OBfFlwX", "iclr_2021_-5W5OBfFlwX", "iclr_2021_-5W5OBfFlwX" ]
iclr_2021_98ntbCuqf4i
MQES: Max-Q Entropy Search for Efficient Exploration in Continuous Reinforcement Learning
The principle of optimism in the face of (aleatoric and epistemic) uncertainty has been utilized to design efficient exploration strategies for Reinforcement Learning (RL). Different from most prior work targeting at discrete action space, we propose a generally information-theoretic exploration principle called Max-Q ...
withdrawn-rejected-submissions
The paper contributes to the community by introducing an approximation to distribution Q functions, based on the epistemic and aleatoric uncertainty. The reviewers believe the ideas make sense. However, the presentation and the experimental results make it hard for them to understand some important details. For example, t...
train
[ "TRAckQEI_ct", "jaVNxsrorAk", "Xe6HPoLQhCJ", "-lERuumpCP7", "GpNUv95AZZa", "7O-hHfpeJos", "7z2e34yFC95", "cUmkYrEJGQm", "r-FE_4YDv7n", "umHwKNCpb_1", "6XZfa778gXU", "OVog7hbw0BY", "2lWYm0iIR7U" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "【Q1:I felt that the paper isn't well written and discusses a lot of different concepts in a haphazard manner. There are a lot of equations and symbols in the text without proper explanation and context which make it difficult to gather the main contribution. The language used gets vague in many statements made in ...
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 5, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 3, 3 ]
[ "umHwKNCpb_1", "6XZfa778gXU", "OVog7hbw0BY", "2lWYm0iIR7U", "iclr_2021_98ntbCuqf4i", "7z2e34yFC95", "r-FE_4YDv7n", "-lERuumpCP7", "iclr_2021_98ntbCuqf4i", "iclr_2021_98ntbCuqf4i", "iclr_2021_98ntbCuqf4i", "iclr_2021_98ntbCuqf4i", "iclr_2021_98ntbCuqf4i" ]
iclr_2021_whNntrHtB8D
Gradient Based Memory Editing for Task-Free Continual Learning
Prior work on continual learning often operates in a “task-aware” manner, by assuming that the task boundaries and identities of the data examples are known at all times. In practice, it is rarely the case that such information is exposed to the methods (thus called “task-free”)–a setting that is relatively...
withdrawn-rejected-submissions
The range of the initial reviews was fairly wide, with overall scores ranging from 4 to 7. The authors provided a good response that answered most of the reviewers' comments and questions. One of the reviewers even increased their score following the authors' response. The focus of some of our discussions and what ul...
train
[ "RM3qGt9pr8", "WEys4feIdki", "CuMaSQTpTRP", "3Jx27EKn6Fy", "Aj3dUm_xl22", "lhXErUPOrSZ", "i6bw1W9715L", "QmFMSXGtAAj", "f-ZwHWwUpUe", "UAYMVvcvTld", "Hg7njTZ8L7o", "JAiWUXy2XRE" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "## Summary\n\nThis paper proposes a task-free continual learning method called GMED that extends experience replay.\nThe key idea is to modify the individual data points in the replay memory to maximize the one-step forgetting at the current time step.\n\n---\n\n## Pros\n\n- The idea of editing the data in a repla...
[ 6, 3, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5 ]
[ 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5 ]
[ "iclr_2021_whNntrHtB8D", "iclr_2021_whNntrHtB8D", "Aj3dUm_xl22", "f-ZwHWwUpUe", "i6bw1W9715L", "WEys4feIdki", "RM3qGt9pr8", "WEys4feIdki", "Hg7njTZ8L7o", "JAiWUXy2XRE", "iclr_2021_whNntrHtB8D", "iclr_2021_whNntrHtB8D" ]
iclr_2021_P5RQfyAmrU
Model-centric data manifold: the data through the eyes of the model
We discover that deep ReLU neural network classifiers can see a low-dimensional Riemannian manifold structure on data. Such structure comes via the local data matrix, a variation of the Fisher information matrix, where the role of the model parameters is taken by the data variables. We obtain a foliation of the data do...
withdrawn-rejected-submissions
The paper defines a "local data matrix" (inspired by the local Fisher matrix) and uses it to obtain a foliation in the data space. This provides a lens to view the data space from the model's perspective. While the idea is interesting, the reviewers have two main concerns which are not fully addressed in the a...
train
[ "5cmtO9ziMfV", "YfXHcAe47bA", "kNyMe_qw8Gk", "6u_R7X30Ok2", "Fqg_NdPGGyF", "SWCxnuW8qJI", "en0gLSYa0EJ", "bcCkCT6lxe", "g6-G-Qjelre" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Update after response:\nWhile I appreciate the authors' attempt to address my concerns, the fact that model is required to not be fully trained is concerning. It was in this context that I suggested label smoothing - that training on smoothed labels might address a sparse G matrix, but it seems like this point was...
[ 4, 5, -1, -1, -1, -1, -1, 5, 6 ]
[ 4, 3, -1, -1, -1, -1, -1, 3, 5 ]
[ "iclr_2021_P5RQfyAmrU", "iclr_2021_P5RQfyAmrU", "bcCkCT6lxe", "g6-G-Qjelre", "YfXHcAe47bA", "5cmtO9ziMfV", "iclr_2021_P5RQfyAmrU", "iclr_2021_P5RQfyAmrU", "iclr_2021_P5RQfyAmrU" ]
iclr_2021_PGmqOzKEPZN
Non-Negative Bregman Divergence Minimization for Deep Direct Density Ratio Estimation
The estimation of the ratio of two probability densities has garnered attention as the density ratio is useful in various machine learning tasks, such as anomaly detection and domain adaptation. To estimate the density ratio, methods collectively known as direct density ratio estimation (DRE) have been explored. These ...
withdrawn-rejected-submissions
This paper discusses likelihood-ratio estimation using the Bregman divergence. The authors consider 'train-loss hacking', an overfitting issue that drives the divergence to minus infinity. They introduce a non-negative correction for the divergence under the assumption that we have knowledge of the upper...
val
[ "fTcvK-jL8Cx", "vYb05dHYHmM", "CR_3DBq6QL8", "qrV9Jd3ZJIT", "G386-PP42vk", "aJTiNFMGTsi", "01MMxCvFeKn", "atkzQAWwHRX", "mYz0sYiGePz" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper addresses an issue that arises in a particular formulation of the density ratio estimation problem. Namely, when one tries to directly fit a density estimation ratio, while minimizing Bregman divergence, it may be that overfitting causes the minimization problem to diverge to minus-infinity.\nThe paper s...
[ 6, -1, -1, -1, -1, -1, 6, 5, 6 ]
[ 3, -1, -1, -1, -1, -1, 4, 3, 2 ]
[ "iclr_2021_PGmqOzKEPZN", "iclr_2021_PGmqOzKEPZN", "fTcvK-jL8Cx", "01MMxCvFeKn", "mYz0sYiGePz", "atkzQAWwHRX", "iclr_2021_PGmqOzKEPZN", "iclr_2021_PGmqOzKEPZN", "iclr_2021_PGmqOzKEPZN" ]
iclr_2021_RB0iNPXIj60
BBRefinement: an universal scheme to improve precision of box object detectors
We present a conceptually simple yet powerful and flexible scheme for refining predictions of bounding boxes. Our approach is trained standalone on GT boxes and can then be combined with an object detector to improve its predictions. The method, called BBRefinement, uses mixture data of image information and the object...
withdrawn-rejected-submissions
The reviewers appreciate the simplicity of the approach, but found the exposition lacking. There were also concerns about strong similarities to CascadeRCNN, which were not resolved in the rebuttal. In the end all reviewers recommend rejection. The AC sees no reason to overturn this recommendation.
train
[ "BES1WrhrSLp", "Z-FFs6ZNKHE", "Mr2xc4y2Ud4", "KQUHQCuTT3E", "8Tl8_fItiAh", "EeEjMA9iSgc", "kpEVNHqabFy", "6ztLO2NuAx", "cNLrqvODeX8", "RgmaLuaZ1w3" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "-The idea of this paper is just to crop the detections and then forward them to a second stage for more accurate predictions. This idea can be traced back to the original R-CNN paper, which is not even referred to or discussed. There are also many papers having a second stage to refine the detection predictions, e.g. C...
[ 2, -1, -1, -1, -1, -1, -1, 2, 4, 4 ]
[ 5, -1, -1, -1, -1, -1, -1, 5, 5, 4 ]
[ "iclr_2021_RB0iNPXIj60", "iclr_2021_RB0iNPXIj60", "KQUHQCuTT3E", "6ztLO2NuAx", "cNLrqvODeX8", "BES1WrhrSLp", "RgmaLuaZ1w3", "iclr_2021_RB0iNPXIj60", "iclr_2021_RB0iNPXIj60", "iclr_2021_RB0iNPXIj60" ]
iclr_2021_uUAuBTcIIwq
Unsupervised Learning of Global Factors in Deep Generative Models
We present a novel deep generative model based on non i.i.d. variational autoencoders that captures global dependencies among observations in a fully unsupervised fashion. In contrast to the recent semi-supervised alternatives for global modeling in deep generative models, our approach combines a mixture model in the l...
withdrawn-rejected-submissions
This paper aims at learning disentangled representations at different levels without the supervision signal of group information. To achieve this, the proposed UG-VAE model uses both a global variable $\beta$ to represent common information shared across all data and a mixture-of-Gaussians prior for the local latent...
test
[ "c3vuQ1R7lWh", "Hiyg7mr2UFL", "WtsEdbF_YS8", "VhhnUEBXqqn", "6cG-LkyjZ3_", "Yi6lmYB8J8w", "cPrADm5B4wt", "-Fu88smDktG", "30eW2phOPLN", "C3bX5tNDW6y", "ZYhw4nAYInt", "YAEKBH2xrFc", "tKvIDhu4Z38", "sLfjmXdHN37", "aiPvQDL0uQy", "SL8-bibhplU" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposed a deep generative model based on the non i.i.d. VAE framework in an unsupervised version. The model which combines a mixture prior in the local latent space with global latent space has three advantages: First, the latent space can capture interpretable features. Second, the model performs doma...
[ 6, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5 ]
[ 3, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2021_uUAuBTcIIwq", "VhhnUEBXqqn", "iclr_2021_uUAuBTcIIwq", "Yi6lmYB8J8w", "tKvIDhu4Z38", "C3bX5tNDW6y", "c3vuQ1R7lWh", "SL8-bibhplU", "aiPvQDL0uQy", "aiPvQDL0uQy", "WtsEdbF_YS8", "WtsEdbF_YS8", "sLfjmXdHN37", "iclr_2021_uUAuBTcIIwq", "iclr_2021_uUAuBTcIIwq", "iclr_2021_uUAuBTcIIw...
iclr_2021_bQNosljkHj
On the Geometry of Deep Bayesian Active Learning
We present geometric Bayesian active learning by disagreements (GBALD), a framework that performs BALD on its geometric interpretation interacting with a deep learning model. There are two main components in GBALD: initial acquisitions based on core-set construction and model uncertainty estimation with those initial ...
withdrawn-rejected-submissions
All reviewers express concerns, such as about the presentation, the situation of the paper w.r.t. prior work, the experimental evaluation etc., and recommend rejection.
train
[ "F2Mwm4cZDGj", "L4q3pmQInKY", "n9Owdb_UfEM", "TfycMbWNgS", "CUINwqrJ2D", "QD7jBjJHGFX", "XfZfSMBEq5Y", "5h66gfRhjS", "xsPqMmwQV12", "9LV4SVZKlaW" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "1. The contributions are highlighted in Page 2. \n\n Our Bayesian active learning from geometry may benefit those ICLR members who are anxious about his/her deep model without reasonable theoretical explanation. Therefore, our work brings new insights for active learning with deep representation. \n\n\n\n2. Acti...
[ -1, -1, -1, -1, -1, -1, 5, 3, 5, 4 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 5, 4 ]
[ "XfZfSMBEq5Y", "9LV4SVZKlaW", "5h66gfRhjS", "iclr_2021_bQNosljkHj", "QD7jBjJHGFX", "xsPqMmwQV12", "iclr_2021_bQNosljkHj", "iclr_2021_bQNosljkHj", "iclr_2021_bQNosljkHj", "iclr_2021_bQNosljkHj" ]
iclr_2021_sr68jSUakP
Orthogonal Subspace Decomposition: A New Perspective of Learning Discriminative Features for Face Clustering
Face clustering is an important task, due to its wide applications in practice. Graph-based face clustering methods have recently made a great progress and achieved new state-of-the-art results. Learning discriminative node features is the key to further improve the performance of graph-based face...
withdrawn-rejected-submissions
During the discussion phase, although the reviewers acknowledge the effectiveness of the proposed approach, they raised the concern about the novelty of the paper. In my opinion, I also agree that the novelty is not well justified in this paper. In the related work section, although the authors put an effort to review...
train
[ "7SFwYSBVy3p", "y3aCswE7azm", "cDcqXXdJVf", "_eW3HkDFUYd", "65DRjPJmLA9", "6w6DnJkDS2V", "W-KysCpU47J" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "**Novelty:**\n\nWe propose a new method for learning discriminative features, i.e., from the perspective of subspace learning and feature selection. \n\nFor the purpose of learning discriminative representations, \nalmost all previous methods in the area of deep learning aim to increase the inter-class distance or...
[ -1, -1, -1, -1, 5, 7, 4 ]
[ -1, -1, -1, -1, 5, 3, 4 ]
[ "iclr_2021_sr68jSUakP", "65DRjPJmLA9", "W-KysCpU47J", "6w6DnJkDS2V", "iclr_2021_sr68jSUakP", "iclr_2021_sr68jSUakP", "iclr_2021_sr68jSUakP" ]
iclr_2021_uMDbGsVjCS4
Frequency-aware Interface Dynamics with Generative Adversarial Networks
We present a new method for reconstructing and refining complex surfaces based on physical simulations. Taking a roughly approximated simulation as input, our method infers corresponding spatial details while taking into account how they evolve over time. We consider this problem in terms of spatial and temporal freq...
withdrawn-rejected-submissions
This paper addresses the problem of super-resolution of coarse physical simulations into fine-grained video by satisfying some physical properties. The method uses generative models for sequences of images (conditional GANs with spatial and temporal discriminators), that take into account both the multiplicity of reali...
train
[ "bw8n5gtUoPS", "hqST0CH4VgZ", "NNVMQPjQC_-", "ggU0pZYXBzS", "o2ArWXe4GlZ", "kskV924ERdU", "9hiF4UAdyJr" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors propose to use the same formulation of adversarial losses over space and time as done by Xie et. al. The block-wise frequency evaluation is a new contribution of the paper. \n\nThe paper reads more like a report as opposed to showing unique insights for the niche problem the authors have tackled. There...
[ 3, -1, -1, -1, -1, 4, 5 ]
[ 3, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2021_uMDbGsVjCS4", "kskV924ERdU", "bw8n5gtUoPS", "9hiF4UAdyJr", "iclr_2021_uMDbGsVjCS4", "iclr_2021_uMDbGsVjCS4", "iclr_2021_uMDbGsVjCS4" ]
iclr_2021_3UTezOEABr
TimeAutoML: Autonomous Representation Learning for Multivariate Irregularly Sampled Time Series
Multivariate time series (MTS) data are becoming increasingly ubiquitous in diverse domains, e.g., IoT systems, health informatics, and 5G networks. To obtain an effective representation of MTS data, it is not only essential to consider unpredictable dynamics and highly variable lengths of these data but also important...
withdrawn-rejected-submissions
The paper introduces an AutoML method for irregular multivariate time series. The method automates the selection of the configuration as well as the hyperparameter optimization depending on the task. A Bayesian approach handles the network structure search while VAEs + attention is used to learn representations from...
train
[ "aDKFKTlQFDM", "y3oTsxRIjFJ", "Y2JPyJwZdGE", "gKZwDDrnJ_U", "PPnnvyo2ftA", "96xIeYoxzX5", "JA186rnDH2A" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "I carefully read the paper and it was interesting.\nThe results seem promising; however, the novelty and motivations are not that satisfied.\nDetailed comments are as follows.\n\n1. Challenge 1 (Trial / Error)\n- I think the first challenge (trial/error) is usual and many hyper-parameter tuning algorithms and tool...
[ 4, -1, -1, -1, -1, 3, 4 ]
[ 4, -1, -1, -1, -1, 5, 4 ]
[ "iclr_2021_3UTezOEABr", "aDKFKTlQFDM", "96xIeYoxzX5", "JA186rnDH2A", "iclr_2021_3UTezOEABr", "iclr_2021_3UTezOEABr", "iclr_2021_3UTezOEABr" ]
iclr_2021_K6YbHUIWHOy
Memory Augmented Design of Graph Neural Networks
The expressive power of graph neural networks (GNN) has drawn much interest recently. Most existent work focused on measuring the expressiveness of GNN through the task of distinguishing between graphs. In this paper, we inspect the representation limits of locally unordered messaging passing (LUMP) GNN architecture th...
withdrawn-rejected-submissions
In this paper, the authors proposed a method to handle the problem of LUMP GNN architecture. This problem is indeed important and the proposed method has some merits. However, the proposed approach is only applicable to node classification. Moreover, the proposed approach shows the similar theoretical results of Sato ...
train
[ "Xkme65EMlk", "gtay4hJRePO", "jNmC7EsN60g", "Yyrca1DabZE", "UyRd-MBF_sG", "WaP8RljG9vi", "hNXFsQtnT64", "-T-QHMz9P6s", "U0aQ5KXTy0s" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "==========Summary==========\n\nIn this paper, the authors study how to enhance the expressive power of GNNs by memory augmentation. In particular, the authors focus on the cases of the \"locally indistinguishable\" property, demonstrate why existing GNNs fail to differentiate such structures, and propose a memory ...
[ 5, 5, -1, -1, -1, -1, -1, 5, 3 ]
[ 4, 4, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_K6YbHUIWHOy", "iclr_2021_K6YbHUIWHOy", "iclr_2021_K6YbHUIWHOy", "-T-QHMz9P6s", "Xkme65EMlk", "gtay4hJRePO", "U0aQ5KXTy0s", "iclr_2021_K6YbHUIWHOy", "iclr_2021_K6YbHUIWHOy" ]
iclr_2021_D04TGKz5rfF
A frequency domain analysis of gradient-based adversarial examples
It is well known that deep neural networks are vulnerable to adversarial examples. We attempt to understand adversarial examples from the perspective of frequency analysis. Several works have empirically shown that the gradient-based adversarial attacks perform differently in the low-frequency and high-frequency part o...
withdrawn-rejected-submissions
This work performs a frequency domain analysis on gradient-based adversarial perturbations. The authors argue that the perturbation deltas are largely concentrated in the high frequency domain and suggest a low pass filtering technique to improve the robustness of image classifiers. Reviewers raised several concerns as...
train
[ "HxBeYiYABTh", "6HnFbG6C02o", "vMSsUOU-m4", "93zNFrtDYL0", "ivacPLo7Hv", "35ST9EmfrMb", "mGzfAt4VSi", "5aWY-59oZst", "s9_GLwmOeld", "PQ6TETYyHAr", "H0Zs2LvhRLz" ]
[ "official_reviewer", "public", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper analyzes the frequency spectrum of adversarial perturbations during normal training. The authors show that the low frequency component (LFC) of adversarial perturbation is increasing during training, but it is not increasing fast enough, so the LFC of adversarial perturbation is not as dense as the inpu...
[ 4, -1, -1, -1, -1, -1, -1, -1, 3, 5, 7 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "iclr_2021_D04TGKz5rfF", "mGzfAt4VSi", "H0Zs2LvhRLz", "s9_GLwmOeld", "HxBeYiYABTh", "PQ6TETYyHAr", "5aWY-59oZst", "iclr_2021_D04TGKz5rfF", "iclr_2021_D04TGKz5rfF", "iclr_2021_D04TGKz5rfF", "iclr_2021_D04TGKz5rfF" ]
iclr_2021_YQVjbJPnPc9
Predictive Attention Transformer: Improving Transformer with Attention Map Prediction
Transformer is a ubiquitous model for natural language processing and has also attracted wide attentions in other domains such as computer vision. The self-attention maps, learned independently for each layer, are indispensable for a transformer model to encode the dependencies among input tokens, however, learning the...
withdrawn-rejected-submissions
Multiple reviewers point out the interesting improvement to mix attention maps at different layers via convolution based prediction modules. This module is sufficient to show improvements only on encoder side while comparing to concurrent work Synthesizer. However, the novelty of the work is limited as compared to othe...
train
[ "LkSsjLSXnXM", "dYszi1w7cs7", "SKJIwAvXDP7", "-HeKlmM1KVi", "tomzp9D_dtj", "ZDCm4TuJTSC", "gkiYyR1lPE5", "8-Yx5MkuEYB", "61OcQkl3z8k", "mEz5bdrZOqp", "TS8M5EHlDcx", "0ckHRz8VuWc", "VEg9zo1hDhb", "0qDrGg14JGM", "E2ngssr6WG8", "13C7i0iSHqV", "IN_7RoUPm9D", "qcAYZpe-h7", "XLTc-sefSj...
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "...
[ "1. We add evaluation results on RoBERTa-Large and T5-Base, which verify the superiority of PA-BERT on more pre-trained models. Impressively, PA-T5-Base outperforms Synthesizer on the same backbone model without the need of pre-training from scratch again. We also want to address that the conclusion of Synthesizer ...
[ -1, -1, -1, 2, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_YQVjbJPnPc9", "mEz5bdrZOqp", "tomzp9D_dtj", "iclr_2021_YQVjbJPnPc9", "ZDCm4TuJTSC", "8-Yx5MkuEYB", "0ckHRz8VuWc", "gkiYyR1lPE5", "VEg9zo1hDhb", "13C7i0iSHqV", "iclr_2021_YQVjbJPnPc9", "E2ngssr6WG8", "TS8M5EHlDcx", "-HeKlmM1KVi", "0qDrGg14JGM", "XLTc-sefSjl", "qcAYZpe-h7", ...
iclr_2021_8CjVaaSSVxg
Learning Predictive Communication by Imagination in Networked System Control
Dealing with multi-agent control in networked systems is one of the biggest challenges in Reinforcement Learning (RL) and limited success has been presented compared to recent deep reinforcement learning in single-agent domain. However, obstacles remain in addressing the delayed global information where each agent lear...
withdrawn-rejected-submissions
This paper proposes a technique of communicating predicted local states between agents in multi-agent reinforcement learning to deal with the delay in communication. While the paper addresses an important practical problem, the reviewers have concerns about the insufficiency of novelty and experimental validation.
train
[ "tZW8nNarvv", "Py76_cgf74H", "28LUygPh-SE", "v6Y1mFRdvxO", "HqoUpiBgaSe", "9450cuUlatr", "ZI_xXkZa4W", "D1NtKLi2tYd" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposes to communicate predicted local states between neighboring agents to address the problem of delayed information in networked multi-agent reinforcement learning. To enable agents to predict future states, a world model is learned at each agent. It is empirically demonstrated that the proposed meth...
[ 4, -1, -1, -1, -1, -1, 4, 5 ]
[ 4, -1, -1, -1, -1, -1, 2, 4 ]
[ "iclr_2021_8CjVaaSSVxg", "28LUygPh-SE", "v6Y1mFRdvxO", "D1NtKLi2tYd", "tZW8nNarvv", "ZI_xXkZa4W", "iclr_2021_8CjVaaSSVxg", "iclr_2021_8CjVaaSSVxg" ]
iclr_2021_YzgAOeA67xX
Stable Weight Decay Regularization
Weight decay is a popular regularization technique for training of deep neural networks. Modern deep learning libraries mainly use L2 regularization as the default implementation of weight decay. \citet{loshchilov2018decoupled} demonstrated that L2 regularization is not identical to weight decay for adaptive gradient m...
withdrawn-rejected-submissions
The paper proposes a novel way to have weight decay-like update rule. Empirically, the authors claim that it improves generalization when applied to momentum-based optimizers and optimizers with coordinate-wise learning rates. This paper has been thoroughly discussed, both in public and private mode. The strength of t...
train
[ "GPSY2oklaRz", "e-UuszURwzN", "d0Y2DeCSPcV", "9agPUC6Gt1u", "0zuPwz9Nin", "fY8EdjDC8bd", "-awd0khPabj", "9_TP9f1twb", "9JK__IKHEMN", "ZvXhf_Kherb", "jbXQPbStVVH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "public", "official_reviewer" ]
[ "## Summary\n\nIn this paper the authors introduce the notion of stable weight decay. The stable weight decay property can be defined in dimension 1 as follow: the effective learning rate represents an amount of time ellapsed between two iteration. The weight decay factor normalized (in log space) by the time ellas...
[ 5, 5, 6, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ 3, 4, 4, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_YzgAOeA67xX", "iclr_2021_YzgAOeA67xX", "iclr_2021_YzgAOeA67xX", "ZvXhf_Kherb", "iclr_2021_YzgAOeA67xX", "e-UuszURwzN", "GPSY2oklaRz", "d0Y2DeCSPcV", "jbXQPbStVVH", "iclr_2021_YzgAOeA67xX", "iclr_2021_YzgAOeA67xX" ]
iclr_2021_X9LHtgR4vq
Bractivate: Dendritic Branching in Medical Image Segmentation Neural Architecture Search
Researchers manually compose most neural networks through painstaking experimentation. This process is taxing and explores only a limited subset of possible architecture. Researchers design architectures to address objectives ranging from low space complexity to high accuracy through hours of experime...
withdrawn-rejected-submissions
This paper would greatly benefit from some reorganization/rewriting since, as pointed out by some of the reviewers, it’s hard to follow in its current form. While a biologically inspired NAS algorithm could be an interesting direction to explore, the current paper falls short in providing evidence that the approach is ...
train
[ "zX76m061gA1", "IBFdRqFRuH", "vG4RWv-7FLt" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors propose a neural architecture search (NAS) algorithm inspired by brain physiology. In particular, they propose a NAS algorithm based on neural dendritic branching, and apply it to three different segmentation tasks (namely cell nuclei, electron microscopy, and chest X-ray lung segmentation). The author...
[ 3, 4, 4 ]
[ 3, 4, 4 ]
[ "iclr_2021_X9LHtgR4vq", "iclr_2021_X9LHtgR4vq", "iclr_2021_X9LHtgR4vq" ]
iclr_2021_5K8ZG9twKY
Efficient Estimators for Heavy-Tailed Machine Learning
A dramatic improvement in data collection technologies has aided in procuring massive amounts of unstructured and heterogeneous datasets. This has consequently led to a prevalence of heavy-tailed distributions across a broad range of tasks in machine learning. In this work, we perform thorough empirical studies to show...
withdrawn-rejected-submissions
This paper studies the problem of multivariate mean estimation with a focus on the heavy-tailed setting. The authors give an algorithm for this estimation task and then use it (in essentially a black-box manner) to obtain heavy-tailed estimators for various supervised learning tasks. As pointed out by one of the review...
train
[ "n6fcOxjmver", "1LZaiBFgPWY", "KRSvwVRfxXK", "rtr_j0w6--", "G2UsfkE72Qr", "f_1sVOQCjnZ", "6HCaKJQVf4L", "s-7mh69s5b5", "A6dVXQg7qUR", "tO52JyhEsKQ", "ra8mhXwJ_Ox", "VK-kyXv7zK" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I thank the authors for their detailed reviews. I have updated my score\n\n---\n\nThe submission presents a robust estimator for the mean of heavy-tailed distributions for application to neural network training. Understanding and dealing with the distribution of the noise in stochastic optimization for machine lea...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 5 ]
[ "iclr_2021_5K8ZG9twKY", "KRSvwVRfxXK", "s-7mh69s5b5", "VK-kyXv7zK", "tO52JyhEsKQ", "rtr_j0w6--", "n6fcOxjmver", "ra8mhXwJ_Ox", "iclr_2021_5K8ZG9twKY", "iclr_2021_5K8ZG9twKY", "iclr_2021_5K8ZG9twKY", "iclr_2021_5K8ZG9twKY" ]
iclr_2021_7YctWnyhjpL
Multi-Task Learning by a Top-Down Control Network
As the range of tasks performed by a general vision system expands, executing multiple tasks accurately and efficiently in a single network has become an important and still open problem. Recent computer vision approaches address this problem by branching networks, or by a channel-wise modulation of the network feature...
withdrawn-rejected-submissions
The paper is very interesting and novel, and all reviewers are of the same opinion. The main concern, however, is on the experimental section that is limited to image classification benchmarks and that some critical comparisons are missing (e.g. clarify factors that play key role in improvement, more computation and t...
train
[ "YHBpmHsCN4J", "xi8TWWQFRVX", "Ia0LKIXdNcq", "r_cce5u9SgU", "radcywwa017", "Er-JzIIOl3y", "2Im2GKhT5un", "zV6WAIo4qcI" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "In this paper a novel top-down control network is introduced for multi-task learning. Different from the traditional bottom-up attention models, the authors introduce a top-down module to modify the activation of recognition network based on different tasks. Specifically,the proposed module consists of three ident...
[ 7, -1, -1, -1, -1, -1, 5, 5 ]
[ 3, -1, -1, -1, -1, -1, 5, 3 ]
[ "iclr_2021_7YctWnyhjpL", "iclr_2021_7YctWnyhjpL", "iclr_2021_7YctWnyhjpL", "YHBpmHsCN4J", "zV6WAIo4qcI", "2Im2GKhT5un", "iclr_2021_7YctWnyhjpL", "iclr_2021_7YctWnyhjpL" ]
iclr_2021__sSHg203jSu
Data-aware Low-Rank Compression for Large NLP Models
The representations learned by large-scale NLP models such as BERT have been widely used in various tasks. However, the increasing model size of the pre-trained models also brings the efficiency challenges, including the inference speed and the model size when deploying the model on devices. Specifically, most operatio...
withdrawn-rejected-submissions
This paper proposes a method for compressing weight matrices in large scale pre-trained NLP encoders (like BERT) through low-rank decompositions of both fully connected and self-attention layers. The method is used to compress and speedup pre-trained models. Experiments measure timing on a single CPU thread and demonst...
train
[ "uMdTUK-2RHt", "74bi8yPrzN0", "z8xMupPpWR", "a2jckV4QCL4", "Z5Qwd0JIyf", "rlYIxRhBJn", "BG4QZnUKNee", "64He9zp5QHA", "_XEacCfnCvf", "IyBbbPn3Yum", "X_MB0qeOe8" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary:\nThis paper studies a technique to increase the inference speed and decrease model sizes of pretrained NLP models such as BERT. Since most operations in BERT consist of matrix multiplications, the authors conduct empirical experiments to show that while matrices themselves are not low-rank, the learned re...
[ 5, 6, -1, -1, -1, -1, -1, -1, -1, 5, 3 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2021__sSHg203jSu", "iclr_2021__sSHg203jSu", "a2jckV4QCL4", "Z5Qwd0JIyf", "rlYIxRhBJn", "74bi8yPrzN0", "uMdTUK-2RHt", "IyBbbPn3Yum", "X_MB0qeOe8", "iclr_2021__sSHg203jSu", "iclr_2021__sSHg203jSu" ]
iclr_2021_yEnaS6yOkxy
Class Balancing GAN with a Classifier in the Loop
Generative Adversarial Networks (GANs) have swiftly evolved to imitate increasingly complex image distributions. However, majority of the developments focus on performance of GANs on balanced datasets. We find that the existing GANs and their training regimes which work well on balanced datasets fail to be effective in...
withdrawn-rejected-submissions
The authors have provided very detailed responses and added additional experimental results, which have helped address some of the referees' concerns. However, since the modification made to a vanilla GAN algorithm is relatively small, the reviewers are hoping to see the experiments on more appropriate real-world datas...
train
[ "Fm9JEvNOmHX", "ohBUTWW1pb-", "hk7AFnAt9dS", "_QhabSy_jPu", "7t6yZcxzfFl", "WquLCjHNgW", "7CoQ9WzMCYG", "RBIwkLn4aYd", "Z2Lg3QDOrtL", "BTiKjJKSx6f", "C-0hAOsZ6Ch" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer" ]
[ "**Overview**: The paper presents a simple regularizer term that aims to force a GAN to generate samples following a uniform distribution over different classes. The regularizer depends on a classifier that works well on an imbalanced or long-tailed dataset. The paper presents experiments on CIFAR-10 and LSUN that ...
[ 4, 5, 5, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ 4, 3, 4, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_yEnaS6yOkxy", "iclr_2021_yEnaS6yOkxy", "iclr_2021_yEnaS6yOkxy", "ohBUTWW1pb-", "C-0hAOsZ6Ch", "hk7AFnAt9dS", "hk7AFnAt9dS", "iclr_2021_yEnaS6yOkxy", "Fm9JEvNOmHX", "Fm9JEvNOmHX", "iclr_2021_yEnaS6yOkxy" ]
iclr_2021_KjeUNkU2d26
Rethinking Content and Style: Exploring Bias for Unsupervised Disentanglement
Content and style (C-S) disentanglement intends to decompose the underlying explanatory factors of objects into two independent latent spaces. Aiming for unsupervised disentanglement, we introduce an inductive bias to our formulation by assigning different and independent roles to content and style when approximating t...
withdrawn-rejected-submissions
The paper proposes an approach to defining/tackling the question of separating "style" and "content" of images, and introduces a novel way to learn representation that disentangle these aspects of images. I think it offers some new ideas. The reviewers were split on the evaluation. Among the chief concerns with the ini...
val
[ "sUKXcTf0aPE", "ZIBydlj58CP", "rr7PdzOTJei", "mI0RpywmduU", "8PE-cnw-KnB", "FS3J3Bjujkj", "T8-T10YRgDT" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "In this paper, authors introduce a new approach to content-style (C-S) disentanglement for multimodal unsupervised image-to-image translation. The main idea behind the proposed method is that the content information is encoded into the latent space common for both source and target domains, while the domain-specif...
[ 4, 7, -1, -1, -1, -1, 4 ]
[ 4, 4, -1, -1, -1, -1, 4 ]
[ "iclr_2021_KjeUNkU2d26", "iclr_2021_KjeUNkU2d26", "iclr_2021_KjeUNkU2d26", "ZIBydlj58CP", "T8-T10YRgDT", "sUKXcTf0aPE", "iclr_2021_KjeUNkU2d26" ]
iclr_2021_QSMvGB5j5-
Higher-order Structure Prediction in Evolving Graph Simplicial Complexes
Dynamic graphs are rife with higher-order interactions, such as co-authorship relationships and protein-protein interactions in biological networks, that naturally arise between more than two nodes at once. In spite of the ubiquitous presence of such higher-order interactions, limited attention has been paid to the hig...
withdrawn-rejected-submissions
This paper proposes a method for predicting higher-order structure in time-varying graphs. The paper was reviewed by three expert reviewers, and while they expressed appreciation for the sensible solution, they have remaining concerns about the novel contributions and comparisons (analytical and empirical) with previou...
train
[ "LzhiXiUJU9", "fpD00uEiMx", "ynevdce6SBP", "hlGqVn5iVK1", "zl5HUL2ilG", "BMQxp3Y-imn", "2NORIWlbcP", "btjzXalj86n", "b6027vtsA6a", "B3t3bwlJvZE", "3Z1eC_-aXe3", "uBp2EOU5S0", "Jrd2km6-jP_" ]
[ "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for your clear feedback. Below, we answer your queries.\n\n**R3A3**: Your storage complexity ignores the storage of history of the evolution of the graph, which may have significant impact on the storage complexity, to the best of my understanding.\n\n**Ans**: Yes, if we took into account the storage cost t...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 2, 4 ]
[ "ynevdce6SBP", "iclr_2021_QSMvGB5j5-", "B3t3bwlJvZE", "zl5HUL2ilG", "b6027vtsA6a", "Jrd2km6-jP_", "3Z1eC_-aXe3", "uBp2EOU5S0", "BMQxp3Y-imn", "2NORIWlbcP", "iclr_2021_QSMvGB5j5-", "iclr_2021_QSMvGB5j5-", "iclr_2021_QSMvGB5j5-" ]
iclr_2021_OZgVHzdKicb
Reinforcement Learning with Bayesian Classifiers: Efficient Skill Learning from Outcome Examples
Exploration in reinforcement learning is, in general, a challenging problem. In this work, we study a more tractable class of reinforcement learning problems defined by data that provides examples of successful outcome states. In this case, the reward function can be obtained automatically by training a classifier to c...
withdrawn-rejected-submissions
Summary: This paper introduces a method to try to learn in environments where a person specifies successful outcomes but there is no environmental reward signal. I'd personally be interested in knowing where people were able to easily provide such successful outcomes instead of, for instance, providing demonstrations...
train
[ "NutPG60XnTO", "0mkkGx_usM", "KdS-k4Y6b4q", "SRTYvjm1djG", "JCLmUikd1sm", "kLB-kDjODxx", "HyBQ9df5ADb", "dqfl6ywvfC", "Dg2FJnkax57", "Lj4LMQbAlP", "VX-yNYjtNor", "ceqd-qYB6uy", "YtgcML0R5lK", "D_nKA6mil4m", "DfhATmoBU4F", "z50atj_uHZo", "7A6YymAe8l1", "EevTAO5IkCT", "4XbxC0PFRF",...
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "...
[ "This paper considers the problem of learning a policy for an MDP with unspecified reward, given user-provided goal states. To this end, a reward model and a policy are jointly learned: the reward model is the conditional normalized maximum likelihood (CNML) learned from a training set consisting of the example goa...
[ 5, 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ 4, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_OZgVHzdKicb", "iclr_2021_OZgVHzdKicb", "iclr_2021_OZgVHzdKicb", "VX-yNYjtNor", "iclr_2021_OZgVHzdKicb", "Dg2FJnkax57", "dqfl6ywvfC", "EevTAO5IkCT", "Lj4LMQbAlP", "D_nKA6mil4m", "YtgcML0R5lK", "DfhATmoBU4F", "g4dLxUPGPGa", "DfhATmoBU4F", "PtvRZQq3ux", "KdS-k4Y6b4q", "0mkkGx...
iclr_2021_I3zV6igAT9
Quantile Regularization : Towards Implicit Calibration of Regression Models
Recent works have shown that most deep learning models are often poorly calibrated, i.e., they may produce overconfident predictions that are wrong, implying that their uncertainty estimates are unreliable. While a number of approaches have been proposed recently to calibrate classification models, relatively l...
withdrawn-rejected-submissions
This paper provides a novel method for calibrating probabilistic regression models without requiring a held-out calibration set. The technical advances are interesting, and the experimental results look promising. The authors made a number of improvements based on the reviews, and the authors have done a good job with ...
train
[ "r-iaTl4DaGw", "pzzPyiDFop", "X_jcfAExumK", "j5B7mMmn0dg", "3Fw-tstgxLO", "V1NskoL4wJn", "HT6t8LCnui", "PVSp9_q1D4o", "4tyZRo2rsZh", "h1hPiRwH1Jw" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The manuscript discusses the side-effects (or drawbacks) of Isotonic regression and proposes an alternative approach for calibration in regression problems. The authors demonstrate the limitiation of Isotonoc regression such as nonsmooth PDFs and truncation of support under some constructions of the calibration da...
[ 6, -1, -1, -1, -1, -1, -1, 5, 6, 6 ]
[ 3, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "iclr_2021_I3zV6igAT9", "iclr_2021_I3zV6igAT9", "j5B7mMmn0dg", "r-iaTl4DaGw", "4tyZRo2rsZh", "h1hPiRwH1Jw", "PVSp9_q1D4o", "iclr_2021_I3zV6igAT9", "iclr_2021_I3zV6igAT9", "iclr_2021_I3zV6igAT9" ]
iclr_2021_ZVqZIA1GA_
Deformable Capsules for Object Detection
Capsule networks promise significant benefits over convolutional networks by storing stronger internal representations, and routing information based on the agreement between intermediate representations' projections. Despite this, their success has been mostly limited to small-scale classification datasets due to thei...
withdrawn-rejected-submissions
This work proposes capsule networks with deformable capsules for tackling object detection. All reviewers agreed that object detection is an important problem that is interesting to the ICLR community. Reviewers also agree that the proposed approach is novel and interesting, and in particular they mention that proposin...
train
[ "VOekTEqKm3", "RTQ1RDUaCmI", "AegWG-SA3no", "iTBqrDpmb7R", "Z5S2beTqQvh", "4sfPE3G60xb", "ewg_yz1Zjx", "-30Rk_cNJ32" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposes to use the capsules to perform object detection on COCO. Capsules, while showing promises, are usually too expensive for tasks beyond MNIST and Cifar. The authors propose three key improvements in DeformCaps, SplitCaps and SE-Routing to improve the efficiency and therefore allow capsules to be a...
[ 6, -1, -1, -1, -1, -1, 6, 4 ]
[ 3, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2021_ZVqZIA1GA_", "VOekTEqKm3", "iTBqrDpmb7R", "ewg_yz1Zjx", "4sfPE3G60xb", "-30Rk_cNJ32", "iclr_2021_ZVqZIA1GA_", "iclr_2021_ZVqZIA1GA_" ]
iclr_2021_uUTx2LOBMV
TextTN: Probabilistic Encoding of Language on Tensor Network
As a novel model that bridges machine learning and quantum theory, tensor network (TN) has recently gained increasing attention and successful applications for processing natural images. However, for natural languages, it is unclear how to design a probabilistic encoding architecture to efficiently and accurately lear...
withdrawn-rejected-submissions
While the submission has promising components, the reviewers were not able to reach a consensus to recommend acceptance. The main concerns is that (1) theorem statements and assumptions are not clearly explained, and (2) the novelty of the approach is not made clear, and (3) there remain concerns on whether the experim...
train
[ "Do9XIx618qZ", "6gLXBql8Cqp", "a5rhnCrToki", "e4kmWhTRd6k", "JNLj7KZMKYz", "mfHAcKbennE", "uaWQsMxRM1o", "DHzVxKHhyrE", "toVi7zvHNxk" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "After reading author replies:\nI would like to thank the authors to respond to my doubts on some of the results. But I decide to keep the review and the score, because Theorem 1 and Claim 1 are still not well explained. In particular, the explanation like \"if the 2nd inequality in Eq. 11 is violated, the network ...
[ 4, -1, -1, -1, -1, -1, 5, 7, 6 ]
[ 4, -1, -1, -1, -1, -1, 2, 5, 2 ]
[ "iclr_2021_uUTx2LOBMV", "toVi7zvHNxk", "DHzVxKHhyrE", "Do9XIx618qZ", "uaWQsMxRM1o", "uaWQsMxRM1o", "iclr_2021_uUTx2LOBMV", "iclr_2021_uUTx2LOBMV", "iclr_2021_uUTx2LOBMV" ]
iclr_2021_EKw6nZ4QkJl
EM-RBR: a reinforced framework for knowledge graph completion from reasoning perspective
Knowledge graph completion aims to predict the new links in given entities among the knowledge graph (KG). Most mainstream embedding methods focus on fact triplets contained in the given KG, however, ignoring the rich background information provided by logic rules driven from knowledge base implicitly. To solve this pr...
withdrawn-rejected-submissions
The paper combines logical reasoning and statistical methods to improve knowledge graph completion. Rules are mined from the KG using AMIE and recursive backward steps are taken, using the mined rules, to determine if a fact is true. The reviewers agree that the paper can be improved by explaining more details of the m...
train
[ "LtqlSMX4WBH", "koRtu295chX", "Neu0y_PTYyo", "wICTiIp7eCS" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The work utilizes relational background knowledge contained in logical rules to conduct multi-relational reasoning for knowledge graph (KG) completion. This is different from the superficial vector triangle linkage used in embedding models. It solves the KG completion task through rule-based reasoning rather than ...
[ 4, 4, 3, 3 ]
[ 4, 4, 3, 4 ]
[ "iclr_2021_EKw6nZ4QkJl", "iclr_2021_EKw6nZ4QkJl", "iclr_2021_EKw6nZ4QkJl", "iclr_2021_EKw6nZ4QkJl" ]
iclr_2021_HPGtPvFNROh
DROPS: Deep Retrieval of Physiological Signals via Attribute-specific Clinical Prototypes
The ongoing digitization of health records within the healthcare industry results in large-scale datasets. Manually extracting clinically-useful insight from such datasets is non-trivial. However, doing so at scale while simultaneously leveraging patient-specific attributes such as sex and age can assist with clinical-...
withdrawn-rejected-submissions
This paper proposes to learn clinical prototypes via supervised contrastive learning to facilitate the reliable retrieval of clinical information and clustering in large datasets. The presentation of the paper could be substantially improved – e.g., the overview and motivation of the paper, the definition of clinical p...
train
[ "WGkvnt0Nc5n", "SVMk14HMXG", "JLPndMZ3wti", "FS4IE7Xzj0j", "q__72SNxmqJ", "syOyRxg288S", "tBKcvoOiUV0", "YrVRR_bKDSb" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "**THIS IS NOT AN NLP PAPER**\nWe would like to urge the reviewer to please re-read the manuscript in its entirety. This is because we believe the reviewer has either 1) not read the manuscript in the first place or 2) had given it a cursory glance. We arrive at this conclusion based on their feedback which claims ...
[ -1, -1, -1, -1, -1, 2, 4, 4 ]
[ -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "syOyRxg288S", "JLPndMZ3wti", "tBKcvoOiUV0", "q__72SNxmqJ", "YrVRR_bKDSb", "iclr_2021_HPGtPvFNROh", "iclr_2021_HPGtPvFNROh", "iclr_2021_HPGtPvFNROh" ]
iclr_2021_uDN8pRAdsoC
Hard Masking for Explaining Graph Neural Networks
Graph Neural Networks (GNNs) are a flexible and powerful family of models that build nodes' representations on irregular graph-structured data. This paper focuses on explaining or interpreting the rationale underlying a given prediction of already trained graph neural networks for the node classification task. Existing...
withdrawn-rejected-submissions
The paper provides a simple approach to explaining GNN predictions for each node by greedily selecting nodes or features in each computation graph so as to increase the fidelity score. The fidelity score is based on comparing the original GNN output to what is obtained with noisy versions of the masked nodes/features. ...
train
[ "KaUa_fxo27w", "foRQOofAM41", "90aVzcJWpHO", "XiVtSac-2Vj", "-_aPcFdaok3", "Vne1nN6insa", "5X9fwVTmHTP", "5AHxOpk3aJ", "nj1248dQWWG" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "The authors address the problem of explaining the behaviour of graph neural networks (which operate on a computation graph based on their k-hop neighbourhood) such as a graph convolutional network (GCN). \n\nThe core idea is to identify, for each node v in the graph, the nodes and features of the graph most releva...
[ 5, 4, -1, -1, -1, -1, -1, -1, 5 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_uDN8pRAdsoC", "iclr_2021_uDN8pRAdsoC", "XiVtSac-2Vj", "-_aPcFdaok3", "KaUa_fxo27w", "foRQOofAM41", "nj1248dQWWG", "nj1248dQWWG", "iclr_2021_uDN8pRAdsoC" ]
iclr_2021_pTZ6EgZtzDU
Meta-Reinforcement Learning With Informed Policy Regularization
Meta-reinforcement learning aims at finding a policy able to generalize to new environments. When facing a new environment, this policy must explore to identify its particular characteristics and then exploit this information for collecting reward. We consider the online adaptation setting where the agent needs to tra...
withdrawn-rejected-submissions
This paper is borderline, as evidenced by all of the reviewers' scores. The pros are: - important and relevant topic - IMPORT is a reasonable, technically sound approach - paper is relatively clear The cons all lie in the experimental evaluation, and whether the experiments sufficiently back the claim that IMPORT ca...
train
[ "1wZ_1msAUp", "N7cQQ3JAylF", "AEUI9bYWjWB", "u0y8EZnGlQc", "oSoenKPAUZu", "hcSFAelbqs6", "lDBvKROuLaD", "1HbXwjL22a", "usM10FPGz0N", "Sp1BV-Y9_fz", "p4jEmtKcNeX", "ddNsGrtnZ2Z", "tsQc82EXoAx", "aJt5WdbuyvN", "MLSPpoZIi9w", "Fs43DgRTVxz", "k0TTsc15fb", "rBcclgsvzIs", "3n7XM1NsT6",...
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", ...
[ "Summary\\\nWhen the task descriptor is available as the privileged information, the authors propose a novel method to learn the policy that can benefit from privileged information. It is reward-driven learning and yet can make use of privileged information for efficient exploration. The advantage of the proposed m...
[ 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2021_pTZ6EgZtzDU", "iclr_2021_pTZ6EgZtzDU", "u0y8EZnGlQc", "usM10FPGz0N", "PRd2urm85hT", "bqsEOPprZ0q", "N7cQQ3JAylF", "1wZ_1msAUp", "Sp1BV-Y9_fz", "Fs43DgRTVxz", "tsQc82EXoAx", "aJt5WdbuyvN", "3n7XM1NsT6", "MLSPpoZIi9w", "iclr_2021_pTZ6EgZtzDU", "N7cQQ3JAylF", "1wZ_1msAUp", ...
iclr_2021_21aG-pxQWa
Counterfactual Fairness through Data Preprocessing
Machine learning has become more important in real-life decision-making but people are concerned about the ethical problems it may bring when used improperly. Recent work brings the discussion of machine learning fairness into the causal framework and elaborates on the concept of Counterfactual Fairness. In this paper,...
withdrawn-rejected-submissions
The paper introduces an approach to counterfactual fairness based on data pre-processing, and compare it to other two counterfactual fairness approaches on the Adult and COMPAS datasets. The reviewers are in agreement that, in its current state, the paper should not be accepted for publication at the venue. Their main...
val
[ "yBU6grqgBfw", "YFngJ6eu1PW", "tPQvAyInDqq", "RLkQXLS5OfL", "EWeKPpTDRyU", "QRquqbP3bQ", "lL19bF7BdrL" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors propose a preprocessing method that eases fair learning called FLAP.\nThey focus on counterfactual fairness introduced by Kusner et al. 2017. Under certain conditions, the preprocessing allow to biased the data such that traditional learning becomes fair.\nThis is a very interesting idea that is novel ...
[ 5, -1, -1, -1, -1, 5, 4 ]
[ 3, -1, -1, -1, -1, 4, 2 ]
[ "iclr_2021_21aG-pxQWa", "QRquqbP3bQ", "RLkQXLS5OfL", "lL19bF7BdrL", "yBU6grqgBfw", "iclr_2021_21aG-pxQWa", "iclr_2021_21aG-pxQWa" ]
iclr_2021_XwATtbX3oCz
Revisiting Point Cloud Classification with a Simple and Effective Baseline
Processing point cloud data is an important component of many real-world systems. As such, a wide variety of point-based approaches have been proposed, reporting steady benchmark improvements over time. We study the key ingredients of this progress and uncover two critical results. First, we find that auxiliary factors...
withdrawn-rejected-submissions
This paper received three recommendations of accept and one recommendation of reject. The paper is mixed. The results presented are both compelling and will have impact on the community. The AC does not agree with R2's views that the paper requires proposal of a novel method for acceptance. At the same time, the A...
train
[ "er2b-LloK_b", "hwCWYDgyPbp", "auip3k7_STE", "uOxvMyiXL5", "XzOtM0SesxI", "BDz7flgxRym", "jNsmmUFtBZ8", "Hf6aEadgrtH", "Bt2aQilrbFx", "-GX_nxwfb4" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper studies the factors that are related to point cloud classification but independent of model architecture. Then a light-weight projection-based model is proposed. Substantial experiments are conducted to show how the auxiliary factors affect the evaluation results and the proposed method can perform at s...
[ 7, 7, 7, -1, -1, -1, -1, -1, -1, 4 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2021_XwATtbX3oCz", "iclr_2021_XwATtbX3oCz", "iclr_2021_XwATtbX3oCz", "hwCWYDgyPbp", "er2b-LloK_b", "jNsmmUFtBZ8", "auip3k7_STE", "Bt2aQilrbFx", "-GX_nxwfb4", "iclr_2021_XwATtbX3oCz" ]
iclr_2021_CPfjKI8Yzx
Robust Imitation via Decision-Time Planning
The goal of imitation learning is to mimic expert behavior from demonstrations, without access to an explicit reward signal. A popular class of approaches infers the (unknown) reward function via inverse reinforcement learning (IRL) followed by maximizing this reward function via reinforcement learning (RL). The polici...
withdrawn-rejected-submissions
The reviewers highly appreciated the replies and the additional experiments. We also had a private discussion on the paper. To summarize: the replies alleviated quite a few concerns, however the consensus was that the paper still does not meet the bar for a highly competitive conference like ICLR. The idea of combinin...
train
[ "3x4dtQZ-RMh", "fIbA8jE_gri", "XHbRTNqt8M_", "4qxLVetvCxo", "ZUZPr-a1uf", "JaUinyWsfda", "yVMAdFw2SbB", "F1Cel71_t2W", "n8IYeFwvoPh", "Gx88r9Viblb", "kT6gbGvGWz-" ]
[ "author", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank all the reviewers for their constructive comments! We would like to offer a summary of our rebuttals to address some common questions as well as highlight our contribution.\n\n* Re: Additional assumptions of our method \nAs mentioned in our individual rebuttals, our method aims to mitigate the perturba...
[ -1, -1, -1, -1, -1, -1, -1, 3, 6, 4, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, 5, 3, 5, 5 ]
[ "iclr_2021_CPfjKI8Yzx", "ZUZPr-a1uf", "n8IYeFwvoPh", "Gx88r9Viblb", "JaUinyWsfda", "F1Cel71_t2W", "kT6gbGvGWz-", "iclr_2021_CPfjKI8Yzx", "iclr_2021_CPfjKI8Yzx", "iclr_2021_CPfjKI8Yzx", "iclr_2021_CPfjKI8Yzx" ]
iclr_2021_1AyPW2Emp6
Tight Second-Order Certificates for Randomized Smoothing
Randomized smoothing is a popular way of providing robustness guarantees against adversarial attacks: randomly-smoothed functions have a universal Lipschitz-like bound, allowing for robustness certificates to be easily computed. In this work, we show that there also exists a universal curvature-like bound for Gaussian ...
withdrawn-rejected-submissions
The authors develop a novel robustness certificate based on randomized smoothing that accounts for second-order smoothness of functions smoothed with Gaussian noise. They develop a variant of Gaussian smoothing based on these insights that improves sample-efficiency of randomized smoothing using gradient information. ...
train
[ "8SJ1s2l_Sb9", "n7Va0QZ0Mgq", "oBpqToxSB9G", "26KFeOGHh4_", "MqwKqVHNkyr", "-Ur7JNVYbI", "HLqDr-jbY2n", "hU5zF68a6hG", "ayGHDsDvMIN" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "While the benefits of the proposed methods are admittedly modest (although it is inaccurate to say there is \"no improvement\", especially for smaller certified radii), we believe that there is still merit to our work, for two reasons:\n\n- The tightness of the certificate proposed in Theorem 1 constitutes an impo...
[ -1, -1, -1, -1, -1, -1, 6, 4, 5 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 5 ]
[ "n7Va0QZ0Mgq", "-Ur7JNVYbI", "iclr_2021_1AyPW2Emp6", "HLqDr-jbY2n", "hU5zF68a6hG", "ayGHDsDvMIN", "iclr_2021_1AyPW2Emp6", "iclr_2021_1AyPW2Emp6", "iclr_2021_1AyPW2Emp6" ]
iclr_2021_F438zjb-XaM
Crowd-sourced Phrase-Based Tokenization for Low-Resourced Neural Machine Translation: The case of Fon Language
Building effective neural machine translation (NMT) models for very low-resourced and morphologically rich African indigenous languages is an open challenge. Besides the issue of finding available resources for them, a lot of work is put into preprocessing and tokenization. Recent studies have shown that standard token...
withdrawn-rejected-submissions
The authors investigate different tokenization methods for the translation between French and Fon (an African low-resource language). Low-resource machine translation is a very important topic and it is great to see work on African languages - we need more of this! Unfortunately, the reviewers unanimously agree that t...
train
[ "TtTVGfJl-6G", "KjKVwahcz-", "XjVDidBdZoL" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Edit after seeing others reviews -- I think I gave this paper a MUCH higher score than the other reviewers, simply because it is very novel with Fon language. I agree with all of your points about what is lacking, but in my mind, the novelty was enough to still give a 7. Now I definitely think that is too high. I ...
[ 5, 3, 4 ]
[ 5, 3, 4 ]
[ "iclr_2021_F438zjb-XaM", "iclr_2021_F438zjb-XaM", "iclr_2021_F438zjb-XaM" ]
iclr_2021_muppfCkU9H1
Multi-hop Attention Graph Neural Network
The self-attention mechanism in graph neural networks (GNNs) has led to state-of-the-art performance on many graph representation learning tasks. Currently, at every layer, attention is computed between connected pairs of nodes and depends solely on the representation of the two nodes. However, such an attention mechanism does not...
withdrawn-rejected-submissions
This paper has been reviewed by four knowledgeable referees. Two of them slightly leaned towards acceptance, whereas the other two suggested rejection. The main issues raised by the reviewers were (1) limited novelty [R1,R2], (2) missing baselines and ablations [R1,R3], (3) limited insights on the spectral analysis [R...
train
[ "NeieF-6Ywo", "CdS_VzRVA3v", "XZh2oE3QW6y", "sRe4AiG3c6l", "XE7XvVdDv6n", "DsdRzfqUlBF", "mNPmMALuOV6", "daqbE5dV7XI" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "==== Summary ====\nThis paper proposes MAGNA, a multi-hop self-attention mechanism for attention based graph neural networks. The proposed method increases the receptive field at each layer, requiring less layers to achieve a large receptive field. Also, with the proposed method the attention coefficient between t...
[ 6, 5, 6, 5, -1, -1, -1, -1 ]
[ 4, 5, 3, 5, -1, -1, -1, -1 ]
[ "iclr_2021_muppfCkU9H1", "iclr_2021_muppfCkU9H1", "iclr_2021_muppfCkU9H1", "iclr_2021_muppfCkU9H1", "NeieF-6Ywo", "CdS_VzRVA3v", "sRe4AiG3c6l", "XZh2oE3QW6y" ]
iclr_2021_GtCq61UFDId
SoCal: Selective Oracle Questioning for Consistency-based Active Learning of Cardiac Signals
The ubiquity and rate of collection of cardiac signals produce large, unlabelled datasets. Active learning (AL) can exploit such datasets by incorporating human annotators (oracles) to improve generalization performance. However, the over-reliance of existing algorithms on oracles continues to burden physicians. To min...
withdrawn-rejected-submissions
The reviewers still have several concerns about the paper after the author feedback stage: the novelty of the paper is not sufficient, and the experimental results are not very encouraging. We encourage the authors to fix these issues in the next revision.
train
[ "KO32TrPRb9", "dfAG5aWOgOZ", "9_giOg6qbGv", "YifpUt5JjbM", "XlJJCfV820t", "KTupuLAv_Ao", "Ya3uaNiyn3", "_bOGNPR19_T", "Ep2qhFB9C1q" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposes an active learning framework called SoCal that is consistency-based and can decide between whether to make use of the oracle to provide a label or to make use of a pseudo-label generated by the algorithm itself instead. The proposed method hopes to address resource-constrained active learning sc...
[ 5, 5, -1, -1, -1, -1, -1, 4, 4 ]
[ 3, 4, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2021_GtCq61UFDId", "iclr_2021_GtCq61UFDId", "_bOGNPR19_T", "KTupuLAv_Ao", "Ep2qhFB9C1q", "dfAG5aWOgOZ", "KO32TrPRb9", "iclr_2021_GtCq61UFDId", "iclr_2021_GtCq61UFDId" ]
iclr_2021_nsZGadY22N4
Weighted Bellman Backups for Improved Signal-to-Noise in Q-Updates
Off-policy deep reinforcement learning (RL) has been successful in a range of challenging domains. However, standard off-policy RL algorithms can suffer from low signal and even instability in Q-learning because target values are derived from current Q-estimates, which are often noisy. To mitigate the issue, we propose...
withdrawn-rejected-submissions
The paper is acknowledged by all the reviewers as making a novel contribution -- the proposal to reweight state-action pairs depending on the variation in their Q-value estimates during learning. However, despite its extensive reporting of numerical experiments, its arguments in favor of the proposed approach are found...
train
[ "djRsDQUh2Yh", "kUmOpgOCP35", "lkvEDvsjCOc", "qx9rlmugn6J", "hQTrtB9nvZ", "nNwr4kfhOWH", "ECrlxhKdzWD", "ZJVqEXGknce", "h1MpxHTBccb", "tKcnvHG1AWI", "NPokX-84jPT", "9Sx5k_0-Ao-", "sqfC2ob0-bG", "Cnjf0GYLbiw", "_26Mc5QHPg" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes to use uncertainty estimates from an ensemble of action-values, to provide a weighting on the updates in Q-learning. The main idea is to use the sigmoid of the negative of this uncertainty in the next state, to produce a weighting between 0.5 and 1 to downweight updates with high uncertainty ta...
[ 3, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 8 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_nsZGadY22N4", "iclr_2021_nsZGadY22N4", "h1MpxHTBccb", "ECrlxhKdzWD", "nNwr4kfhOWH", "NPokX-84jPT", "sqfC2ob0-bG", "9Sx5k_0-Ao-", "tKcnvHG1AWI", "kUmOpgOCP35", "djRsDQUh2Yh", "_26Mc5QHPg", "Cnjf0GYLbiw", "iclr_2021_nsZGadY22N4", "iclr_2021_nsZGadY22N4" ]
iclr_2021_083vV3utxpC
Deep Partial Updating
Emerging edge intelligence applications require the server to continuously retrain and update deep neural networks deployed on remote edge nodes to leverage newly collected data samples. Unfortunately, it may be impossible in practice to continuously send fully updated weights to these edge nodes due to the highly cons...
withdrawn-rejected-submissions
The paper proposes an approach to selectively update the weights of neural networks in federated learning. This is an interesting and important problem. As several reviewers pointed out, this is highly related to pruning although with a different objective. It is an interesting paper but is a marginal case in the end ...
train
[ "hiBqWalxLy", "n2PiTrWYmb9", "t7gtu0tlTVd", "5g_-qQJYPcK", "RGkSN62HvEP", "R2v3ebVoWam", "1xCbA0hE8JA", "ZdFjGA3-1Vn", "zTCxJjY7kf7", "RaICKq1iSD3" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "Summary:\nThe paper proposes a weight-wise partial updating paradigm which adaptively selects a subset of weights to update at each training iteration while achieving comparable performance to full training. Experimental results demonstrate the effectiveness of the proposed partial updating method.\n\nStrengths:\n...
[ 5, 6, 6, -1, -1, -1, -1, -1, -1, 6 ]
[ 3, 3, 3, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2021_083vV3utxpC", "iclr_2021_083vV3utxpC", "iclr_2021_083vV3utxpC", "R2v3ebVoWam", "n2PiTrWYmb9", "iclr_2021_083vV3utxpC", "hiBqWalxLy", "RaICKq1iSD3", "t7gtu0tlTVd", "iclr_2021_083vV3utxpC" ]
iclr_2021_8QAXsAOSBjE
Reusing Preprocessing Data as Auxiliary Supervision in Conversational Analysis
Conversational analysis systems are trained using noisy human labels and often require heavy preprocessing during multi-modal feature extraction. Using noisy labels in single-task learning increases the risk of over-fitting. However, auxiliary tasks could improve the performance of the primary task learning. This appro...
withdrawn-rejected-submissions
The initial reviews for this paper were very borderline. The authors provided detailed responses as well as a few additional results and observations. The authors' responses answered the reviewers' questions and addressed their main comments (including in the discussion of related works as well as with more in-depth an...
train
[ "cCS6D8vToc6", "QUM3nV63U3", "jEyFWlmZ7oi", "Nlo6XRkW6p", "RvhmFINrfst", "ksiB4pssFyG", "EX1sY6EYN8K", "7HpzSL30Wfs", "vT7R7vTRDsn" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper addresses multi-task learning for multimodal emotion recognition on two existing datasets (IEMOCAP and SEMAINE). \nStrengths:\n*The issues addressed in this paper are very relevant. The use of a multi-task learning framework to tackle the lack of labeled data and the noisy labels for Affective computing ...
[ 5, -1, -1, -1, -1, -1, 5, 6, 6 ]
[ 5, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2021_8QAXsAOSBjE", "EX1sY6EYN8K", "cCS6D8vToc6", "cCS6D8vToc6", "7HpzSL30Wfs", "vT7R7vTRDsn", "iclr_2021_8QAXsAOSBjE", "iclr_2021_8QAXsAOSBjE", "iclr_2021_8QAXsAOSBjE" ]
iclr_2021_whAxkamuuCU
Symbol-Shift Equivariant Neural Networks
Neural networks have been shown to have poor compositionality abilities: while they can produce sophisticated output given sufficient data, they perform patchy generalization and fail to generalize to new symbols (e.g. switching a name in a sentence by a less frequent one or one not seen yet). In this paper, we define ...
withdrawn-rejected-submissions
This paper proposed a new type of models that are invariant to entities by exploring the symbolic property of entities. This problem is important in language modeling since it gives intrinsically more proper representation of sentences, which can better generalize to new entities. However I still suggest to reject th...
train
[ "XNt1--9TU1M", "jzebDRRnSL2", "Q2U0TQS3-ox", "zEtb74vbLS", "4JhpCp8C6Wg", "ie9TFHqrZC", "dsmZWVGCyAS", "l1HAr7rkbW6", "rtFrRWxONlX", "5VZxZOSYEgQ", "u0OEM78INz", "jE56DZiUXsL", "cy9owemQt8U", "5Y8sB-Z26HA", "vR4_t_lmP5S", "nJ1dB0KSpcK" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We have uploaded a new version of the manuscript with some of your recommendations to enhance the clarity of Section 4", "Thank you for the useful suggestions.\n\nTo answer to your specific points:\n\n* we would use your suggestion and refer more to Figure 3 to give the intuition of the projection matrix $B_\\va...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 3 ]
[ "4JhpCp8C6Wg", "4JhpCp8C6Wg", "zEtb74vbLS", "jE56DZiUXsL", "5VZxZOSYEgQ", "dsmZWVGCyAS", "l1HAr7rkbW6", "rtFrRWxONlX", "u0OEM78INz", "5Y8sB-Z26HA", "vR4_t_lmP5S", "nJ1dB0KSpcK", "iclr_2021_whAxkamuuCU", "iclr_2021_whAxkamuuCU", "iclr_2021_whAxkamuuCU", "iclr_2021_whAxkamuuCU" ]
iclr_2021_1dm_j4ciZp
How much progress have we made in neural network training? A New Evaluation Protocol for Benchmarking Optimizers
Many optimizers have been proposed for training deep neural networks, and they often have multiple hyperparameters, which make it tricky to benchmark their performance. In this work, we propose a new benchmarking protocol to evaluate both end-to-end efficiency (training a model from scratch without knowing the best hyp...
withdrawn-rejected-submissions
This paper was referred to the ICLR 2021 Ethics Review Committee based on concerns about a potential violation of the ICLR 2021 Code of Ethics (https://iclr.cc/public/CodeOfEthics) raised by reviewers. The paper was carefully reviewed by two committee members, who provided a binding decision. The decision is "Significan...
train
[ "-Vrwhd-7EB", "Mfjm89qPVEO", "dww6WJJqB7Y", "orOFLqQTyQ", "XrGAWLrZ83B", "Jsaz81wrp3i", "EKV_W9HXpp", "EHT58aUgLfl", "opRFDpVTE4R", "p9hLSej8UG5", "r-WhStwy63P", "xbaJ3sbe5_W" ]
[ "public", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The initial reviewers for this paper flagged a key issue in ethics:\n1.\tThe paper presents a study with human experiments. However, there was no mention of an ethical review board being involved in the process. Additionally, there is a significant lack of information to who was involved with the study or how it w...
[ -1, 6, -1, -1, 7, -1, -1, -1, -1, -1, 6, 5 ]
[ -1, 4, -1, -1, 4, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2021_1dm_j4ciZp", "iclr_2021_1dm_j4ciZp", "orOFLqQTyQ", "Jsaz81wrp3i", "iclr_2021_1dm_j4ciZp", "Mfjm89qPVEO", "EHT58aUgLfl", "XrGAWLrZ83B", "xbaJ3sbe5_W", "r-WhStwy63P", "iclr_2021_1dm_j4ciZp", "iclr_2021_1dm_j4ciZp" ]
iclr_2021_2HLTMwxOxwe
Learn what you can't learn: Regularized Ensembles for Transductive out-of-distribution detection
Machine learning models are often used in practice once they achieve good generalization results on in-distribution (ID) holdout data. To predict test sets in the wild, they should detect samples they cannot predict well. We show that current out-of-distribution (OOD) detection algorithms for neural networks produce un...
withdrawn-rejected-submissions
Although the rebuttal helped clarify the reviewers' confusion on notation and the motivation of the problem setup, all reviewers are still unable to champion the paper: - the technical concerns raised by Reviewer 4 need to be addressed - the paper would have been stronger if baselines such as one-clas...
train
[ "b9l_7dojYHf", "J8Zp-VFfnxE", "dYcMzt_8JWv", "ZzE0MZcotc-", "k1pn5FhwCGf", "Yq73LuEnqs9", "DJF1GEPVWRJ", "Q-THa2STr7W", "qhhdxxGZlKJ", "flQFEEAQjUJ", "vGWS3143ls1", "EgdUVjmSXo0" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a new approach to detect out-of-distribution (OOD) examples in the transductive setting. The idea is to train an ensemble of models that fit the in-distribution (ID) data well, but disagree with each other on OOD examples.\n\nPros\n+ Extensive experiments are conducted to compare the proposed a...
[ 4, 6, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8 ]
[ 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3 ]
[ "iclr_2021_2HLTMwxOxwe", "iclr_2021_2HLTMwxOxwe", "EgdUVjmSXo0", "vGWS3143ls1", "b9l_7dojYHf", "DJF1GEPVWRJ", "Q-THa2STr7W", "J8Zp-VFfnxE", "iclr_2021_2HLTMwxOxwe", "qhhdxxGZlKJ", "iclr_2021_2HLTMwxOxwe", "iclr_2021_2HLTMwxOxwe" ]
iclr_2021_wG5XIGi6nrt
Learning Private Representations with Focal Entropy
How can we learn a representation with good predictive power while preserving user privacy? We present an adversarial representation learning method to sanitize sensitive content from the representation in an adversarial fashion. Specifically, we propose focal entropy - a variant of entropy embedded in an a...
withdrawn-rejected-submissions
This paper focuses on a notion of privacy in learning representations. One of the primary concerns of the reviewers was clarity of the writing and results. Numerous concerns are mentioned in the reviews, and also more engagement with the fairness literature was desired. One reviewer felt that some of the claims in th...
train
[ "bL5Z_W7XfEg", "RpZr1ZmJ0eb", "S8Y8TsrgHc9", "qjODClYjqso", "N-rVES4i6o", "2-98eITShT", "2fLzVfVuS7h", "j5yB8TANtJq", "k26Ujfge5lR", "J94qMQPxRSW" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the valuable and constructive feedback. \n\n**Problem setup: I see that this isn't the same as fair RL but I struggle to figure out exactly what it is. Perhaps some examples in the text would help - for instance by the time that I see that ID is the sensitive attribute in CelebA I have a ...
[ -1, -1, -1, -1, -1, -1, 5, 4, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 2, 3 ]
[ "RpZr1ZmJ0eb", "S8Y8TsrgHc9", "j5yB8TANtJq", "k26Ujfge5lR", "2fLzVfVuS7h", "J94qMQPxRSW", "iclr_2021_wG5XIGi6nrt", "iclr_2021_wG5XIGi6nrt", "iclr_2021_wG5XIGi6nrt", "iclr_2021_wG5XIGi6nrt" ]
iclr_2021_XZzriKGEj0_
Learning What Not to Model: Gaussian Process Regression with Negative Constraints
Gaussian Process (GP) regression fits a curve on a set of datapairs, with each pair consisting of an input point 'x' and its corresponding target regression value 'y(x)' (a positive datapair). But, what if for an input point 'x¯', we want to constrain the GP to avoid a target regression value 'y¯(x¯)' (a negative datap...
withdrawn-rejected-submissions
This paper is very pleasant to read. The reviewers also like the key idea discussed and find the targeted application interesting and practical. However, after reading the indeed interesting motivation, all four reviewers expected to see more from the evaluation section, including more challenging and realistic set-ups...
train
[ "S1zuJcxOC_M", "zb8mYTUEKvr", "FqQrALQKeJG", "bhPEVrIswoS", "XV_hTFx12cU", "LNPXyhYVtc", "EOwyXN_HUY7", "jSWh4WAxmQZ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "Summary:\nThis paper incorporates information of obstacles to avoid (e.g robot navigation trajectory in the room where the robot has to avoid items such as furniture) into Gaussian process regression fit. They call the obstacles, negative datapairs and the rest of data, positive datapairs. The aim is to have a GP ...
[ 6, 5, 3, -1, -1, -1, -1, 3 ]
[ 2, 4, 4, -1, -1, -1, -1, 4 ]
[ "iclr_2021_XZzriKGEj0_", "iclr_2021_XZzriKGEj0_", "iclr_2021_XZzriKGEj0_", "FqQrALQKeJG", "S1zuJcxOC_M", "zb8mYTUEKvr", "jSWh4WAxmQZ", "iclr_2021_XZzriKGEj0_" ]
iclr_2021_hecuSLbL_vC
Generalisation Guarantees For Continual Learning With Orthogonal Gradient Descent
In Continual Learning settings, deep neural networks are prone to Catastrophic Forgetting. Orthogonal Gradient Descent (Farajtabar et al., 2019) was proposed to tackle the challenge. However, no theoretical guarantees have been proven yet. We present a theoretical framework to study Continual Learning algorithms in the...
withdrawn-rejected-submissions
The reviewers were excited by the paper's theoretical contribution to continual learning, since that aspect of continual learning is underdeveloped. However, all reviewers (including the most positive reviewer during discussions) expressed that the paper would benefit from revisions to improve the clarity and the thor...
train
[ "OfZApqJSdCq", "AGT5YYnjugw", "zAawUOoBaAj", "SaKwuqpMnq", "9VOAPhPl9H7", "kE4yJjt5h28", "gUZ45jsK7A", "KP68nw6ma4d", "x0qkc0FQgL0" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks again for your insightful suggestions and comments.\n\nWe have uploaded a new revision, which comprises all the results.\n\nAdditionally, for clarity, we wanted to present a brief overview of the changes we have applied to the manuscript overall :\n- **Experiments** : \n - Added the benchmark against the ...
[ -1, -1, -1, -1, -1, -1, 5, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "iclr_2021_hecuSLbL_vC", "SaKwuqpMnq", "9VOAPhPl9H7", "gUZ45jsK7A", "x0qkc0FQgL0", "KP68nw6ma4d", "iclr_2021_hecuSLbL_vC", "iclr_2021_hecuSLbL_vC", "iclr_2021_hecuSLbL_vC" ]
iclr_2021_uV7hcsjqM-
Contrastive Code Representation Learning
Machine-aided programming tools such as automated type predictors and autocomplete are increasingly learning-based. However, current approaches predominantly rely on supervised learning with task-specific datasets. We propose Contrastive Code Representation Learning (ContraCode), a self-supervised algorithm for learnin...
withdrawn-rejected-submissions
This is a nice paper using contrastive learning for code representation. The idea is to generate variations on unlabeled source code (using domain knowledge) by creating equivalent versions of the code. Improvements over baselines on multiple tasks are shown. While some of the reviewers liked the (and R4 should have res...
val
[ "BGcywtppcDx", "-d05u_5tyTH", "4JkAmREaPq", "d5ew4M1Z41", "ReLyrLXY4aU", "Wx8IarvReZ", "3Q-FGhzh3M5", "ko5e3tRahs", "wqwdSyuW6L", "1mmPhmYIMbF", "6xOX3GrD5ZZ", "l7861YlgSu7", "5w4Riya4Skg" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "Thank you for increasing your score, and again for your feedback. The discussion has led to some valuable experiments that improved our paper.", "We really appreciate your suggestions which helped strengthen our paper further, and thank you for raising your score. We revised the text based on your latest feedbac...
[ -1, -1, -1, -1, 6, -1, -1, -1, 6, -1, -1, -1, 4 ]
[ -1, -1, -1, -1, 3, -1, -1, -1, 4, -1, -1, -1, 4 ]
[ "Wx8IarvReZ", "ko5e3tRahs", "iclr_2021_uV7hcsjqM-", "5w4Riya4Skg", "iclr_2021_uV7hcsjqM-", "3Q-FGhzh3M5", "ReLyrLXY4aU", "l7861YlgSu7", "iclr_2021_uV7hcsjqM-", "5w4Riya4Skg", "ReLyrLXY4aU", "wqwdSyuW6L", "iclr_2021_uV7hcsjqM-" ]
iclr_2021_E4PK0rg2eP
Parameter-Efficient Transfer Learning with Diff Pruning
While task-specific finetuning of deep networks pretrained with self-supervision has led to significant empirical advances in NLP, their large size makes the standard finetuning approach difficult to apply to multi-task, memory-constrained settings, as storing the full model parameters for each task becomes prohibitivel...
withdrawn-rejected-submissions
This paper studies a problem setup of parameter-efficient transfer learning for large-scale deep models. The approach consists of learning a diff vector with a sparsity constraint and then pruning the vector using magnitude pruning. A group penalty is also introduced to enhance structured sparsity. The main motivation ...
val
[ "CEyTi-JugeU", "Ad8rNnYDChJ", "dU6juEdvzA", "64f9i4-1O8q", "JuCXyEXaV6_", "1lfGSdczoN1", "Npvfhl0JuT4", "li-u1Vjqkgi", "y-n1aaUhRhD", "h7S5ugPJwKK", "keH6gGlHB9l" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for responding! \n\n- **\"There are many recently proposed methods, including TinyBERT, DynaBERT, etc. Those methods achieve comparable accuracy as BERT using only 15-25% parameters.\"** This is not true? There is nontrivial drop in accuracy using these methods (see Table 1 of https://arxiv.org/pdf/1909.103...
[ -1, -1, -1, -1, -1, -1, -1, 5, 6, 4, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "Ad8rNnYDChJ", "Npvfhl0JuT4", "y-n1aaUhRhD", "li-u1Vjqkgi", "iclr_2021_E4PK0rg2eP", "keH6gGlHB9l", "h7S5ugPJwKK", "iclr_2021_E4PK0rg2eP", "iclr_2021_E4PK0rg2eP", "iclr_2021_E4PK0rg2eP", "iclr_2021_E4PK0rg2eP" ]
iclr_2021_y13JLBiNMsf
Learning Monotonic Alignments with Source-Aware GMM Attention
Transformers with soft attention have been widely adopted in various sequence-to-sequence (Seq2Seq) tasks. Whereas soft attention is effective for learning semantic similarities between queries and keys based on their contents, it does not explicitly model the order of elements in sequences which is crucial for monoton...
withdrawn-rejected-submissions
The paper proposed a useful incremental extension to the monotonic GMM attention by incorporating source content. It has shown comparable performance for online and long-form speech recognition, but falls behind on the machine translation task. For online ASR, it would be more convincing to include latency comparisons ...
train
[ "T_mB6hAshhy", "yuNvazPMq02", "GUpfkVvSprG", "1IqtyRAsNji", "w1AUpymr16E", "r4rg3IS5dUb", "gGam_LYy_rH", "tedUdenMktj", "J6VnSfEbZ60", "bu1M1cEdNr", "aVr1NevVjQ" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary:\nThis paper introduces “source-aware” GMM attention and applies it to offline, online, long-form ASR. The value of source-aware GMM attention appears to be its ability to “ignore” long segments of silence in the input audio, which could potentially be more difficult to do using other attention mechanisms...
[ 5, -1, -1, -1, -1, -1, -1, -1, 5, 6, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4 ]
[ "iclr_2021_y13JLBiNMsf", "J6VnSfEbZ60", "yuNvazPMq02", "iclr_2021_y13JLBiNMsf", "bu1M1cEdNr", "T_mB6hAshhy", "tedUdenMktj", "aVr1NevVjQ", "iclr_2021_y13JLBiNMsf", "iclr_2021_y13JLBiNMsf", "iclr_2021_y13JLBiNMsf" ]
iclr_2021_-757TnNDwIn
Generative Adversarial Neural Architecture Search with Importance Sampling
Despite the empirical success of neural architecture search (NAS) in deep learning applications, the optimality, reproducibility and cost of NAS schemes remain hard to assess. The variation in search spaces adopted has further hindered fair comparison between search strategies. In this paper, we focus on search strat...
withdrawn-rejected-submissions
This paper proposes a method for neural architecture search (NAS) based on adversarial methods. It uses a discriminator trained to distinguish between random vs. good architectures, letting the discriminator's scores serve as a reward signal for an autoregressive generator. I agree with AR1: this is a nice and clever i...
train
[ "_M14xJ2uVkh", "9Z-a9SdW3XV", "682dk9Acqt", "6M4oBH86A42", "gKdA2PnecU", "-Z7YLjPeA5-", "z1b5mdTR6jk", "DETtyiCeDMf", "kmsMIVnxtVu", "wbFeg-bKJI3", "chtZPgd1Dxl", "a7T_RMO0S7y" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper proposes a Neural Architecture Search algorithm (GA-NAS) based on adversarial learning. The generator constructs architectures auto-regressively, which receives feedback from a GNN discriminator. Reinforcement learning (PPO) is used for training, to solve non-differentiability. GA-NAS’s effectiveness is...
[ 5, 5, 6, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ 2, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_-757TnNDwIn", "iclr_2021_-757TnNDwIn", "iclr_2021_-757TnNDwIn", "iclr_2021_-757TnNDwIn", "_M14xJ2uVkh", "682dk9Acqt", "9Z-a9SdW3XV", "9Z-a9SdW3XV", "_M14xJ2uVkh", "a7T_RMO0S7y", "a7T_RMO0S7y", "iclr_2021_-757TnNDwIn" ]
iclr_2021_EdXhmWvvQV
Center-wise Local Image Mixture For Contrastive Representation Learning
Unsupervised representation learning has recently experienced remarkable progress, especially with the achievements of contrastive learning, which regards each image, as well as its augmentations, as a separate class, while not considering the semantic similarity among images. This paper proposes a new kind of ...
withdrawn-rejected-submissions
There are two main contributions in this paper. First, the use of NN from the same cluster as “views” of the data as understood in classical contrastive learning. Second, the use of additional augmentation techniques, namely cutMix and multi-resolution. The reviewers noted that the paper is written well and easy to und...
train
[ "PmHb63JS32u", "wJzF6lYOSa5", "TE_irDdWAL_", "Ydr4c0_xKUZ", "OHaoi8I6W7N", "YB0gFvPv8wi", "qEUTj2iNzm", "SXCML2OBqe-", "CJ6Yk-MIMXG", "bbSNoZKpyRZ", "B_JMJWpYaCW", "_XgKz5eL0Mz" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Center-wise Local Image Mixture For Contrastive Representation Learning\n\nThe paper introduces a new contrastive learning method for unsupervised representation learning. The main idea is to consider the semantic similarity between different images and incorporate it in the learning procedure, in contrast to the ...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2021_EdXhmWvvQV", "TE_irDdWAL_", "YB0gFvPv8wi", "iclr_2021_EdXhmWvvQV", "_XgKz5eL0Mz", "PmHb63JS32u", "bbSNoZKpyRZ", "B_JMJWpYaCW", "PmHb63JS32u", "iclr_2021_EdXhmWvvQV", "iclr_2021_EdXhmWvvQV", "iclr_2021_EdXhmWvvQV" ]
iclr_2021_78SlGFxtlM
Robust Meta-learning with Noise via Eigen-Reptile
Recent years have seen a surge of interest in meta-learning techniques for tackling the few-shot learning (FSL) problem. However, the meta-learner's initial model is prone to meta-overfitting, as there are only a few available samples with sampling noise. Besides, when handling the data sampled with label noise for FSL, me...
withdrawn-rejected-submissions
This paper was evaluated by four reviewers. After rebuttal, several concerns remained, e.g. Rev. 1 is interested in more thorough comparisons even if the model is claimed to be backbone-agnostic. Rev. 2 is concerned about re-print of some theories and authors' response that 'contribution is not in theoretical innovatio...
train
[ "j80tTQaK_hD", "Yd9kBINboEn", "b6mEzMZ0JA0", "yewabm9mG4", "JUsUwAKTmxx", "rq_yIIePu0V", "-S4UVvdTzjM", "_e6WrGP923M", "0MYbVb83UI", "tTykPS31KcZ", "yiJZf2LUvFl" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper presents a reptile-based meta-learning algorithm called Eigen-Reptile for few shot learning with sampling and label nosing. When Eigen-Reptile updates meta-parameters, it leverages not only the gradient direction of different task, but also the direction of eigenvector related to parameters matrix. Besi...
[ 4, 5, 5, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 3, 5, 4, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_78SlGFxtlM", "iclr_2021_78SlGFxtlM", "iclr_2021_78SlGFxtlM", "rq_yIIePu0V", "yewabm9mG4", "Yd9kBINboEn", "j80tTQaK_hD", "0MYbVb83UI", "b6mEzMZ0JA0", "yiJZf2LUvFl", "iclr_2021_78SlGFxtlM" ]
iclr_2021_PXDdWQDBsCG
Shape Defense
Humans rely heavily on shape information to recognize objects. Conversely, convolutional neural networks (CNNs) are biased more towards texture. This fact is perhaps the main reason why CNNs are susceptible to adversarial examples. Here, we explore how shape bias can be incorporated into CNNs to impro...
withdrawn-rejected-submissions
The paper proposed a method for adversarial robustness by considering information from the edge map of the images. Two reviewers point out the similarities of the paper with previous work ([1]) and it is unclear whether the benefits come from binarization of the input or from shape information. As such, the paper is no...
train
[ "T1vrN0AoQOW", "Xj8s2a3nCm", "HCXSpQFkVEd", "2X07R3MPlq", "c8QB7eZX4oC", "Y3-RBFuDynj", "5fHTUfb32qt", "eLSh2LqUVAz", "PIQ5WI7hioD", "CAtwMihcqEW", "hMwu8stMLJ", "Vi5S3ny7HIA", "OVmWUNYlpv6", "xtjTxOcdG9O", "b3NjZRrlc6g", "TnzVzu12vPD", "SIj-EPgtB8d", "khXoYOVIcZn", "8X3oD2nujZd"...
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "public", "author", "author", ...
[ "Summary: This paper aims to improve adversarial robustness considering the information about the object shape details with the means of edge maps. Two different strategies are proposed to increase model robustness using the edge maps: i) conduct the adversarial training on the input images, which are concatenated ...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_PXDdWQDBsCG", "TnzVzu12vPD", "2X07R3MPlq", "PIQ5WI7hioD", "CAtwMihcqEW", "eLSh2LqUVAz", "PIQ5WI7hioD", "xtjTxOcdG9O", "RvnyFY9bjY", "iclr_2021_PXDdWQDBsCG", "Vi5S3ny7HIA", "b3NjZRrlc6g", "iclr_2021_PXDdWQDBsCG", "khXoYOVIcZn", "He6wlGE-QmO", "8X3oD2nujZd", "VXlbHrRNhqX", ...
iclr_2021_2KSsaPGemn2
Non-Linear Rewards For Successor Features
Reinforcement Learning algorithms have reached new heights in performance, often overtaking humans on several challenging tasks such as Atari and Go. However, the resulting models learn fragile policies that are unable to transfer between tasks without full retraining. Successor features aim to improve this situation b...
withdrawn-rejected-submissions
This paper extends the idea of successor representations. Typically the reward is computed linearly on top of states in this setting, but the authors relax it to have a quadratic form. ${\bf Pros}$: 1. A novel formulation of the successor representation where the reward does not follow the linearity assumption 2. The i...
val
[ "GfPCmr2RomY", "XZ52vrfNRZz", "MVGTWLYCoyq", "wn5PQ57q73u", "1vHRItlqZRC", "ciOzKs_ro-m", "w86Bxd1WymU", "KZCZLzhomwf", "1u8m0QocCb-", "lAeMTHCp7gT" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes to extend successful features by learning the second moments of cumulants in addition to the cumulants. They demonstrate that the resulting method performs better on 2D and 3D goal-reaching tasks (without obstacles) when the reward is the squared distance.\n\nThe paper is clearly written and ex...
[ 4, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ 4, -1, -1, -1, -1, -1, -1, 4, 5, 4 ]
[ "iclr_2021_2KSsaPGemn2", "MVGTWLYCoyq", "w86Bxd1WymU", "lAeMTHCp7gT", "KZCZLzhomwf", "GfPCmr2RomY", "1u8m0QocCb-", "iclr_2021_2KSsaPGemn2", "iclr_2021_2KSsaPGemn2", "iclr_2021_2KSsaPGemn2" ]
iclr_2021_RGeQOjc58d
Improved Gradient based Adversarial Attacks for Quantized Networks
Neural network quantization has become increasingly popular due to efficient memory consumption and faster computation resulting from bitwise operations on the quantized networks. Even though they exhibit excellent generalization capabilities, their robustness properties are not well-understood. In this work, we system...
withdrawn-rejected-submissions
The paper studies the robustness of binary neural networks (BNNs), showing how quantized models suffer from gradient vanishing. To solve this issue, the authors propose temperature scaling approaches that can overcome this masking, achieving near-perfect success in crafting adversarial inputs for these models. ...
train
[ "Z9NWC7k0q-4", "KDQQdLCz04e", "11aKGkVvBSm", "jO_rvLbIAUI", "m77KFJ_LasE", "FMqWh5rk-QM", "FjPj23f3_gG", "oVCnS-5yuf", "s15vLOKMHwl", "8vizw5ANaRJ", "dki_I4rJa6m", "TvJKkfLVEit", "Ol7rHBbgM9d", "s442b6v_ki", "RXMZIKm0_eP" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper identifies the gradient vanishing issue in the robustness of binary quantized networks. Therefore, it proposes to use temperature scaling approach in the attack generation. It has two methods for the temperature scale: (1) singular values of the input-output Jacobian and (2) maximizing the norm of the He...
[ 5, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7 ]
[ 4, 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2021_RGeQOjc58d", "iclr_2021_RGeQOjc58d", "iclr_2021_RGeQOjc58d", "s442b6v_ki", "KDQQdLCz04e", "RXMZIKm0_eP", "Z9NWC7k0q-4", "8vizw5ANaRJ", "11aKGkVvBSm", "dki_I4rJa6m", "s15vLOKMHwl", "m77KFJ_LasE", "FjPj23f3_gG", "iclr_2021_RGeQOjc58d", "iclr_2021_RGeQOjc58d" ]
iclr_2021_ZAfeFYKUek5
Optimization Variance: Exploring Generalization Properties of DNNs
Unlike the conventional wisdom in statistical learning theory, the test error of a deep neural network (DNN) often demonstrates double descent: as the model complexity increases, it first follows a classical U-shaped curve and then shows a second descent. Through bias-variance decomposition, recent studies revealed tha...
withdrawn-rejected-submissions
Originality: The paper can be developed into a very nice contribution, if the value of the newly introduced optimization variance is evaluated more thoroughly (e.g., through simple theory, or through more rigorous experiments). Main pros: - One of the early works studying epoch-wise double descent - Optimization varia...
train
[ "s5d3t9g4Tlo", "Dlq5Cs2W_4X", "XjCFCr2gO2f", "6tFmejk1dyw", "YkGL7W9LUk", "MBPD8qi-Kzn", "FuqgIAAgrpI", "qkM8yezUOBv", "YVG6M93BfVC", "Y1EZ1yjnFpY", "K8YsvlTO0JX", "yqkWLd9hvM", "cM4mcYex8zs", "tDZcuIAe-7Y", "ZrNtWzS9QEq" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper under review studies the epoch wise double descent phenomena empirically. The epoch wise double descent phenomena is the observation that the risk of a large neural network trained with SGD first decreases, then increases, and finally decreases again as a function of the epochs or SGD steps. In addition,...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 5, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 3, 4 ]
[ "iclr_2021_ZAfeFYKUek5", "6tFmejk1dyw", "ZrNtWzS9QEq", "ZrNtWzS9QEq", "FuqgIAAgrpI", "s5d3t9g4Tlo", "s5d3t9g4Tlo", "yqkWLd9hvM", "cM4mcYex8zs", "tDZcuIAe-7Y", "tDZcuIAe-7Y", "iclr_2021_ZAfeFYKUek5", "iclr_2021_ZAfeFYKUek5", "iclr_2021_ZAfeFYKUek5", "iclr_2021_ZAfeFYKUek5" ]
iclr_2021_IW-EI6BCxy
Variable-Shot Adaptation for Online Meta-Learning
Few-shot meta-learning methods consider the problem of learning new tasks from a small, fixed number of examples, by meta-learning across static data from a set of previous tasks. However, in many real world settings, it is more natural to view the problem as one of minimizing the total amount of supervision --- both t...
withdrawn-rejected-submissions
For meta-learning with variable shot, this paper proposes a method for adapting the learning rate as a function of the number of training examples. The functional form is theoretically derived, and the method is simple and effective. However, meta-learning methods that adapt learning rates have been proposed, and the n...
test
[ "WHKbINa76nH", "dI538DO9LN", "95hZ8pS1DF1", "Ir8TplhgNaP", "6OmGy1_Fduq", "Usvs4DdXeLa", "UA6jwJaVggg", "DRC9dXo6T8R", "pP_1imeVwaI", "AORsfhHOIit", "o2Oy7D9vkEx", "4ZVYZfoyb9C", "74C1_3C364_", "JmJDBeG4vX", "UQk56U_-k8V", "MAew0jyMGRQ", "GJlNakxs9Ur", "hCF9SQ_Eh9i", "-oqhwOZUhm4...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your fast response! We would like to make further clarifications regarding your concerns as follows.\n\n**Point 1: “From table 1, 5 and mutually exclusive MiniImageNet, the baseline MAML works pretty well under variable-shot setting. Doesn't that suggest the learning rate scaling method is a minor th...
[ -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "dI538DO9LN", "95hZ8pS1DF1", "Ir8TplhgNaP", "GJlNakxs9Ur", "DRC9dXo6T8R", "iclr_2021_IW-EI6BCxy", "iclr_2021_IW-EI6BCxy", "74C1_3C364_", "crhZz5T2gSg", "-oqhwOZUhm4", "hCF9SQ_Eh9i", "UA6jwJaVggg", "UA6jwJaVggg", "UA6jwJaVggg", "hCF9SQ_Eh9i", "-oqhwOZUhm4", "crhZz5T2gSg", "iclr_2021...
iclr_2021_n5yBuzpqqw
Error Controlled Actor-Critic Method to Reinforcement Learning
In reinforcement learning (RL) algorithms that incorporate function approximation methods, the approximation error of the value function inevitably causes overestimation and has a negative impact on the convergence of the algorithms. To mitigate the negative effects of approximation error, we propose a new ...
withdrawn-rejected-submissions
The majority of the reviewers believe that this paper is not ready for publication. Among their concerns is that the paper has limited novelty, especially in relation to existing work that uses the KL constraint. Some of the reviewers also believe that the arguments are sometimes hand-wavy and not rigorous. For example,...
test
[ "Tq8A76BtXk", "ioL0h8G3hz", "8VAPfxvoY4i", "k9Bh-oiJb7", "P4OmpwdSRTA", "xytiZVJzaMi", "FxEtpxNRmMR", "zwXSpLesd38", "wqynHckwaN-", "Md8qYE2WC4c", "XoEMmcEMn0", "Wycr5L3RvU", "cba7rXe8480", "ufuMIhcQT7", "0Namc6ppAm5", "CopCioX0eI", "HU150XC7ywq", "wnZ1m814UZO" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_rev...
[ "In this paper, the authors study the error introduced by the estimation of critic function in the Actor-Critic algorithm. Then the author proposed an algorithm that utilizes the idea of double Q learning and using a KL-divergence like regularization method to control this error. Experimentally the proposed algorit...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 4 ]
[ "iclr_2021_n5yBuzpqqw", "8VAPfxvoY4i", "Wycr5L3RvU", "zwXSpLesd38", "k9Bh-oiJb7", "FxEtpxNRmMR", "XoEMmcEMn0", "cba7rXe8480", "cba7rXe8480", "XoEMmcEMn0", "CopCioX0eI", "Tq8A76BtXk", "HU150XC7ywq", "iclr_2021_n5yBuzpqqw", "wnZ1m814UZO", "iclr_2021_n5yBuzpqqw", "iclr_2021_n5yBuzpqqw",...
iclr_2021_JbAqsfbYsJy
Action and Perception as Divergence Minimization
We introduce a unified objective for action and perception of intelligent agents. Extending representation learning and control, we minimize the joint divergence between the combined system of agent and environment and a target distribution. Intuitively, such agents use perception to align their beliefs with the world,...
withdrawn-rejected-submissions
The paper presents a KL-divergence minimisation approach to the action–perception loop, and thus offers a unifying view on concepts such as Empowerment, entropy-based RL, optimal control, etc. The paper does two things here: it serves as a survey paper, but on top of that puts these in a unifying theory. While the...
train
[ "nDlpy1miu1R", "Jqys-tM3Tro", "nr85wo36REa", "ZB-BFBqGElQ", "I_G2tTEoW-l", "VAbtX_tijDV", "UM5AbA24Vlf", "z05qD9y6R3", "d22odngcxYP", "UaFYHLMYdMQ", "X6nJXPVk8lh", "9Z3e6YCxzd7", "CpMYk4Ee-ng", "tEx15m-FfVe", "bDfgH7x4hwl", "7qAUHCe7C-", "NYf1MGB8YeJ", "tin2FNqcuO9", "-3T1MGtPeOm...
[ "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_r...
[ "The authors proposed to use the joint KL divergence between the generative joint distribution and the target distribution (containing latent variables which could correspond to latent parts we wanted to model (e.g. beliefs). It was illustrative to discuss decomposing the joint KL into different ways and thus formi...
[ 6, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 3 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2021_JbAqsfbYsJy", "nr85wo36REa", "ZB-BFBqGElQ", "UM5AbA24Vlf", "VAbtX_tijDV", "9Z3e6YCxzd7", "7qAUHCe7C-", "UaFYHLMYdMQ", "iclr_2021_JbAqsfbYsJy", "tEx15m-FfVe", "-3T1MGtPeOm", "-3T1MGtPeOm", "vcakzIUU_Qx", "vcakzIUU_Qx", "nDlpy1miu1R", "nDlpy1miu1R", "d22odngcxYP", "d22odng...
iclr_2021_iVaPuvROtMm
Learning Stochastic Behaviour from Aggregate Data
Learning nonlinear dynamics from aggregate data is a challenging problem, since the full trajectory of each individual is not available: the individual observed at one time point may not be observed at the next time point, or the identity of the individual is unavailable. This is in sharp contrast to learning dynamics w...
withdrawn-rejected-submissions
While the reviewers noted a number of strengths of your paper and the approach that you took, and agreed that you had tackled an important problem, concerns remained about presentation and clarity. I agree. (Here are just a few miscellaneous comments: the very first paragraph of the Introduction needs to be rewritten for...
train
[ "5XEz9k0gOr0", "bVeh7yW-K0", "f3JaaDEQCN1", "2EFRoAdLRf7", "sh0ZzhdWKMa", "kPOz7UXZLR" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "The paper addresses an interesting problem: learning stochastic dynamics from aggregate data. The aggregate data refers to the setting when the data is anonymized. For example, one has data about the location of the birds at different instants in time, but the birds are not labeled and can not be distinguished fro...
[ 5, 4, -1, -1, -1, 8 ]
[ 4, 4, -1, -1, -1, 3 ]
[ "iclr_2021_iVaPuvROtMm", "iclr_2021_iVaPuvROtMm", "bVeh7yW-K0", "kPOz7UXZLR", "5XEz9k0gOr0", "iclr_2021_iVaPuvROtMm" ]
iclr_2021__qoQkWNEhS
Ricci-GNN: Defending Against Structural Attacks Through a Geometric Approach
Graph neural networks (GNNs) rely heavily on the underlying graph topology and thus can be vulnerable to malicious attacks targeting at graph structures. We propose a novel GNN defense algorithm against structural attacks that maliciously modify graph topology. In particular, we discover a robust representation of the ...
withdrawn-rejected-submissions
The paper proposes a new defense against adversarial attacks on graphs using a reweighting scheme based on Ricci-flow. Reviewers highlighted that the paper introduces interesting ideas and that the use of Ricci-curvature/flow is a novel and promising contribution. Reviewers also recognized that the paper has significan...
train
[ "Goam2hEXZCo", "63cGjpDVemO", "21htjuILV1U", "1quoCf9GRn4", "JkCMgk7fjy8", "uHJBAJliga_", "ZZFBlm9CjYH", "cbn4gQAiIXW", "wxmg8iNtood", "1-Su-X9Agg", "96SyFTrLMNZ", "a8FV1_t9IsQ", "r1sgvF1YXn" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "Summary:\nIn Ricci-GCN new graphs are resampled in each iteration of the training phase based on the Ricci flow metric. The Ricci flow incorporates curvature information and captures the intrinsic geometry of the graph. Compared to e.g. spectral embedding it is more robust to structural perturbations. This leads t...
[ 6, -1, -1, 5, -1, -1, 5, -1, -1, -1, -1, -1, -1 ]
[ 5, -1, -1, 5, -1, -1, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2021__qoQkWNEhS", "21htjuILV1U", "cbn4gQAiIXW", "iclr_2021__qoQkWNEhS", "uHJBAJliga_", "96SyFTrLMNZ", "iclr_2021__qoQkWNEhS", "1quoCf9GRn4", "ZZFBlm9CjYH", "iclr_2021__qoQkWNEhS", "Goam2hEXZCo", "r1sgvF1YXn", "ZZFBlm9CjYH" ]
iclr_2021_eyXknI5scWu
Investigating and Simplifying Masking-based Saliency Methods for Model Interpretability
Saliency maps that identify the most informative regions of an image for a classifier are valuable for model interpretability. A common approach to creating saliency maps involves generating input masks that mask out portions of an image to maximally deteriorate classification performance, or mask in an image to preser...
withdrawn-rejected-submissions
While the reviewers found parts of the paper interesting, the main concerns about this paper were the lack of novelty and the marginal improvements obtained by the proposed methods.
val
[ "2u5QpP13al0", "0tBW1qDBw2c", "qYzAHWLf7C-", "ruOv5IOQBWG", "FUQpLf5-gOt", "UV6c3AJYwlo", "onfYxd75sc4", "eZTV9zB7C9f", "wwl2hQPQpMk", "PDrOsmRDfg", "KODoHhOBji", "tfVaT9VxvyX" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "/*************************Post-Rebuttal**********************/\nThe authors address many of my concerns well, and I agree with their rebuttal.\n\nThe modified manuscript also looks good, too.\n\nI raise my rating.\n\n/*************************Pre-Rebuttal**********************/\n\nPros.:\n1. The proposed method is...
[ 6, -1, 4, -1, -1, -1, -1, -1, -1, -1, 7, 6 ]
[ 4, -1, 5, -1, -1, -1, -1, -1, -1, -1, 3, 2 ]
[ "iclr_2021_eyXknI5scWu", "ruOv5IOQBWG", "iclr_2021_eyXknI5scWu", "PDrOsmRDfg", "2u5QpP13al0", "iclr_2021_eyXknI5scWu", "2u5QpP13al0", "tfVaT9VxvyX", "KODoHhOBji", "qYzAHWLf7C-", "iclr_2021_eyXknI5scWu", "iclr_2021_eyXknI5scWu" ]
iclr_2021_VMAesov3dfU
Gradient Descent Resists Compositionality
In this paper, we argue that gradient descent is one of the reasons that make compositionality learning hard during neural network optimization. We find that the optimization process imposes a bias toward non-compositional solutions. This is caused by gradient descent trying to use all available and redundant informat...
withdrawn-rejected-submissions
Dear Authors, Thank you very much for submitting this very interesting paper. This work analyzes the effect of gradient descent training on the compositionality of the learned model. Their main argument is that GD tries to use the redundant information in the data and, as a result, it doesn't generalize well. The pap...
val
[ "3eLc8C5MVll", "Xr4r5zLeqZr", "kh7QzOiwkj", "0ocSHEuj-oe", "yJ9QaZ1tZm3", "Dg9NAKD8Zum", "TrdTj38yaw", "WrISzmAn-JN", "YTtni38PJP8", "PRm5Lk3XHO6", "Ah4YHsG6Pc", "sUWjKTjpSbQ", "k4LmsCHVohi", "N-G-ZroBWO" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the question.\n\nA: This paper studies compositional generalization by studying conditional independence property (it is not invariance), which is a key property for the generalization. Please see Section 3 for more details. The experiment is designed to show the effect of breaking the conditional in...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 1, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 3 ]
[ "0ocSHEuj-oe", "yJ9QaZ1tZm3", "Dg9NAKD8Zum", "WrISzmAn-JN", "PRm5Lk3XHO6", "TrdTj38yaw", "N-G-ZroBWO", "Ah4YHsG6Pc", "sUWjKTjpSbQ", "k4LmsCHVohi", "iclr_2021_VMAesov3dfU", "iclr_2021_VMAesov3dfU", "iclr_2021_VMAesov3dfU", "iclr_2021_VMAesov3dfU" ]
iclr_2021_Cue2ZEBf12
Towards Adversarial Robustness of Bayesian Neural Network through Hierarchical Variational Inference
Recent works have applied Bayesian Neural Networks (BNNs) to adversarial training and shown improved adversarial robustness via the BNN's strength in stochastic gradient defense. However, we have found that, in general, the BNN loses its stochasticity after training with the BNN's posterior. As a result, th...
withdrawn-rejected-submissions
This paper improves on previous work (adv-BNN) with hierarchical variational inference. It observes that mean-field VI training for BNNs often results in close-to-deterministic approximate posterior distributions for weights, which effectively makes the BNN closer to a deterministic neural network, thereby losing the rob...
train
[ "vp6VvPN7d8", "y_Xl8eyYieW", "buAl4XO_88G", "yZuA3QFEXSj", "kLJc5U4pL5w", "l3HmjnChrFH", "kGzv6PTNkbM", "N0oYVHSWMcu", "jVmnywlKoaE", "-YIQpmKWob3" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "Updates:\nThe author addressed my concerns about the experiments. Though the improvement is marginal and I still have some concerns, I’m ok to accept the paper. I’ll change my score to 6.\n========================\n\nSummary:\nThe paper studied the adversarial Bayesian Neural Network and found that the stochastici...
[ 6, 5, 6, -1, -1, -1, -1, -1, -1, 5 ]
[ 4, 4, 3, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_Cue2ZEBf12", "iclr_2021_Cue2ZEBf12", "iclr_2021_Cue2ZEBf12", "buAl4XO_88G", "vp6VvPN7d8", "y_Xl8eyYieW", "iclr_2021_Cue2ZEBf12", "-YIQpmKWob3", "yZuA3QFEXSj", "iclr_2021_Cue2ZEBf12" ]
iclr_2021_2kImxCmYBic
Numeric Encoding Options with Automunge
Mainstream practice in machine learning with tabular data may take for granted that any feature engineering beyond scaling for numeric sets is superfluous in the context of deep neural networks. This paper will offer arguments for potential benefits of extended encodings of numeric streams in deep learning by way of a surv...
withdrawn-rejected-submissions
While I'm sure there are many merits to the underlying work here, the consensus of the reviews is to recommend a rejection as an ICLR paper. That recommendation is based on issues with significance as well as on clarity issues, noted by reviewers even after the revisions. One pattern I noticed was that it seemed uncle...
train
[ "snb026fDP73", "FcyopK3kHdq", "_WGUJQ7fAKR", "c0EEP_VNafe", "l3fx5owi5MM", "9rMobsk_F5V", "LCPQJ0GTXXQ", "UsA62dNCUjp", "FQyqtc4G2M", "-0MHIW-S3x0", "rXVGxCIeiUR", "DQenjeUfaY5", "vu7sdCPk0tc", "8uk3GTjdDVa", "XrATo0w5eX4", "eN-1gZc4tsA", "DsaBzmjFmPu", "KYHZ5KebbFa", "Qj-L5s0CTJ...
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Hello reviewers. Please accept my sincerest gratitude for tolerating my updates through the review period. As may have been apparent, the validation of data augmentation by noise injection has been coalescing in real time through these updates, and I believe it is now well vetted. As noted in a prior comment, I ha...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 3, 2 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "iclr_2021_2kImxCmYBic", "_WGUJQ7fAKR", "c0EEP_VNafe", "l3fx5owi5MM", "9rMobsk_F5V", "LCPQJ0GTXXQ", "UsA62dNCUjp", "iclr_2021_2kImxCmYBic", "vu7sdCPk0tc", "XrATo0w5eX4", "DsaBzmjFmPu", "eN-1gZc4tsA", "KYHZ5KebbFa", "-0MHIW-S3x0", "Qj-L5s0CTJo", "iclr_2021_2kImxCmYBic", "iclr_2021_2kI...
iclr_2021_Kz42iQirPJI
Towards Learning to Remember in Meta Learning of Sequential Domains
Meta-learning has made rapid progress in recent years, with recent extensions made to avoid catastrophic forgetting in the learning process, namely continual meta learning. It is desirable to generalize the meta learner’s ability to continuously learn in sequential domains, which is largely unexplored to date. We found t...
withdrawn-rejected-submissions
The paper proposes a sequential meta-learning method over few-shot sequential domains, which meta learns both model parameters and learning rate vectors to capture task-general representations. Reviewers raised many insightful and constructive comments. The main themes are as follows: - The problem setting needs furth...
val
[ "uous3Lj6VKZ", "0rDp_3GLzWc", "g-euP0XnEm", "Byjcq4AL3NQ", "jwMB8oT2KRl", "4tSjDhsfLfN", "8qLbm8ClIVF", "l6HW6H_0K66", "PmeRfM3lkOx", "GIZEyVWt38o" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary\n\nAt the heart of this paper are two separate contributions. The first is a new online meta-learning problem setting where the meta-learner acts on a sequence of few-shot learning *domains*, as opposed to tasks within a single domain. The second is a method for meta-learning with this form of domain shift...
[ 6, 5, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ 5, 5, -1, -1, -1, -1, -1, -1, 3, 5 ]
[ "iclr_2021_Kz42iQirPJI", "iclr_2021_Kz42iQirPJI", "GIZEyVWt38o", "uous3Lj6VKZ", "0rDp_3GLzWc", "iclr_2021_Kz42iQirPJI", "jwMB8oT2KRl", "PmeRfM3lkOx", "iclr_2021_Kz42iQirPJI", "iclr_2021_Kz42iQirPJI" ]
iclr_2021_1NRMmEUyXMu
World Model as a Graph: Learning Latent Landmarks for Planning
Planning, the ability to analyze the structure of a problem in the large and decompose it into interrelated subproblems, is a hallmark of human intelligence. While deep reinforcement learning (RL) has shown great promise for solving relatively straightforward control tasks, it remains an open problem how to best incorp...
withdrawn-rejected-submissions
This paper proposes a model-based RL algorithm which, instead of simply fitting a parameterized transition model and using rollouts for planning, learns latent landmarks via distance-based clustering and conducts planning on the learned graph. Although some of these ideas themselves have appeared in the literature, the over...
train
[ "wmP0mIFNJNY", "DJsCqwuDwTl", "fwPi_pZyptm", "_WeL21aim2-", "X7SiLwXP3Ks", "KEL7BtlMS_j", "csuWILxiNG", "bpru-jNoMm4" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper approaches long horizon planning by learning a sparse graphical representation. The proposed algorithm, L3P, proceeds by learning a latent space which enforces a distance measure, where this distance is learned to mimic the number of steps between states via a goal conditioned Q-function. A clustering a...
[ 5, 5, 7, 6, -1, -1, -1, -1 ]
[ 5, 4, 3, 4, -1, -1, -1, -1 ]
[ "iclr_2021_1NRMmEUyXMu", "iclr_2021_1NRMmEUyXMu", "iclr_2021_1NRMmEUyXMu", "iclr_2021_1NRMmEUyXMu", "_WeL21aim2-", "fwPi_pZyptm", "DJsCqwuDwTl", "wmP0mIFNJNY" ]
iclr_2021_hLElJeJKxzY
Deep Q Learning from Dynamic Demonstration with Behavioral Cloning
Although Deep Reinforcement Learning (DRL) has proven its capability to learn optimal policies by directly interacting with simulation environments, how to combine DRL with supervised learning and leverage additional knowledge to assist the DRL agent effectively still remains difficult. This study proposes a novel ...
withdrawn-rejected-submissions
The reviewer acknowledged that the proposed method is simple and seems to work well on the chosen benchmarks. Yet they expressed several concerns that were not fully addressed by the authors in their responses. The major concern is about the experimental setup. The chosen tasks have been judged too simple and quite diff...
train
[ "d275UPzeZcH", "1URtDDEb6F", "bNgV4V6NMMg", "suqJuhjPa-", "JnxnkNCtke", "Lqj0YmxDGh", "ipD9la7rH4-", "Hlq7-b5bAx", "tZdEeok0gxC", "ElwfW-0Maw", "U2thcg5eBr" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "=====POST-REBUTTAL COMMENTS======== \n\nI thank the authors for the response and the efforts in the updated draft. Most of my concerns were addressed. This is a simple, but nice idea. After reading the rebuttal and the other reviews I am recommending to accept the paper.\n\n########################################...
[ 7, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 2, 4, 5 ]
[ "iclr_2021_hLElJeJKxzY", "bNgV4V6NMMg", "U2thcg5eBr", "U2thcg5eBr", "iclr_2021_hLElJeJKxzY", "tZdEeok0gxC", "d275UPzeZcH", "ElwfW-0Maw", "iclr_2021_hLElJeJKxzY", "iclr_2021_hLElJeJKxzY", "iclr_2021_hLElJeJKxzY" ]
iclr_2021_49V11oUejQ
Efficient Robust Training via Backward Smoothing
Adversarial training is so far the most effective strategy in defending against adversarial examples. However, it suffers from high computational cost due to the iterative adversarial attacks in each training step. Recent studies show that it is possible to achieve Fast Adversarial Training by performing a single-step ...
withdrawn-rejected-submissions
This paper studies efficient robust training. The key idea is to use backward smoothing as an advanced random initialization to improve a model's adversarial robustness. The approach is sound, well grounded, and quite logical. Results demonstrate its effectiveness. However, there exist some limitations: 1) Andriushc...
val
[ "4RM87Worw_B", "4JwCGy-8Kp2", "GrbC_6RQPyX", "a1BjJizLPG", "ZkLuvM9cI2r", "PDc8KI6vEGp", "SGdR2o5JCLg", "pt_ROtHJuW", "JsV0Z4TTbHb", "N_bDgBk45CY", "oq583wkn5S3", "mHX3GZSjA8W", "_TtiPQH91TI", "kn5ykq17GkJ", "hTKlEfQzRfX", "UncqJRL-20-", "MZYEo_4xLXH", "jR7GiNU6Xrn", "Q9w7fXicZ5C...
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "public", "official_r...
[ "This work proposes backward smoothing as an advanced random initialization to improve a model's adversarial robustness. The paper is well-written and easy to follow. However, I have the following concerns:\n1) The paper argues that random initialization can help fast adversarial training because it helps improve t...
[ 5, 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 4, 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_49V11oUejQ", "iclr_2021_49V11oUejQ", "iclr_2021_49V11oUejQ", "JsV0Z4TTbHb", "pt_ROtHJuW", "SGdR2o5JCLg", "mHX3GZSjA8W", "MZYEo_4xLXH", "UncqJRL-20-", "_TtiPQH91TI", "iclr_2021_49V11oUejQ", "4RM87Worw_B", "hTKlEfQzRfX", "Q9w7fXicZ5C", "Qan3eHWT17Y", "4JwCGy-8Kp2", "GrbC_6RQ...
iclr_2021_WZnVnlFBKFj
Federated Learning With Quantized Global Model Updates
We study federated learning (FL), which enables mobile devices to utilize their local datasets to collaboratively train a global model with the help of a central server, while keeping data localized. At each iteration, the server broadcasts the current global model to the devices for local training, and aggregates the ...
withdrawn-rejected-submissions
Although the reviewers acknowledge some contributions of the paper, it has limitations in both its theoretical results and its numerical experiments. The effectiveness of the proposed method is still unclear. The authors should consider the following issues for a future submission: 1) The justification of $\t...
train
[ "dQdSdRZuyJ_", "A4Fs0I7boH", "iO2x1RwyJRT", "QTCI_eHf6oy", "FdvFEE24RD", "bfJwLGZ7_IH", "8ffAtDFPc3F", "ijlc_cAF_0r", "9DPZn9wK7TT", "LI--RzOIiBt", "6_RLg9BJxo" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper studies federated learning with quantization. The problem setting is very standard, including both iid and non-iid cases. This work proposes a new algorithm, called lossy FL, to save the communication costs, especially from the broadcasting direction. To my understanding, the algorithm is new but still ...
[ 5, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "iclr_2021_WZnVnlFBKFj", "LI--RzOIiBt", "dQdSdRZuyJ_", "LI--RzOIiBt", "dQdSdRZuyJ_", "9DPZn9wK7TT", "6_RLg9BJxo", "iclr_2021_WZnVnlFBKFj", "iclr_2021_WZnVnlFBKFj", "iclr_2021_WZnVnlFBKFj", "iclr_2021_WZnVnlFBKFj" ]