Dataset schema (column name: type):

paper_id: string (length 19–21)
paper_title: string (length 8–170)
paper_abstract: string (length 8–5.01k)
paper_acceptance: string (18 classes)
meta_review: string (length 29–10k)
label: string (3 classes)
review_ids: list
review_writers: list
review_contents: list
review_ratings: list
review_confidences: list
review_reply_tos: list
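A single record matching the schema above can be sketched as a plain Python dict. This is a minimal illustrative sketch, not real data: the field names come from the schema, the values are shortened stand-ins, and the `check_record` helper is an assumption about the intended invariant (the six `review_*` lists are index-parallel).

```python
# Minimal sketch of one record in this reviews dataset. Field names follow
# the schema; values are illustrative stand-ins, not actual dataset content.
record = {
    "paper_id": "iclr_2021_eEeyRrKVfbL",
    "paper_title": "Balancing training time vs. performance with Bayesian Early Pruning",
    "paper_abstract": "Pruning is an approach to alleviate overparameterization ...",
    "paper_acceptance": "withdrawn-rejected-submissions",
    "meta_review": "This paper considers the problem of pruning deep neural networks ...",
    "label": "test",
    "review_ids": ["MtvY8vJNhjI", "nMLdqHEWgXG"],
    "review_writers": ["official_reviewer", "official_reviewer"],
    "review_contents": ["The paper proposes ...", "The writing quality ..."],
    "review_ratings": [5, 6],
    "review_confidences": [5, 5],
    "review_reply_tos": ["iclr_2021_eEeyRrKVfbL", "iclr_2021_eEeyRrKVfbL"],
}

def check_record(rec):
    """The six review_* lists are index-parallel; verify they share a length."""
    lengths = {k: len(rec[k]) for k in rec if k.startswith("review_")}
    assert len(set(lengths.values())) == 1, lengths
    return lengths

print(check_record(record))
```

The same parallel-list convention holds in every record below: index i of `review_writers`, `review_ratings`, etc. all describe the comment with id `review_ids[i]`.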
iclr_2021_eEeyRrKVfbL
Balancing training time vs. performance with Bayesian Early Pruning
Pruning is an approach to alleviate overparameterization of deep neural networks (DNN) by zeroing out or pruning DNN elements with little to no efficacy at a given task. In contrast to related works that do pruning before or after training, this paper presents a novel method to perform early pruning of DNN elements (e....
withdrawn-rejected-submissions
This paper considers the problem of pruning deep neural networks (DNNs) during training. The key idea is to include DNN elements only if they improve the predictive mean of the saliency (efficiency of the DNN elements in terms of minimizing the loss function). The objective of early pruning is to preserve the sub-netwo...
test
[ "MtvY8vJNhjI", "QvPk0fYIvIk", "jaHqZHFt1ZK", "ZQjTWPdNCrY", "MyP65VLGan", "yXXwLd6-Vts", "0zFLCJnPX5", "WWmYu3mrEQ", "nsRNWjJTWUr", "TutO705QPe0", "nMLdqHEWgXG", "m38SFfTc7-Y", "QQsaYZ8Icj" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a Bayesian-based approach to early prune parameters, which are predicted to have low saliency/importance, with the goal of accelerating the training of deep neural networks. The predictor is a \"multi-output Gaussian process\" which is computation expensive.\n\nThe writing quality and clarity of...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 2 ]
[ "iclr_2021_eEeyRrKVfbL", "MyP65VLGan", "iclr_2021_eEeyRrKVfbL", "MyP65VLGan", "yXXwLd6-Vts", "MtvY8vJNhjI", "m38SFfTc7-Y", "QQsaYZ8Icj", "nMLdqHEWgXG", "iclr_2021_eEeyRrKVfbL", "iclr_2021_eEeyRrKVfbL", "iclr_2021_eEeyRrKVfbL", "iclr_2021_eEeyRrKVfbL" ]
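In each record, `review_reply_tos[i]` appears to be the paper_id when comment i is a top-level review and another comment's id when it is a reply, which makes the discussion thread recoverable. A sketch under that assumption (ids taken from the record above; the slice is illustrative, not the full lists):

```python
# Sketch: split one record's comments into top-level reviews and replies.
# Assumption: review_reply_tos[i] == paper_id marks a top-level comment;
# any other value is the id of the parent comment being replied to.
paper_id = "iclr_2021_eEeyRrKVfbL"
review_ids = ["MtvY8vJNhjI", "QvPk0fYIvIk", "jaHqZHFt1ZK", "nMLdqHEWgXG"]
review_reply_tos = [
    "iclr_2021_eEeyRrKVfbL",  # top-level review
    "MyP65VLGan",             # reply to another comment
    "iclr_2021_eEeyRrKVfbL",  # top-level review
    "iclr_2021_eEeyRrKVfbL",  # top-level review
]

def split_thread(paper, ids, parents):
    """Return (top-level comment ids, reply ids) for one record."""
    top = [i for i, p in zip(ids, parents) if p == paper]
    replies = [i for i, p in zip(ids, parents) if p != paper]
    return top, replies

top, replies = split_thread(paper_id, review_ids, review_reply_tos)
print(len(top), len(replies))  # prints: 3 1
```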
iclr_2021_dKwmCtp6YI
Representation and Bias in Multilingual NLP: Insights from Controlled Experiments on Conditional Language Modeling
Inspired by the phenomenon of performance disparity between languages in machine translation, we investigate whether and to what extent languages are equally hard to "conditional-language-model". Our goal is to improve our understanding and expectation of the relationship between language, data representation, size, an...
withdrawn-rejected-submissions
The paper studies to what extent languages are hard to model with a conditional language model, based on information-theoretic measurements. Overall, the reviewers value the systematic and extensive controlled experiments presented in the paper. However, the presentation of the paper makes it very hard to follow, and reviewe...
train
[ "6vp_hHxTzOu", "q9moLR4DMkF", "F_R_iujmKO", "oo-G9pwX4BG", "5vVEqh3f4Vd", "MhuxIhbU8NB", "hongXQyc_9s", "76s5dfLC6sM", "1DBqOoGCZI", "MNJQVbu9lz0", "51q7nkcW7yZ", "Dnd42UqhNsn", "YTcewTWPnCE", "nfIPmG5ej1-", "F8Vl64Npsnf", "vyb_ir4qYaF", "ZTDrA_qwKLm", "XFM-CP46DgZ", "zh4Xv5Ynj_D...
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper investigates whether languages are equally hard to Conditional-Language-Model (CLM). To do this, the authors perform controlled experiments by modeling text from parallel data from 6 typologically diverse languages. They pair the languages and perform experiments in 30 directions with Transformers, and c...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 3 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2021_dKwmCtp6YI", "F_R_iujmKO", "F8Vl64Npsnf", "MNJQVbu9lz0", "iclr_2021_dKwmCtp6YI", "6vp_hHxTzOu", "iclr_2021_dKwmCtp6YI", "1DBqOoGCZI", "ZTDrA_qwKLm", "51q7nkcW7yZ", "nfIPmG5ej1-", "YTcewTWPnCE", "6vp_hHxTzOu", "XFM-CP46DgZ", "zh4Xv5Ynj_D", "iclr_2021_dKwmCtp6YI", "iclr_2021...
iclr_2021_HkUfnZFt1Rw
Dissecting graph measures performance for node clustering in LFR parameter space
Graph measures can be used for graph node clustering using metric clustering algorithms. There are multiple measures applicable to this task, and which one performs better is an open question. We study the performance of 25 graph measures on generated graphs with different parameters. While usually measure comparisons ...
withdrawn-rejected-submissions
This paper studies various graph measures in depth. The paper was reviewed by three expert reviewers, who complimented the ease of understanding owing to clear writing. However, they also expressed concerns about limited novelty, insufficient theoretical justification, and an unrealistic setting. The authors are encouraged to continue rese...
train
[ "1ZI3FjX2sbg", "l7LtxINn9-I", "ATNsMEU9a9", "HZQtrj2F3Ca", "If4r7u7KLG5", "EFcZUvFTAyX", "P5p6LkLt9AB", "pV6GmG90MmX", "rQ5mjcmKv31", "89WhVX3KfR4", "yA_F5WmNcS" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We would like to thank AnonReviewer1 for providing constructive suggestions, which helped to improve the manuscript. We have addressed all the comments below.\n\n_REVIEWER 1: Using 7500 LFR-generated graphs as a benchmarks suite, the authors compare 25 graph clustering measures, determining the best measure for ev...
[ -1, -1, -1, -1, -1, -1, -1, 6, 5, 3, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 2, 5, 5 ]
[ "pV6GmG90MmX", "ATNsMEU9a9", "rQ5mjcmKv31", "If4r7u7KLG5", "89WhVX3KfR4", "P5p6LkLt9AB", "yA_F5WmNcS", "iclr_2021_HkUfnZFt1Rw", "iclr_2021_HkUfnZFt1Rw", "iclr_2021_HkUfnZFt1Rw", "iclr_2021_HkUfnZFt1Rw" ]
iclr_2021_9w03rTs7w5
Transfer among Agents: An Efficient Multiagent Transfer Learning Framework
Transfer Learning has shown great potential to enhance the single-agent Reinforcement Learning (RL) efficiency, by sharing learned policies of previous tasks. Similarly, in multiagent settings, the learning performance can also be promoted if agents can share knowledge between each other. However, it remains an open qu...
withdrawn-rejected-submissions
The paper proposes a method for multi-agent, option-based policy transfer, where agents help each other learn by exchanging policies. The core idea behind the paper is novel, as it addresses the new and emerging topic of social learning, and is of interest to the ICLR community. The authors significantly improved the paper with ...
train
[ "mAAEj-TfUql", "sgwbozLk09N", "8OhNug8FXy8", "sWdneW5EpfN", "d_q9gbNRQqr", "DAPxb7ukR0d", "nWXOxKd1YXo", "WwqTSFdELau", "OxpEEKqdjsG", "Wr6wlZkzfAz", "ibCRfPtc5yP", "o-mcuAthN32", "nF88eM88Ex5", "msuHI-ER2L8", "8AgE93PUZjW", "Ul2KYY3NuSG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposed an option-based framework for multiple agents to share knowledge with each other in the same MARL task. For scalability and robustness, two variants of the framework are designed, including 1) a global option advisor, which has the access to the global information of the environment; 2) local o...
[ 6, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4 ]
[ 5, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2021_9w03rTs7w5", "iclr_2021_9w03rTs7w5", "iclr_2021_9w03rTs7w5", "WwqTSFdELau", "iclr_2021_9w03rTs7w5", "o-mcuAthN32", "sgwbozLk09N", "8AgE93PUZjW", "Wr6wlZkzfAz", "Ul2KYY3NuSG", "OxpEEKqdjsG", "msuHI-ER2L8", "mAAEj-TfUql", "8OhNug8FXy8", "iclr_2021_9w03rTs7w5", "iclr_2021_9w03r...
iclr_2021_DGttsPh502x
Unsupervised Discovery of Interpretable Latent Manipulations in Language VAEs
Language generation models are attracting more and more attention due to their constantly increasing quality and remarkable generation results. State-of-the-art NLG models like BART/T5/GPT-3 do not have latent spaces, therefore there is no natural way to perform controlled generation. In contrast, less popular models w...
withdrawn-rejected-submissions
This paper proposes a simple method to discover latent manipulations in trained text VAEs. Compared to random and coordinate directions, the authors found that by performing PCA on the latent code to find directions that maximize variance, more interpretable text manipulations can be achieved. This paper receives 4 r...
train
[ "ZF1DUm4mdXI", "rx7V4ECBVmZ", "Bl0s-kVQQeL", "sLfWkBPDA3C", "S5uNTnEtojx", "YA_fvk0vmP0", "hmHHN99TBv", "MWu4hOJKhcx" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "-------------------\nSummary\n-------------------\nThis paper proposes a simple approach to discover interpretable latent manipulations in trained text VAEs. The method essentially involves performing PCA on the latent representations to find directions that maximize variance. The authors argue that this results i...
[ 3, -1, -1, -1, -1, 3, 5, 4 ]
[ 4, -1, -1, -1, -1, 4, 3, 4 ]
[ "iclr_2021_DGttsPh502x", "ZF1DUm4mdXI", "YA_fvk0vmP0", "hmHHN99TBv", "MWu4hOJKhcx", "iclr_2021_DGttsPh502x", "iclr_2021_DGttsPh502x", "iclr_2021_DGttsPh502x" ]
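In `review_ratings` and `review_confidences`, the value -1 appears to be a sentinel for comments that carry no score (the positions line up with "author" entries in `review_writers`), so it must be filtered out before aggregating. A sketch using the ratings of the record above, under that sentinel assumption:

```python
# Sketch: mean reviewer rating for one record. Assumption: -1 is a sentinel
# marking unscored comments (e.g. author responses) and is not a real score.
ratings = [3, -1, -1, -1, -1, 3, 5, 4]  # from the record above

def mean_rating(scores):
    """Average of the real scores, or None if every entry is a sentinel."""
    real = [s for s in scores if s != -1]
    return sum(real) / len(real) if real else None

print(mean_rating(ratings))  # prints: 3.75
```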
iclr_2021_ba82GniSJdc
Task Calibration for Distributional Uncertainty in Few-Shot Classification
As numerous meta-learning algorithms improve performance when solving few-shot classification problems for practical applications, accurate prediction of uncertainty, though challenging, has been considered essential. In this study, we contemplate modeling uncertainty in a few-shot classification framework and propose ...
withdrawn-rejected-submissions
This paper tries to address the uncertainty calibration problem in meta-learning by weighting the gradients from different tasks according to class-wise similarity. The reviewers raised many concerns, most of which were still not properly addressed after the rebuttal period. The main concern...
train
[ "nQBMoGSLIzD", "1kxkM9DBopv", "DaQ3Xy_ZnZ", "J1_dkTA_sIZ", "-gVYgwPbBol", "Sc5-1XX7nGq", "PhPtWJsqXk7", "4aGzQLyJE-" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper presented a task calibration (TC) method, which introduces the notion of \"distributional uncertainty\", for few-shot classification. Two TC extensions of existing methods (MAML and ProtoNet), namely TC-MAML and TC-ProtoNet, have been presented and experimented. The main contributions of this work the a...
[ 5, 4, 5, -1, -1, -1, -1, 4 ]
[ 3, 4, 4, -1, -1, -1, -1, 3 ]
[ "iclr_2021_ba82GniSJdc", "iclr_2021_ba82GniSJdc", "iclr_2021_ba82GniSJdc", "nQBMoGSLIzD", "1kxkM9DBopv", "4aGzQLyJE-", "DaQ3Xy_ZnZ", "iclr_2021_ba82GniSJdc" ]
iclr_2021_bi7nTZy4QmH
Learning Contextual Perturbation Budgets for Training Robust Neural Networks
Existing methods for training robust neural networks generally aim to make models uniformly robust on all input dimensions. However, different input dimensions are not uniformly important to the prediction. In this paper, we propose a novel framework to train certifiably robust models and learn non-uniform perturbation...
withdrawn-rejected-submissions
Reviewers raised various concerns about the motivation, unclear justification of the idea and claims, insufficient comparison with related work, and weak experimental results. While the authors made efforts to address some of these issues in the rebuttal, the revision did not reach publication quality. Overall, ...
train
[ "0NJiIASOXIW", "KMVUjUsdNpx", "ytFx8Pvek3j", "GLWqv3N2SHV", "8AWxv51hPj", "TxhYuQH36f_", "Nqq5N1h3ezX", "i-BtE7jJrdZ", "ZVxYMMvemS4", "8LRW2MTJjM5", "2nPRuBVImVm", "SDtweLl0xEg", "QICd9osZivN" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Update after the rebuttal:\n\nI read response from the authors and other reviews. I have increased my score to 5 given that authors now performed some\ncomparison with Liu et al. However, I still believe that the threat model is not realistic and that attacker can not be bounded by the \nbudget that is produced by...
[ 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3 ]
[ "iclr_2021_bi7nTZy4QmH", "iclr_2021_bi7nTZy4QmH", "0NJiIASOXIW", "0NJiIASOXIW", "0NJiIASOXIW", "SDtweLl0xEg", "QICd9osZivN", "KMVUjUsdNpx", "KMVUjUsdNpx", "KMVUjUsdNpx", "KMVUjUsdNpx", "iclr_2021_bi7nTZy4QmH", "iclr_2021_bi7nTZy4QmH" ]
iclr_2021_AwPGPgExiYA
Differentiable Learning of Graph-like Logical Rules from Knowledge Graphs
Logical rules inside a knowledge graph (KG) are essential for reasoning, logical inference, and rule mining. However, existing works can only handle simple, i.e., chain-like and tree-like, rules and cannot capture KG's complex semantics, which can be better captured by graph-like rules. Besides, learning graph-like rul...
withdrawn-rejected-submissions
While all reviewers see a lot of value in the paper, it cannot be accepted in its current form: too many issues with clarity. A more focused paper, with clear task and contributions is recommended. The revisions and answers to reviewer questions are greatly appreciated and go a long way towards addressing these concern...
train
[ "bDS3pmVkcj", "DELmq5TmGa3", "vocyeqOhT9", "yXFvqmnNWMY", "0EwRDFp31UI", "1L20fKz9b1E", "pHLDv1Et_yo", "gEVlglgEjUf", "PjoMKkOKQ-N", "bt7aBmeLx5q" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your valuable comments! We very much appreciate it. Please allow us to respond to your questions one by one as follows.\n\n**Q1**: What is the difference between logic rules and logical queries? The paper abuses terms like “logical rules”, “logical queries”, “logical rules for query”, making it extre...
[ -1, -1, -1, -1, -1, -1, 5, 4, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 4, 2 ]
[ "PjoMKkOKQ-N", "bDS3pmVkcj", "gEVlglgEjUf", "pHLDv1Et_yo", "iclr_2021_AwPGPgExiYA", "bt7aBmeLx5q", "iclr_2021_AwPGPgExiYA", "iclr_2021_AwPGPgExiYA", "iclr_2021_AwPGPgExiYA", "iclr_2021_AwPGPgExiYA" ]
iclr_2021_rVdLv-uzYup
Joint Perception and Control as Inference with an Object-based Implementation
Existing model-based reinforcement learning methods often study perception modeling and decision making separately. We introduce joint Perception and Control as Inference (PCI), a general framework to combine perception and control for partially observable environments through Bayesian inference. Based on the fact that...
withdrawn-rejected-submissions
This paper introduces an object perception and control method for RL, derived from a control-as-inference formulation within a POMDP. The paper provides a theoretical derivation and experiments where the proposed joint-inference approach outperforms baselines. The discussion focussed on understanding the paper's cont...
train
[ "w9Y6yh2M45W", "-ffIWS3Y-wZ", "0OQyuNI9p6I", "uVP8yVmSA-1", "eOSjWmNY73A", "s6wC0puDgAM", "CtBDtNP0bA", "Slp-a1OvBfx", "2BXK4NYh3zy" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes an extension of the RL as Inference framework, and demonstrates how to use it to express an object-centric RL model and train it on simple environments. It appears to be a combination of NEM [1] with a simple TD-learning objective on top. Results are a bit hard to interpret but seem promising.\...
[ 4, -1, -1, -1, -1, -1, 4, 5, 4 ]
[ 4, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "iclr_2021_rVdLv-uzYup", "Slp-a1OvBfx", "CtBDtNP0bA", "w9Y6yh2M45W", "2BXK4NYh3zy", "iclr_2021_rVdLv-uzYup", "iclr_2021_rVdLv-uzYup", "iclr_2021_rVdLv-uzYup", "iclr_2021_rVdLv-uzYup" ]
iclr_2021_C4-QQ1EHNcI
Expressive yet Tractable Bayesian Deep Learning via Subnetwork Inference
The Bayesian paradigm has the potential to solve some of the core issues in modern deep learning, such as poor calibration, data inefficiency, and catastrophic forgetting. However, scaling Bayesian inference to the high-dimensional parameter spaces of deep neural networks requires restrictive approximations. In this pa...
withdrawn-rejected-submissions
This paper proposes an approach to efficient Bayesian deep learning by applying Laplace approximations to sub-structures within a larger network architecture. In terms of strengths, scalable approximate Bayesian inference methods for deep learning models are an important and timely topic. The paper includes an extensive...
train
[ "rqcMhCHK7aw", "s2PYE0iJSN", "wJpLDFMTPCX", "DLL2ZuzZ21e", "uOb_FDh4Qhz", "KBMXrxa8J7", "dnuTnUwRkin", "gErsWdf0h8c", "f-8Qd2qUUCi", "sjop_3593Hz", "xrYSjH7Nol5" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for their thorough feedback and constructive suggestions. We address your individual points below:\n\n**1-2**\n**Agreed; we rephrased and clarified in the updated text.**\n\n**3**\n\nThat is a good question: we could indeed take the square of each weight’s gradients to fill the diagonal of ou...
[ -1, -1, -1, -1, -1, -1, -1, 5, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 5, 4, 4, 4 ]
[ "f-8Qd2qUUCi", "iclr_2021_C4-QQ1EHNcI", "xrYSjH7Nol5", "gErsWdf0h8c", "rqcMhCHK7aw", "dnuTnUwRkin", "sjop_3593Hz", "iclr_2021_C4-QQ1EHNcI", "iclr_2021_C4-QQ1EHNcI", "iclr_2021_C4-QQ1EHNcI", "iclr_2021_C4-QQ1EHNcI" ]
iclr_2021_kdm4Lm9rgB
Monotonic Robust Policy Optimization with Model Discrepancy
State-of-the-art deep reinforcement learning (DRL) algorithms tend to overfit in some specific environments due to the lack of data diversity in training. To mitigate the model discrepancy between training and target (testing) environments, domain randomization (DR) can generate plenty of environments with a sufficient...
withdrawn-rejected-submissions
The paper tackles the problem of mitigating the effect of model discrepancies between the learning and deployment environments. In particular, the authors focus on the worst-case performance. The paper has both an empirical and a theoretical flavor. The algorithm they derive is backed by theoretical guarantees. ...
train
[ "RLBJHEg3WlB", "vBKfbHv0rFB", "JJfulIKMmvN", "xDWUPKnKNVM", "95Q-RhIDUCX", "prRsijY1qA", "vDp6q9EPwK2", "4VwK6O_grWf", "3Xvij8RNA9K", "YuboIfEDEgi", "qDmrEva94lo", "756hpFO4utv", "IUrCSVMQ0A0", "9HGj9QSmABI", "_0uPUvHZK9", "edlYnHc1j4a", "hVyquA5GeqP", "BnlbUBTtMPs" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Motivated by the domain transfer problem in RL where policies are trained on simulators that may not reflect perfectly the reality, this paper propose a new policy optimization algorithm named MRPO that is expected to be robust to changes in the environment's dynamic.\nThe formal setting and the notations are the ...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 4 ]
[ 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "iclr_2021_kdm4Lm9rgB", "hVyquA5GeqP", "prRsijY1qA", "756hpFO4utv", "IUrCSVMQ0A0", "BnlbUBTtMPs", "iclr_2021_kdm4Lm9rgB", "JJfulIKMmvN", "YuboIfEDEgi", "qDmrEva94lo", "edlYnHc1j4a", "RLBJHEg3WlB", "9HGj9QSmABI", "vBKfbHv0rFB", "4VwK6O_grWf", "iclr_2021_kdm4Lm9rgB", "iclr_2021_kdm4Lm9...
iclr_2021_Au1gNqq4brw
SEQUENCE-LEVEL FEATURES: HOW GRU AND LSTM CELLS CAPTURE N-GRAMS
Modern recurrent neural networks (RNN) such as Gated Recurrent Units (GRU) and Long Short-term Memory (LSTM) have demonstrated impressive results on tasks involving sequential data in practice. Despite continuous efforts on interpreting their behaviors, the exact mechanism underlying their successes in capturing sequen...
withdrawn-rejected-submissions
The authors demonstrated that vanilla RNNs, GRUs, and LSTMs compute at each timestep a hidden state that is the sum of the current input and a weighted sum of the previous hidden states (where the weights can be either unit or complicated functions), when the sigmoid and tanh functions are replaced by their second-order Taylor series...
train
[ "yo5gibcg96L", "vKVbAVrJZNI", "GaHpdibtLq_", "7JMqLGNcJos", "XJ0_sT_KGph", "S-5qmFecc9", "cKMBeepjSDc", "cYn_EEOmaMv", "2y9f45jk_9v", "JGdZm3Mw_Ba" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes to linearize GRU and LSTM cells (as error terms should be negligible when inputs are small in magnitude). Putting these linearized, or, really, affine, RNN cells together into a single-layer sequence processor, thanks to the affine-ness, we can decompose the score that is obtained by taking dot...
[ 4, 4, -1, -1, -1, -1, -1, 6, 3, 5 ]
[ 4, 2, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "iclr_2021_Au1gNqq4brw", "iclr_2021_Au1gNqq4brw", "yo5gibcg96L", "vKVbAVrJZNI", "JGdZm3Mw_Ba", "2y9f45jk_9v", "cYn_EEOmaMv", "iclr_2021_Au1gNqq4brw", "iclr_2021_Au1gNqq4brw", "iclr_2021_Au1gNqq4brw" ]
iclr_2021_LtgEkhLScK3
Probabilistic Mixture-of-Experts for Efficient Deep Reinforcement Learning
Deep reinforcement learning (DRL) has successfully solved various problems recently, typically with a unimodal policy representation. However, grasping the decomposable and hierarchical structures within a complex task can be essential for further improving its learning efficiency and performance, which may lead to a m...
withdrawn-rejected-submissions
The paper studies mixture of expert policies for reinforcement learning agents, focusing on the problem of policy gradient estimation. The paper proposes a new way to compute the gradient, apply it to two reinforcement learning algorithms, PPO and SAC, and demonstrate it in continuous MuJoCo environments, showing resul...
train
[ "pA0Ld67ElCU", "zkxcs32v88", "W2C56teTTcZ", "UUwkB65LIi", "8tfGHGY5v8F", "XJlSSILJ9Mh", "y_tvFsu4CFj", "GtnQ1o6Mx4c" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper studies the problem of differentiating through the policy return when the policy is a Gaussian mixture model. The main contribution of the paper is a heuristic approach for computing this gradient. Having defined the policy update, the authors integrate it to two RL algorithms: PPO and SAC. The experimen...
[ 4, -1, -1, -1, -1, 6, 3, 6 ]
[ 4, -1, -1, -1, -1, 4, 5, 4 ]
[ "iclr_2021_LtgEkhLScK3", "pA0Ld67ElCU", "GtnQ1o6Mx4c", "y_tvFsu4CFj", "XJlSSILJ9Mh", "iclr_2021_LtgEkhLScK3", "iclr_2021_LtgEkhLScK3", "iclr_2021_LtgEkhLScK3" ]
iclr_2021_tkra4vFiFq
GINN: Fast GPU-TEE Based Integrity for Neural Network Training
Machine learning models based on Deep Neural Networks (DNNs) are increasingly being deployed in a wide range of applications ranging from self-driving cars to Covid-19 diagnostics. The computational power necessary to learn a DNN is non-trivial. So, as a result, cloud environments with dedicated hardware support emerge...
withdrawn-rejected-submissions
While all reviewers agree the problem of TEEs for model training is well motivated, the reviewers remain divided on whether the concept of randomly selecting computations to verify has sufficient novelty, and whether the proposed gradient clipping method is well motivated.
train
[ "EM-3XJqchqY", "tpjTJ4yKgR1", "gZ5wPGvR9CR", "82np5qHFwya", "bgF_tj39Vj-", "spu5qe-Emg2", "wkTlp0FQB4N", "BmaRCq7TxrS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "It seems that the paper is focusing on the privacy preserving training of deep neural networks by developing a Learning-as-a-Service framework. It assumes that all resources may be penetrated by adversaries except the TEE. In this paper, the authors leverage random verification to detect the attacks and shows how ...
[ 5, 7, -1, -1, -1, -1, 3, 6 ]
[ 3, 4, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2021_tkra4vFiFq", "iclr_2021_tkra4vFiFq", "wkTlp0FQB4N", "EM-3XJqchqY", "BmaRCq7TxrS", "tpjTJ4yKgR1", "iclr_2021_tkra4vFiFq", "iclr_2021_tkra4vFiFq" ]
iclr_2021_e6hMkY6MFcU
WordsWorth Scores for Attacking CNNs and LSTMs for Text Classification
Black box attacks on traditional deep learning models trained for text classifica- tion target important words in a piece of text, in order to change model prediction. Current approaches towards highlighting important features are time consuming and require large number of model queries. We present a simple yet novel m...
withdrawn-rejected-submissions
The authors propose a method for attacking neural NLP models based on individual word importance ("WordsWorth" scores). This is an interesting, timely topic and there may be some interesting ideas here, but at present the paper suffers from poor presentation, which makes it difficult to discern the contribution. Presen...
train
[ "TSguygLK89s", "hh2h8N6Ao7p", "5BoUXutrHcv", "uKCxHjNEfr", "AzecD1-FePN", "mg4XZRnLvW" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a new and simple way to determine word importance for black box adversarial attacks on text classification models. Instead of using example-specific measures of importance like recent work (typically expensive to compute), the authors propose to feed individual words from the vocabulary to a tr...
[ 4, -1, -1, -1, 3, 2 ]
[ 4, -1, -1, -1, 3, 4 ]
[ "iclr_2021_e6hMkY6MFcU", "TSguygLK89s", "AzecD1-FePN", "mg4XZRnLvW", "iclr_2021_e6hMkY6MFcU", "iclr_2021_e6hMkY6MFcU" ]
iclr_2021_n1wPkibo2R
An Efficient Protocol for Distributed Column Subset Selection in the Entrywise ℓp Norm
We give a distributed protocol with nearly-optimal communication and number of rounds for Column Subset Selection with respect to the entrywise {ℓ1} norm (k-CSS1), and more generally, for the ℓp-norm with 1≤p<2. We study matrix factorization in ℓ1-norm loss, rather than the more standard Frobenius norm loss, because th...
withdrawn-rejected-submissions
Clarity: A well-written paper with a clear contribution statement; related work is up to date; concise algorithm description with corresponding theoretical guarantees. However, the presentation could still be improved. Significance: The polynomial running time guarantee makes the practicality of the proposed algorithm ma...
train
[ "oV8uZ3CDIL6", "QzrAuaxdvoX", "GuG5Gz8JupZ", "QpG8c7pLvbs", "Q0ViRejXKA6", "gdFJWIC6U8L" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "SUMMARY:\n\nThe paper considers the column subset selection (CSS) problem, which has received considerable attention in numerical linear algebra. It considers a distributed variant of CSS in the $\\ell_p$ norm, where $p \\in [1,2)$. Despite the attention this problem has received previously, it seems like this is ...
[ 7, -1, -1, -1, 6, 5 ]
[ 3, -1, -1, -1, 4, 4 ]
[ "iclr_2021_n1wPkibo2R", "oV8uZ3CDIL6", "Q0ViRejXKA6", "gdFJWIC6U8L", "iclr_2021_n1wPkibo2R", "iclr_2021_n1wPkibo2R" ]
iclr_2021_oweBPxtma_i
A self-explanatory method for the black box problem on discrimination part of CNN
Recently, for finding inherent causality implied in CNN, the black box problem of its discrimination part, which is composed of all fully connected layers of the CNN, has been studied by different scientific communities. Many methods were proposed, which can extract various interpretable models from the optimal discrim...
withdrawn-rejected-submissions
In this paper, the authors work on improving the interpretability of CNNs following distillation methods. The paper is written in such a convoluted way, and with so many changes in notation, that it is hard to understand what they are proposing and why. This is a major impediment to the paper going forward. But...
train
[ "KS26dwlMGTL", "mUOeYCmUCd", "SIkoZwyq8U", "uLSEy5LJyT5", "BAVYJ09zazC", "qZb6h-deJy" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you very much for your comments. We will revise the manuscript based on your suggestions. Some responses to the questions have been shown in the following.\n\n(1)\tI found this paper very difficult to follow as it has many grammatical and syntactic errors.\n\nResponse:Thank you for your suggestions. We will...
[ -1, -1, -1, 3, 3, 5 ]
[ -1, -1, -1, 4, 2, 3 ]
[ "BAVYJ09zazC", "qZb6h-deJy", "uLSEy5LJyT5", "iclr_2021_oweBPxtma_i", "iclr_2021_oweBPxtma_i", "iclr_2021_oweBPxtma_i" ]
iclr_2021_W0MKrbVOxtd
One Vertex Attack on Graph Neural Networks-based Spatiotemporal Forecasting
Spatiotemporal forecasting plays an essential role in intelligent transportation systems (ITS) and numerous applications, such as route planning, navigation, and automatic driving. Deep Spatiotemporal Graph Neural Networks, which capture both spatial and temporal patterns, have achieved great success in traffic forecas...
withdrawn-rejected-submissions
This paper proposes a one-vertex attack on GNNs, applied to spatiotemporal forecasting. The paper could be improved with respect to novelty, incorporation of graph topology, and rigor of analysis.
train
[ "FLG2OlbzIH", "30HgJCV3TAf", "RsTDBYVRrNj", "5-NnABP7YSf", "mw8epdVSR-", "JRbOjSvzn6-", "v5EWqdXEz-Y", "-Fks0F6-Ja6", "eTFEFlf_P2q" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper studies the problem of attacking graph neural networks for spatio-temporal prediction problems (e.g., traffic speed prediction). The input of the problem is a spatio-temporal sequence represented as graphs at time t-N+1 to t, where a graph neural network is trained to predict the graph sequence for time...
[ 4, -1, -1, -1, -1, -1, 4, 8, 4 ]
[ 3, -1, -1, -1, -1, -1, 5, 3, 3 ]
[ "iclr_2021_W0MKrbVOxtd", "iclr_2021_W0MKrbVOxtd", "FLG2OlbzIH", "v5EWqdXEz-Y", "-Fks0F6-Ja6", "eTFEFlf_P2q", "iclr_2021_W0MKrbVOxtd", "iclr_2021_W0MKrbVOxtd", "iclr_2021_W0MKrbVOxtd" ]
iclr_2021_gtwVBChN8td
Deep Reinforcement Learning With Adaptive Combined Critics
The overestimation problem has long been popular in deep value learning, because function approximation errors may lead to amplified value estimates and suboptimal policies. There have been several methods to deal with the overestimation problem, however, further problems may be induced, for example, the underestimatio...
withdrawn-rejected-submissions
The reviewers were in agreement that the paper is below the bar for acceptance, and the authors did not provide a response to reviewer concerns.
train
[ "M7tq491Xa6", "btT3XtCro3O", "j75GHmN48MX", "BAjbhfwUVP" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary: \n\nThe paper proposes a method to avoid the overestimation bias. They modify the TD3 algorithm such that, instead of regressing the two Q-values towards the minimum of both Q-value functions, the regression target is a convex combination of both, determined by a parameter lambda which is updated on a slo...
[ 3, 3, 5, 3 ]
[ 4, 4, 4, 5 ]
[ "iclr_2021_gtwVBChN8td", "iclr_2021_gtwVBChN8td", "iclr_2021_gtwVBChN8td", "iclr_2021_gtwVBChN8td" ]
iclr_2021_sojnduJtbfQ
Improving Hierarchical Adversarial Robustness of Deep Neural Networks
Do all adversarial examples have the same consequences? An autonomous driving system misclassifying a pedestrian as a car may induce a far more dangerous --and even potentially lethal-- behavior than, for instance, a car as a bus. In order to better tackle this important problematic, we introduce the concept of hierarc...
withdrawn-rejected-submissions
This work targets an important problem: susceptibility of ML models to adversarial perturbations that make them completely misclassify an input, as opposed to "just" fail to get the right fine-grained class while getting the correct coarse-grained one. This natural question did not receive enough attention so far, so h...
train
[ "u4YmQnc_I1S", "h6eIfKjZsti", "ABKYDmxosSC", "mXWP9Zms1J0", "sfxHjpAm1zp", "R7GYM6ux5Q3", "6mCk8Jr9bY9", "eI6KKwmwqyw" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors discuss a new notion of adversarial robustness, specifically, robustness to hierarchical adversarial examples. This is motivated by the idea that some types of misclassifications (e.g. mistaking one type of dog for another) may be less harmful than others (e.g. mistaking a dog for a truck); thus, adver...
[ 4, 5, -1, -1, -1, -1, 5, 4 ]
[ 4, 4, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2021_sojnduJtbfQ", "iclr_2021_sojnduJtbfQ", "eI6KKwmwqyw", "h6eIfKjZsti", "u4YmQnc_I1S", "6mCk8Jr9bY9", "iclr_2021_sojnduJtbfQ", "iclr_2021_sojnduJtbfQ" ]
iclr_2021_B8fp0LVMHa
EMaQ: Expected-Max Q-Learning Operator for Simple Yet Effective Offline and Online RL
Off-policy reinforcement learning (RL) holds the promise of sample-efficient learning of decision-making policies by leveraging past experience. However, in the offline RL setting -- where a fixed collection of interactions are provided and no further interactions are allowed -- it has been shown that standard off-poli...
withdrawn-rejected-submissions
**Overview** The paper provides a simplified offline RL algorithm based on BCQ. It analyzes the algorithms using a sampling-based maximization of the Q function over a behavior policy for both Bellman targets and for policy execution -- the EMaQ. Based on this, the paper then proposes to use more expressive autoregress...
train
[ "MatXbKHtTOH", "XksEr41LfsJ", "-q4Fs20BITL", "VvSydpKnS2i", "LbbBdg2VGiq", "ssiU-56GXyD", "e42TgPFYCYG", "R05F5wQpODG", "8j_Q6QVUyt", "yisTKoGdVXS", "pDfhUCJhXNp" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a simple offline RL algorithm based on Q-learning that instead of computing the exact maximum over actions, takes the sample maximum when sampling from an learned estimate of the behavior policy. Experimental results are quite promising (competitive with previous algorithms in both offline and o...
[ 6, -1, -1, -1, -1, -1, -1, -1, 4, 6, 6 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "iclr_2021_B8fp0LVMHa", "iclr_2021_B8fp0LVMHa", "8j_Q6QVUyt", "-q4Fs20BITL", "MatXbKHtTOH", "LbbBdg2VGiq", "yisTKoGdVXS", "pDfhUCJhXNp", "iclr_2021_B8fp0LVMHa", "iclr_2021_B8fp0LVMHa", "iclr_2021_B8fp0LVMHa" ]
iclr_2021_xYJpCgSZff
Counterfactual Thinking for Long-tailed Information Extraction
Information Extraction (IE) aims to extract structured information from unstructured texts. However, in practice, the long-tailed and imbalanced data may lead to severe bias issues for deep learning models, due to very few training instances available for the tail classes. Existing works are mainly from computer vision...
withdrawn-rejected-submissions
This paper introduces an architecture based on structured causal model for long-tailed IE tasks. It incorporates the dependency tree structure of the sentence using a GCN for learning the representations. The key idea is to use counterfactual reasoning to help with the inference in attempt to reduce the impact of spuri...
train
[ "248WSsjNHfT", "CpXDU7BvLs", "hTxQTa7_rWW", "p7Hz5QBgNv5", "CPnLREjFzYV", "xDopfqwr6x5", "s8kku5lz8Pl", "OOUhRXuviqB", "fnOc03ACQbH", "yKyaqEqPCI8" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "#### Update after author response and other reviewers comments:\nI think after the addition of new evaluation (Appendix A.5) on the aspect of counter-factual and other comments made by reviewers I'll stick to my score. Also, I liked the idea of applying counter-factual to long-tailed distribution IE problem.\n\n##...
[ 7, 3, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ 3, 4, -1, -1, -1, -1, -1, -1, 4, 5 ]
[ "iclr_2021_xYJpCgSZff", "iclr_2021_xYJpCgSZff", "yKyaqEqPCI8", "fnOc03ACQbH", "248WSsjNHfT", "s8kku5lz8Pl", "OOUhRXuviqB", "CpXDU7BvLs", "iclr_2021_xYJpCgSZff", "iclr_2021_xYJpCgSZff" ]
iclr_2021_qSeqhriWKsn
Adaptive Single-Pass Stochastic Gradient Descent in Input Sparsity Time
We study sampling algorithms for variance reduction methods for stochastic optimization. Although stochastic gradient descent (SGD) is widely used for large scale machine learning, it sometimes experiences slow convergence rates due to the high variance from uniform sampling. In this paper, we introduce an algorithm th...
withdrawn-rejected-submissions
While this paper was received pretty well, especially after the revision, reviewers still find it borderline and request further revisions which we cannot check in this short review cycle. Therefore, we encourage the authors to improve the paper and resubmit to a future venue. In particular, please take into account th...
test
[ "gf0xxD0r-8v", "mVhau9M9HTD", "vN-R4jO-wnB", "-4vkkexhMdD", "PZfoir7S0pj", "H0mZFrhDhu", "S5faLhsgvQx", "B5sxqWhA4b" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary:\nThis paper develops an efficient streaming algorithm to approximate the optimal importance sampling weights for variance reduction in finite-sum SGD. The optimal weights are proportional to each sample's gradient norm; this work uses AMS-like moment estimation to sketch gradient norms which take the form...
[ 6, 6, -1, -1, -1, -1, 5, 6 ]
[ 3, 3, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2021_qSeqhriWKsn", "iclr_2021_qSeqhriWKsn", "S5faLhsgvQx", "mVhau9M9HTD", "B5sxqWhA4b", "gf0xxD0r-8v", "iclr_2021_qSeqhriWKsn", "iclr_2021_qSeqhriWKsn" ]
iclr_2021_8YFhXYe1Ps
Interpretability Through Invertibility: A Deep Convolutional Network With Ideal Counterfactuals And Isosurfaces
Current state of the art computer vision applications rely on highly complex models. Their interpretability is mostly limited to post-hoc methods which are not guaranteed to be faithful to the model. To elucidate a model’s decision, we present a novel interpretable model based on an invertible deep convolutional networ...
withdrawn-rejected-submissions
All the reviewers agree that the paper presents an interesting idea, and the main concern raised by the reviewers was the clarity of the paper. I believe that the authors have improved the presentation of the paper after rebuttal, however, I still believe that the paper would require another round of reviews before bei...
test
[ "qRc354yIWoL", "o7xomokGxe8", "AflV7AWzFQt", "xO8zcGzcp_6", "0ef8tjEhCNy", "_WsnMRbeLHB", "TPOe5A1-1d7", "qius7XfZWRf", "LPM9kMZqeaw", "2tX2E1p_wa", "bw-xsqOCWht", "d8oBJqnpCFB", "ZF17IvHJIus", "5sUGratera" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Update after revision\n------------------------------\nI thank the authors for their work on this paper. The second reading was more pleasant. I agree with the authors that performing a user-study is an important effort, that should be encouraged. I however still believe that, if not benefitial to the user, the co...
[ 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 6 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 4 ]
[ "iclr_2021_8YFhXYe1Ps", "iclr_2021_8YFhXYe1Ps", "xO8zcGzcp_6", "_WsnMRbeLHB", "5sUGratera", "qRc354yIWoL", "ZF17IvHJIus", "LPM9kMZqeaw", "d8oBJqnpCFB", "iclr_2021_8YFhXYe1Ps", "o7xomokGxe8", "iclr_2021_8YFhXYe1Ps", "iclr_2021_8YFhXYe1Ps", "iclr_2021_8YFhXYe1Ps" ]
iclr_2021_WMUSP41HQWS
DISE: Dynamic Integrator Selection to Minimize Forward Pass Time in Neural ODEs
Neural ordinary differential equations (Neural ODEs) are appreciated for their ability to significantly reduce the number of parameters when constructing a neural network. On the other hand, they are sometimes blamed for their long forward-pass inference time, which is incurred by solving integral problems. To improve ...
withdrawn-rejected-submissions
This paper proposes two methods to speed up the evaluation of neural ODEs: regularizing the ODE to be easier to integrate, and adaptively choosing which integrator to use. These two ideas are fundamentally sensible, but the execution of the current paper is lacking. In addition to writing and clarity issues, the main...
train
[ "JW4gt6KNPXp", "qCNnaF3rw1h", "51zE4GedZGU", "YowgDurLDkE", "L42tBOLkjPH", "9Jndi2v5eR", "fwuJVo7RTr", "rk1eBGUfVez", "ykmitw7FCRc", "VSdA3o9xXBQ", "OhM4cBhvn0r", "0UNeN4svrL" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for your comments. We uploaded a new version. Changes are highlighted in red. To minimize changes, we didn't touch minor points. We will revise them if accepted.", "Thanks for your comments. Following your suggestions, we revised the paper and supplementary material. Our changes are in red. To minimize chan...
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 4 ]
[ "0UNeN4svrL", "ykmitw7FCRc", "VSdA3o9xXBQ", "OhM4cBhvn0r", "0UNeN4svrL", "ykmitw7FCRc", "VSdA3o9xXBQ", "OhM4cBhvn0r", "iclr_2021_WMUSP41HQWS", "iclr_2021_WMUSP41HQWS", "iclr_2021_WMUSP41HQWS", "iclr_2021_WMUSP41HQWS" ]
iclr_2021_j39sWOYhfEg
The shape and simplicity biases of adversarially robust ImageNet-trained CNNs
Adversarial training has been the topic of dozens of studies and a leading method for defending against adversarial attacks. Yet, it remains largely unknown (a) how adversarially-robust ImageNet classifiers (R classifiers) generalize to out-of-distribution examples; and (b) how their generalization capability rel...
withdrawn-rejected-submissions
This work investigates the relationship between adversarial robustness and shape bias of neural networks. Reviewers pointed out that one of the primary questions being investigated "(a) how adversarially-robust ImageNet classifiers (R classifiers) generalize to out-of-distribution examples;" has already been a primary ...
train
[ "-N7MjzT0d5l", "u5f8UWw8_sp", "e5n9GBMKsIR", "sS25B20pJRy", "jCbBv102bqR", "WcHGHjAp0PR", "EefzCEv9uLv", "bmShvF0ekM3", "WnzTn8MZFmQ", "HbsheXNzW4c" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper takes a step further to understand the relationships between the adversarial trained CNNs (R-CNNs) and shape-based representation, and delve deeper into the R-CNNs via studying the hidden units. First, it justifies that the R-CNNs prefer shape cues based on random-shuffled, Stylized-ImageNet, and silhou...
[ 6, 5, -1, -1, -1, -1, -1, -1, 6, 3 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_j39sWOYhfEg", "iclr_2021_j39sWOYhfEg", "u5f8UWw8_sp", "WnzTn8MZFmQ", "-N7MjzT0d5l", "HbsheXNzW4c", "HbsheXNzW4c", "HbsheXNzW4c", "iclr_2021_j39sWOYhfEg", "iclr_2021_j39sWOYhfEg" ]
iclr_2021_MRQJmsNPp8E
Learning Representations by Contrasting Clusters While Bootstrapping Instances
Learning visual representations using large-scale unlabelled images is a holy grail for most of computer vision tasks. Recent contrastive learning methods have focused on encouraging the learned visual representations to be linearly separable among the individual items regardless of their semantic similarity; however, ...
withdrawn-rejected-submissions
The idea of combining instance-level contrastive loss and deep clustering is a promising direction in recent unsupervised/self-supervised visual representation learning studies. However, authors did a poor literature review and did not cite and compare with quite a few recent popular works exploring the similar directio...
train
[ "uvS57lzxegC", "0Eon7jgQdGg", "aXIwUgsuSro", "FMIhgTxDheU", "TyzwukNomWC", "-IIkldX3M69", "tlCRe3eNSp", "UdCemKywXzu", "2VF6Lo0EA7d" ]
[ "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear all reviewers.\n\nWe uploaded our newly updated document to answer the remained questions from the last comments.\n\nIn the new version, we added Section A.3 to discuss the impact of the choice of K. \nWe include exceptional cases to consider the relationship between the selected K and the underlining label s...
[ -1, -1, -1, -1, -1, -1, 4, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, 5, 3, 3 ]
[ "iclr_2021_MRQJmsNPp8E", "UdCemKywXzu", "2VF6Lo0EA7d", "2VF6Lo0EA7d", "tlCRe3eNSp", "iclr_2021_MRQJmsNPp8E", "iclr_2021_MRQJmsNPp8E", "iclr_2021_MRQJmsNPp8E", "iclr_2021_MRQJmsNPp8E" ]
iclr_2021_OLOr1K5zbDu
Triple-Search: Differentiable Joint-Search of Networks, Precision, and Accelerators
The record-breaking performance and prohibitive complexity of deep neural networks (DNNs) have ignited a substantial need for customized DNN accelerators which have the potential to boost DNN acceleration efficiency by orders-of-magnitude. While it has been recognized that maximizing DNNs' acceleration efficiency requi...
withdrawn-rejected-submissions
This paper introduces a methodology for jointly optimizing neural network architecture, quantization policy, and hardware architecture. There are two key ideas: - Heterogeneous sampling strategy to tackle the dilemma between exploding memory cost and biased search. - Integrates a differentiable hardware search engine ...
train
[ "vM161gb9iIl", "qPDCeOtWv7", "8Pe8Gz4yW0D", "kjQngL3oJQj", "5S5PfOg8C6Z", "Hpd9Hi28cmt", "hiEyyPX2e4c", "xDN_iespm8w", "1STHmq12bBY", "WbuZl3hgUe", "uZYcBOsX7-_", "0SAn6hrto4V", "c1KXsnwtKCT", "YUDwVwmxLJ" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for the valuable suggestions and we believe the discussions will strengthen our work. We will discuss the two points in the final version.", "Please add the discussion of path and the comparison with prior work on path to the paper.\n\nPlease also consider adding a few sentences talking about the general ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 5, 3, 4 ]
[ "qPDCeOtWv7", "8Pe8Gz4yW0D", "kjQngL3oJQj", "WbuZl3hgUe", "uZYcBOsX7-_", "0SAn6hrto4V", "0SAn6hrto4V", "YUDwVwmxLJ", "YUDwVwmxLJ", "c1KXsnwtKCT", "iclr_2021_OLOr1K5zbDu", "iclr_2021_OLOr1K5zbDu", "iclr_2021_OLOr1K5zbDu", "iclr_2021_OLOr1K5zbDu" ]
iclr_2021_zg4GtrVQAKo
Generative Adversarial User Privacy in Lossy Single-Server Information Retrieval
We consider the problem of information retrieval from a dataset of files stored on a single server under both a user distortion and a user privacy constraint. Specifically, a user requesting a file from the dataset should be able to reconstruct the requested file with a prescribed distortion, and in addition, the ident...
withdrawn-rejected-submissions
The paper got mixed ratings. However, keeping in mind the low confidence of some of the reviewers, the paper needed an additional look. The AC himself went over the paper. The paper presents an interesting formalism for private information retrieval. As reviewers have pointed out the formalism is based on several exist...
train
[ "MwmlqJp1LSG", "fBPckeB-fOE", "7KP8E83F5IF", "6WPXppKLl47", "7VPpHlVvVN", "RcK-eyPSgMu" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper considers and formulates a generalized version of the private information retrieval (PIR) problem, where a user aims to retrieve one of $M$ files from a dataset, but wants to keep the index $M$ private. Unlike the basic version studied in prior works, the problem formulated in the paper allows non-zero ...
[ 5, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, 1, 3 ]
[ "iclr_2021_zg4GtrVQAKo", "7VPpHlVvVN", "RcK-eyPSgMu", "MwmlqJp1LSG", "iclr_2021_zg4GtrVQAKo", "iclr_2021_zg4GtrVQAKo" ]
iclr_2021_wUUKCAmBx6q
Flow Neural Network for Traffic Flow Modelling in IP Networks
This paper presents and investigates a novel and timely application domain for deep learning: sub-second traffic flow modelling in IP networks. Traffic flows are the most fundamental components in an IP based networking system. The accurate modelling of the generative patterns of these flows is crucial for many practic...
withdrawn-rejected-submissions
The revised paper is a solid improvement. However, all reviewers and I find that there are still a number of issues that prevent the paper from being acceptable at the current stage. For example, some important parts are still unclear, especially the definition of STI effect. The observation of STI effect requires m...
train
[ "8xJC38u5rOo", "VQfqYj9gPCc", "4LPO5nWkRng", "ikClxc9IUX", "l4dt9vS_0oi" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary: \n- The paper claims to discover a universal “spatio-temporal induction (STI) effect” in network traffic flows, and developed a model FlowNN to learn representations of flow-structure data. However, the STI effect was not clearly explained and the problem is not well formulated, making it hard for readers...
[ 2, -1, 4, 3, 4 ]
[ 4, -1, 4, 2, 3 ]
[ "iclr_2021_wUUKCAmBx6q", "iclr_2021_wUUKCAmBx6q", "iclr_2021_wUUKCAmBx6q", "iclr_2021_wUUKCAmBx6q", "iclr_2021_wUUKCAmBx6q" ]
iclr_2021_E3UZoJKHxuk
Latent Causal Invariant Model
Current supervised learning can learn spurious correlation during the data-fitting process, imposing issues regarding interpretability, out-of-distribution (OOD) generalization, and robustness. To avoid spurious correlation, we propose a \textbf{La}tent \textbf{C}ausal \textbf{I}nvariance \textbf{M}odel (LaCIM) which p...
withdrawn-rejected-submissions
This work introduces a method for supervised learning that takes a data-generating process into account. While the paper proposes an interesting approach to learning a causally invariant model, the reviewers had several concerns about the proposed method. I thank the authors for having the paper revised, addressing the...
train
[ "j-2ZzdvD-Hk", "ehMiJJCn6W2", "1j_iWXD_Y34", "E-aCjgSBLxy", "8IiSoKQclJF", "mJaW3YLS83K", "FgG16oIxg_h", "puGLQvtLt2p", "bOjTh9FL_mr", "m4n8kfLHQGl", "aMdBiXX42P7", "be-W-K8dA4-" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "I would like to thank the authors for the interesting work they proposed. I tried to explain my concerns below and I am open to changing my score with their feedback. \n\n1) My main concern is whether the considered causal graph indeed captures the phenomenon that the authors are attempting to address. What I mean...
[ 4, 6, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6 ]
[ 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2021_E3UZoJKHxuk", "iclr_2021_E3UZoJKHxuk", "iclr_2021_E3UZoJKHxuk", "j-2ZzdvD-Hk", "j-2ZzdvD-Hk", "j-2ZzdvD-Hk", "aMdBiXX42P7", "ehMiJJCn6W2", "be-W-K8dA4-", "be-W-K8dA4-", "iclr_2021_E3UZoJKHxuk", "iclr_2021_E3UZoJKHxuk" ]
iclr_2021_bGZtz5-Cmkz
Learning Collision-free Latent Space for Bayesian Optimization
Learning and optimizing a blackbox function is a common task in Bayesian optimization and experimental design. In real-world scenarios (e.g., tuning hyper-parameters for deep learning models, synthesizing a protein sequence, etc.), these functions tend to be expensive to evaluate and often rely on high-dimensional inpu...
withdrawn-rejected-submissions
The reviewers liked the overall idea presented in this paper. Although the idea as well as relevant tooling for incorporating constraints in the latent space has been studied a lot in the past, the authors differentiate their work by applying it in a new interesting problem. At the same time, some confusions about rela...
test
[ "oe2kSO07fd_", "PfWBMWDEhUW", "YRZp1MZpyQ", "bIc06LC0t3d", "dwE8iZh4V1Y", "7WoqVjMwpjS", "z03CteKbat", "KngE3GoSDF3", "nvdK58LFQA8" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\nWe thank the reviewers for their detailed comments and valuable suggestions. We studied the reviews and discussions carefully and modified our paper accordingly. Our revision followed the same list of actions proposed in our rebuttal response and further feedback from the reviewers. \n\nNext, we summarize the (...
[ -1, -1, -1, -1, -1, 4, 5, 4, 3 ]
[ -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "iclr_2021_bGZtz5-Cmkz", "z03CteKbat", "nvdK58LFQA8", "KngE3GoSDF3", "7WoqVjMwpjS", "iclr_2021_bGZtz5-Cmkz", "iclr_2021_bGZtz5-Cmkz", "iclr_2021_bGZtz5-Cmkz", "iclr_2021_bGZtz5-Cmkz" ]
iclr_2021_lfJpQn3xPV-
Online Learning of Graph Neural Networks: When Can Data Be Permanently Deleted
Online learning of graph neural networks (GNNs) faces the challenges of distribution shift and ever growing and changing training data, when temporal graphs evolve over time. This makes it inefficient to train over the complete graph whenever new data arrives. Deleting old data at some point in time may be preferable t...
withdrawn-rejected-submissions
This is an empirical paper that proposed a few different settings for applying GNNs on temporal data, including what context window to use, cold-start vs warm-start, incremental training vs static. This paper also proposed and released a few more temporal graph datasets, which could be useful. The consensus assessmen...
train
[ "sp7RC6ngZ", "KBzNyM0rxV", "ChY1uIN9qe", "8C1qo8lF6OW", "BhmdttMeRJJ", "MNWzxywcNVF", "k9HtI_JnmPs", "ZYePZHk32L0" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your insightful comments. Below, we respond to the key concerns about the problem statement and contribution itself before we outline the differences to related works. \n\n - **Clarify problem statement**: The problem we address is about online learning on *global* graph dynamics, where new vertices...
[ -1, -1, -1, -1, 5, 5, 5, 3 ]
[ -1, -1, -1, -1, 4, 4, 4, 5 ]
[ "iclr_2021_lfJpQn3xPV-", "k9HtI_JnmPs", "sp7RC6ngZ", "sp7RC6ngZ", "iclr_2021_lfJpQn3xPV-", "iclr_2021_lfJpQn3xPV-", "iclr_2021_lfJpQn3xPV-", "iclr_2021_lfJpQn3xPV-" ]
iclr_2021_JE7a-YejzfN
Geometry matters: Exploring language examples at the decision boundary
A growing body of recent evidence has highlighted the limitations of natural language processing (NLP) datasets and classifiers. These include the presence of annotation artifacts in datasets, classifiers relying on shallow features like a single word (e.g., if a movie review has the word "romantic", the review tends...
withdrawn-rejected-submissions
This paper presents the method of using Fisher information matrix values to identify examples near the decision boundary for a model, and proposes to preferentially use these examples in evaluation. Pros: - Reviewers found this use of FIM values to be novel and interesting. - The paper presents fairly extensive resul...
train
[ "95hQmZUky3L", "AWNYTcwd9WE", "laDT10YALdY", "Uh6WChJAlBC", "ZpnW7LXgKrp", "48sCABvCzr", "wkZxWGjWoB", "oXlHSxBqM7F", "HohbPHXW1Zv" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes an analysis technique for studying the 'difficulty' of a pair of test dataset examples in NLP. The setup proposed by past work on Contrast sets and Counterfactual Examples (Gardner et al, 2020 and Kaushik et al 2020 respectively) is to manually construct two dataset examples (x,y) with differen...
[ 5, 3, -1, -1, -1, -1, -1, 4, 5 ]
[ 2, 4, -1, -1, -1, -1, -1, 5, 4 ]
[ "iclr_2021_JE7a-YejzfN", "iclr_2021_JE7a-YejzfN", "iclr_2021_JE7a-YejzfN", "oXlHSxBqM7F", "95hQmZUky3L", "AWNYTcwd9WE", "HohbPHXW1Zv", "iclr_2021_JE7a-YejzfN", "iclr_2021_JE7a-YejzfN" ]
iclr_2021_UiLl8yjh57
Deep Reinforcement Learning For Wireless Scheduling with Multiclass Services
In this paper, we investigate the problem of scheduling and resource allocation over a time varying set of clients with heterogeneous demands. This problem appears when service providers need to serve traffic generated by users with different classes of requirements. We thus have to allocate bandwidth resources over ti...
withdrawn-rejected-submissions
The reviewers mostly agree that this paper presents a new deep reinforcement learning-based approach to solving a challenging problem in the communications domain -- wireless scheduling. However, the main concern, expressed almost unanimously, is about the novelty of the ideas in the paper beyond the assembly of existi...
train
[ "TYXWyLwuJj", "9uAeDXOu9LE", "97gfb7iocd9", "L19-okRRs-_", "G8kdm6z_0dJ", "URedVfXVhdU", "t8fLi8HCjrn", "LYCmU7kg0VR", "F6U_XD_jIjg", "phqAnt6FLhf" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer" ]
[ "Paper Summary\nThis paper investigated the problem of scheduling and resource allocation for a time-varying set of clients with heterogeneous traffic and QoS requirements in wireless networks. It proposed to solve this problem with distributional based DDPG with Deep Sets, and conducted experiments showing perform...
[ 5, 3, -1, -1, -1, -1, -1, -1, 7, 7 ]
[ 3, 3, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2021_UiLl8yjh57", "iclr_2021_UiLl8yjh57", "9uAeDXOu9LE", "F6U_XD_jIjg", "TYXWyLwuJj", "LYCmU7kg0VR", "phqAnt6FLhf", "iclr_2021_UiLl8yjh57", "iclr_2021_UiLl8yjh57", "iclr_2021_UiLl8yjh57" ]
iclr_2021_ucEXZQncukK
Bayesian Online Meta-Learning
Neural networks are known to suffer from catastrophic forgetting when trained on sequential datasets. While there have been numerous attempts to solve this problem for large-scale supervised classification, little has been done to overcome catastrophic forgetting for few-shot classification problems. Few-shot meta-lear...
withdrawn-rejected-submissions
This paper proposes an online meta-learning algorithm. 3 out of 4 reviews were borderline. The main concern during the discussion was that it is unclear what kind of online learning this paper does. For instance, in theory, the online learner competes with the best solution in hindsight. This is a regret-minimizing poi...
train
[ "3WMQdw373gJ", "D1JVbT25D5e", "AJS_ImiO3JJ", "sA5veKYvO0k", "DwMLGk5fOJf", "x1n5uQ3sPL3", "OjGFXOFK3fn", "xVXQTFcnUQA", "HX2fMXbyFs", "shopG2TZhAF", "KNyBkhBIcnL", "uAXi7GzGN2L", "LR3D1oTMGuj", "mUBEp2Lh4fM", "F4hkof-z1jQ", "ut7aw84GaC", "k12Rk1IC6qe", "Dh8pp-rWFgf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper develops a semi-online Bayesian approach to meta-learning, where tasks arrive sequentially and learning within any task is performed in batch mode (hence my terminology semi-online). It suggests a sequential between-task Bayesian update, eq. 5, and proposed three approximations to aid computation. The ba...
[ 6, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2021_ucEXZQncukK", "iclr_2021_ucEXZQncukK", "x1n5uQ3sPL3", "iclr_2021_ucEXZQncukK", "iclr_2021_ucEXZQncukK", "OjGFXOFK3fn", "D1JVbT25D5e", "uAXi7GzGN2L", "LR3D1oTMGuj", "KNyBkhBIcnL", "Dh8pp-rWFgf", "k12Rk1IC6qe", "3WMQdw373gJ", "shopG2TZhAF", "xVXQTFcnUQA", "HX2fMXbyFs", "iclr...
iclr_2021_h8q8iZi-ks
Conditional Networks
In this work we tackle the problem of out-of-distribution generalization through conditional computation. Real-world applications often exhibit a larger distributional shift between training and test data than most datasets used in research. On the other hand, training data in such applications often comes with additio...
withdrawn-rejected-submissions
The paper proposes to address the out-of-distribution generalization problem by means of conditional computation in form of a feature modulating module. While the approach is interesting and brings a new take on how to perform feature modulation (although it initially felt too similar to Conditional Batch Normalization) s...
train
[ "Smw0RaLHgZz", "15cdMShD0oo", "3cBmT8eo-qR", "T90yCmj9Y5v", "tlUxw112pS6", "O1n1J--Yq-", "MYwlU1C4dI", "lE_MDmxKeB", "lR7IWwr8Q8j" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "### Summary:\nThis submission proposes an approach to modulate activations of general convolutional neural networks by means of an auxiliary network trained on additional metadata to a dataset. The specific goal is to improve out-of-distribution (OOD) generalisation. This *conditional network* approach is illustra...
[ 6, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ 4, -1, -1, -1, -1, -1, 5, 3, 4 ]
[ "iclr_2021_h8q8iZi-ks", "iclr_2021_h8q8iZi-ks", "MYwlU1C4dI", "Smw0RaLHgZz", "lE_MDmxKeB", "lR7IWwr8Q8j", "iclr_2021_h8q8iZi-ks", "iclr_2021_h8q8iZi-ks", "iclr_2021_h8q8iZi-ks" ]
iclr_2021_1z_Hg9oBCtY
MAS-GAN: Adversarial Calibration of Multi-Agent Market Simulators.
We look at the problem of how the simulation of a financial market should be configured so that it most accurately emulates the behavior of a real market. In particular, we address agent-based simulations of markets that are composed of many hundreds or thousands of trading agents. A solution to this problem is import...
withdrawn-rejected-submissions
This paper tackles an important problem of generating a synthetic data for stock market agents behavior. In particular, GAN is trained to distinguish stock market time series from synthetic ones. Then market agent parameters can be scored by running time series generated using these parameters through a GAN. This way o...
train
[ "I8oQ8PmjrYh", "yA7-glqzpfb", "Hjgs0mah_dl", "mOJw_gkCBZw", "_iRuDBBSLSW", "zwcHRSOgLSr", "Xq_hoacuwxg" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes using a GAN to learn a discriminator and then use the discriminator to tune a non-differentiable simulator. The authors use this idea in the context of stock market simulation.\n\nPros:\nThe idea of using the discriminator of a GAN to tune parameters of a simulator is interesting.\n\nCons:\nThe...
[ 5, -1, -1, -1, -1, 3, 7 ]
[ 5, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_1z_Hg9oBCtY", "zwcHRSOgLSr", "I8oQ8PmjrYh", "Xq_hoacuwxg", "iclr_2021_1z_Hg9oBCtY", "iclr_2021_1z_Hg9oBCtY", "iclr_2021_1z_Hg9oBCtY" ]
iclr_2021_ATgKbzY1UPh
On the Capability of CNNs to Generalize to Unseen Category-Viewpoint Combinations
Object recognition and viewpoint estimation lie at the heart of visual understanding. Recent works suggest that convolutional neural networks (CNNs) fail to generalize to category-viewpoint combinations not seen during training. However, it is unclear when and how such generalization may be possible. Does the number of...
withdrawn-rejected-submissions
This paper received 2 borderline accepts, 1 accept, and 1 reject. In general, there is broad agreement that this is solid experimental work and that the differences found between recognition and viewpoint estimation were interesting. The main issue brought up by the more negative reviewer is that some of the experime...
train
[ "AlEyxyIZq3O", "qRENWP6jypx", "iISAF7H46r4", "pV40P4W1kWq", "YwJScW9dJXD", "38jo2KRhFpw", "g5GHwqxcpOf", "12OdTHk-EvP", "oOxYAxoAdO-" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "Strength: This submission provides a novel perspective in understanding the relationship between jointly learned tasks. Recent works such as Standley et al. [1] empirically showed that shared-weight training behavior in a pairwise setting could vary substantially depending on the task. This submission tells us tha...
[ 6, 6, 4, 7, -1, -1, -1, -1, -1 ]
[ 3, 3, 4, 5, -1, -1, -1, -1, -1 ]
[ "iclr_2021_ATgKbzY1UPh", "iclr_2021_ATgKbzY1UPh", "iclr_2021_ATgKbzY1UPh", "iclr_2021_ATgKbzY1UPh", "iclr_2021_ATgKbzY1UPh", "iISAF7H46r4", "AlEyxyIZq3O", "qRENWP6jypx", "pV40P4W1kWq" ]
iclr_2021_nLktL9-M-C6
Collaborative Normalization for Unsupervised Domain Adaptation
Batch Normalization (BN) as an important component assists Deep Neural Networks achieving promising performance for extensive learning tasks by scaling distribution of feature representations within mini-batches. However, the application of BN suffers from performance degradation under the scenario of Unsupervised Doma...
withdrawn-rejected-submissions
Although all reviewers acknowledge that the paper has some merit, the work lacks novelty. The idea of using normalisation techniques to reduce the domain discrepancy in UDA is well established in the DA and DG community. The theoretical analysis is an interesting first step in providing some insights on this class o...
train
[ "CVFhpE2kTt", "oKFL93ahouJ", "e7CtdnMnnz1" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "**Summarize what the paper claims to contribute.**\n \nThe paper proposes a normalization technique called collaborative normalization, which can substitute batch normalization in any neural network and achieve better performance on domain adaptation tasks than the previous works. The proposed method introduces an...
[ 4, 6, 5 ]
[ 5, 4, 4 ]
[ "iclr_2021_nLktL9-M-C6", "iclr_2021_nLktL9-M-C6", "iclr_2021_nLktL9-M-C6" ]
iclr_2021__FXqMj7T0QQ
On Effective Parallelization of Monte Carlo Tree Search
Despite its groundbreaking success in Go and computer games, Monte Carlo Tree Search (MCTS) is computationally expensive as it requires a substantial number of rollouts to construct the search tree, which calls for effective parallelization. However, how to design effective parallel MCTS algorithms has not been systema...
withdrawn-rejected-submissions
As the reviewer confidence scores were 2.8 or lower, I made a full and detailed pass over the submission, as requested by the PCs. The paper studies how parallelization of MCTS affects its performance and provides an analysis of the excess regret - how much "error" we incur by parallelizing as opposed to ...
val
[ "t0yOSiSfWAR", "G9UsvbL2hHZ", "j-scN40--V", "s7VSAIletQ3", "mVsLsZxea4", "Kqy_i5KWysO", "KBeFbKS6zPK", "GHzT_fdzdE4", "a8yyWgk_Mer" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your constructive feedback.\n\n- Virtual loss: \n - We would like to clarify that we have considered the AlphaGo [1] version of virtual loss that adjusts both the value and the visit count. Specifically, in Section 4.3 we wrote “... or use virtual loss (Segal, 2010) to penalize action value and visi...
[ -1, -1, -1, -1, -1, 6, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, 3, 4, 3, 1 ]
[ "KBeFbKS6zPK", "Kqy_i5KWysO", "t0yOSiSfWAR", "GHzT_fdzdE4", "a8yyWgk_Mer", "iclr_2021__FXqMj7T0QQ", "iclr_2021__FXqMj7T0QQ", "iclr_2021__FXqMj7T0QQ", "iclr_2021__FXqMj7T0QQ" ]
iclr_2021_bLGZW0hIQpO
Temperature Regret Matching for Imperfect-Information Games
Counterfactual regret minimization (CFR) methods are effective for solving two player zero-sum extensive games with imperfect information. Regret matching (RM) plays a crucial role in CFR and its variants to approach Nash equilibrium. In this paper, we present Temperature Regret Matching (TRM), a novel RM algorithm th...
withdrawn-rejected-submissions
This paper considers an important problem, CounterFactual Regret (CFR) minimization, and proposes a new algorithm to solve it. Reviewers raised many questions and concerns that the authors chose not to answer. We can only recommend rejection.
val
[ "8tFrA4QL3vp", "FxQ6Bt4f007", "URsI7rz0694" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper presents a new algorithm for regret matching that can be dropped into existing CFR variants. The new algorithm is claimed to demonstrate quicker convergence than existing algorithms and be more efficient. Additionally, the authors present a new proof generalizing the CFR convergence proof. \n\nThese are...
[ 3, 2, 6 ]
[ 3, 5, 3 ]
[ "iclr_2021_bLGZW0hIQpO", "iclr_2021_bLGZW0hIQpO", "iclr_2021_bLGZW0hIQpO" ]
iclr_2021_sv2wC7Amb0s
Conditioning Trick for Training Stable GANs
In this paper we propose a conditioning trick, called difference departure from normality, applied on the generator network in response to instability issues during GAN training. We force the generator to get closer to the departure from normality function of real samples computed in the spectral domain of Schur decomp...
withdrawn-rejected-submissions
The paper proposes a trick for stabilizing GAN training and reports experiment results on spectrogram synthesis. All the reviewers rate the paper below the bar, citing various concerns, including a lack of clarity and unconvincing results. Several reviewers suggest conducting evaluations in the image domain as most of ...
train
[ "FuvF9sTr7bl", "GTdhcepgoiz", "Cg9LmW4eKOj", "W5KSxSDFc5u", "1GNsEKwRiwa", "R_gAct7ZAfm", "5NBZaIHiNqa", "2OgMXgIb1Tt" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the valuable comments which helped us to improve our paper. We address all the raised concerns in detail as the following.\n\nWe sincerely thank the reviewer for sharing these two valuable references about cycle-consistent GANs. To deal with other aspects of GANs and speech, we have inclu...
[ -1, -1, -1, -1, 4, 3, 5, 3 ]
[ -1, -1, -1, -1, 3, 5, 3, 2 ]
[ "R_gAct7ZAfm", "5NBZaIHiNqa", "1GNsEKwRiwa", "2OgMXgIb1Tt", "iclr_2021_sv2wC7Amb0s", "iclr_2021_sv2wC7Amb0s", "iclr_2021_sv2wC7Amb0s", "iclr_2021_sv2wC7Amb0s" ]
iclr_2021_I3xhgVtNC5t
Wasserstein Distributionally Robust Optimization: A Three-Player Game Framework
Wasserstein distributionally robust optimization (DRO) has recently received significant attention in machine learning due to its connection to generalization, robustness and regularization. Existing methods only consider a limited class of loss functions or apply to small values of robustness. In this paper, we presen...
withdrawn-rejected-submissions
The paper provides a reformulation of the distributionally robust optimization problem into a (difficult in general) transportation map problem. In the new reformulation, the authors provide strong convergence results albeit requiring strong conditions, such as solvability of the auxiliary problems in reasonable time o...
train
[ "hiR2jui3EzQ", "vCF5A5iFTXa", "Zk9_b1sRL44", "gHeF1ndV3Kj", "tNZvnsKMghn", "IKSj4kp_x1", "i0k_TvBgzM8", "Oizh6dLcaqh", "fg8HZGPWrjJ", "Byi2pqdGRuY", "FUdY7eA8wFj" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a \"three-player framework\" for distributionally robust optimization (DRO), where the authors considered lifting the constraint of the ambiguity set of using the Lagrange multiplier method. The authors propose two algorithms, one for convex loss and one for non-convex, based on exponentiated g...
[ 5, 5, -1, -1, -1, -1, -1, -1, 4, 5, 6 ]
[ 3, 4, -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "iclr_2021_I3xhgVtNC5t", "iclr_2021_I3xhgVtNC5t", "hiR2jui3EzQ", "iclr_2021_I3xhgVtNC5t", "vCF5A5iFTXa", "fg8HZGPWrjJ", "FUdY7eA8wFj", "Byi2pqdGRuY", "iclr_2021_I3xhgVtNC5t", "iclr_2021_I3xhgVtNC5t", "iclr_2021_I3xhgVtNC5t" ]
iclr_2021_PU35uLgRZkk
The Skill-Action Architecture: Learning Abstract Action Embeddings for Reinforcement Learning
The option framework, one of the most promising Hierarchical Reinforcement Learning (HRL) frameworks, is developed based on the Semi-Markov Decision Problem (SMDP) and employs a triple formulation of the option (i.e., an action policy, a termination probability, and an initiation set). These design choices, however, ...
withdrawn-rejected-submissions
This paper proposes the Skill-Action (SA) architecture, based on the insight that semi-MDPs in the option framework can be posed as an equivalent MDP. The paper presents interesting theoretical results and very promising empirical results. We thank the reviewers for their revisions, which provided more insights into th...
train
[ "0LNmBryU0oj", "hd4X5nYt6s", "fUZSb_gMS6_", "TLD4e4Zaf0X", "iux-PY7X1lN", "e-nmIcuGWFM", "QkAbwkxd3xd" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank reviewer1 very much for his feedback. Reviewer1 clearly read our paper very closely and has a deep understanding of SA. We thank him very much for fairly acknowledging our contributions and providing his very constructive feedback. Other than this response we have also provided a general response to commo...
[ -1, -1, -1, -1, 5, 4, 5 ]
[ -1, -1, -1, -1, 4, 4, 3 ]
[ "iux-PY7X1lN", "e-nmIcuGWFM", "QkAbwkxd3xd", "iclr_2021_PU35uLgRZkk", "iclr_2021_PU35uLgRZkk", "iclr_2021_PU35uLgRZkk", "iclr_2021_PU35uLgRZkk" ]
iclr_2021_fB2GZQajQ2b
Information-Theoretic Odometry Learning
In this paper, we propose a unified information-theoretic framework for odometry learning, a crucial component of many robotics and vision tasks such as navigation and virtual reality where 6-DOF poses are required in real time. We formulate this problem as optimizing a variational information bottleneck objective func...
withdrawn-rejected-submissions
This paper performs visual odometry using variational information bottleneck. It assumes video and pose observations and aims to find a latent state that is maximally predictive of the pose observations, while minimizing the mutual information between the image observations and the latent state. It approximates this co...
train
[ "7NbW1J0E4C5", "H7aTiQXMyii", "MxWmB-4rdsG", "CB4I8BMqZya", "_UTTvUmpn93", "NuQiihEwC1v", "ACyO7EjxuN0" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We would like to thank you for the instructive feedbacks and suggestions. We provide more clarifications and discussions on the concerns below:\n\nQ1: What is the problem being attacked and what is the contribution of this work?\n\nIn short, we target the specific supervised learning problem for relative camera po...
[ -1, -1, -1, -1, 6, 5, 5 ]
[ -1, -1, -1, -1, 3, 2, 3 ]
[ "ACyO7EjxuN0", "_UTTvUmpn93", "NuQiihEwC1v", "7NbW1J0E4C5", "iclr_2021_fB2GZQajQ2b", "iclr_2021_fB2GZQajQ2b", "iclr_2021_fB2GZQajQ2b" ]
iclr_2021_AhLeNin_5sh
GenAD: General Representations of Multivariate Time Series for Anomaly Detection
Anomaly Detection(AD) for multivariate time series is an active area in machine learning, with critical applications in Information Technology system management, Spacecraft Health monitoring, Multi-Robot Systems detection, etc.. However, due to complex correlations and various temporal patterns of large-scale multivari...
withdrawn-rejected-submissions
The paper proposes a model architecture for multiple correlated time series with applications in anomaly detection. The main idea is to use attention both along time, to capture trends, seasonality, etc., and along series types, to capture their correlation. The training loss attempts to reconstruct masked ranges of...
train
[ "9eNEHEm3dm", "WO35JrMgGo", "0FG4CVJG36t", "lkK6L-DCxMM", "NJoHv7SO7aG", "KnfPjDLrQqk" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear reviewer,\nWe appreciate the time and effort you put into this paper, Your reviews are really helpful, and we revised the paper carefully.\n\n1.\t\"In practice, the overall status of an entity is more concerned...\" This sentence is really hard to understand without special background, and we have modified th...
[ -1, -1, -1, 3, 5, 4 ]
[ -1, -1, -1, 4, 4, 3 ]
[ "lkK6L-DCxMM", "KnfPjDLrQqk", "NJoHv7SO7aG", "iclr_2021_AhLeNin_5sh", "iclr_2021_AhLeNin_5sh", "iclr_2021_AhLeNin_5sh" ]
iclr_2021_EyDgK7q5vwJ
Streamlining EM into Auto-Encoder Networks
We present a new deep neural network architecture, named EDGaM, for deep clustering. This architecture can seamlessly learn deep auto-encoders and capture common group features of complex inputs in the encoded latent space. The key idea is to introduce a differentiable Gaussian mixture neural network between an encoder...
withdrawn-rejected-submissions
This paper presents a method for unrolling the iterative expectation-maximizing steps in the EM algorithm for a Gaussian mixture model into layers in a neural network. Then, the proposed method is applied in the latent space of an autoencoder to allow deep clustering using autoencoder features. The reviewers raised con...
train
[ "0PFZT_x7oD4", "_s63oAVVlsI", "ZsUqcPUpcvo", "G5LwKipOZHW", "rpJfS4cSgJm", "6TW5WbE_Y0U", "1SZ-3-t-nBt", "NMMhB_3LDae", "-vCu94Xo8ik", "mamH6iQIIy0", "NlpXob7sqbS", "okmQPwtyJ0m" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a deep clustering method called EDGaM. Clustering algorithms often suffer from cluster collapse or sample-specific details overestimation. To balance both challenges, the authors propose a differentiable GMM network in the latent space between encoder and decoder. The network is designed with t...
[ 7, 6, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6 ]
[ 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ "iclr_2021_EyDgK7q5vwJ", "iclr_2021_EyDgK7q5vwJ", "okmQPwtyJ0m", "rpJfS4cSgJm", "NlpXob7sqbS", "0PFZT_x7oD4", "NMMhB_3LDae", "-vCu94Xo8ik", "_s63oAVVlsI", "G5LwKipOZHW", "iclr_2021_EyDgK7q5vwJ", "iclr_2021_EyDgK7q5vwJ" ]
iclr_2021_FMdjYY6H8-Z
RETHINKING LOCAL LOW RANK MATRIX DETECTION:A MULTIPLE-FILTER BASED NEURAL NETWORK FRAMEWORK
The matrix local low rank representation (MLLRR) is a critical dimension reduction technique widely used in recommendation systems, text mining and computer vision. In MLLRR, how to robustly identify the row and column indices that forma distinct low rank sub-matrix is a major challenge. In this work, we first organize...
withdrawn-rejected-submissions
We thank the authors for detailing their answers to the reviewers and uploading a new version of the paper with more details and experiments. While the experimental section has improved in the revision, the fact that the method proposed is an ad hoc sequence of 6 heuristic steps, not supported by theoretical justificat...
train
[ "VD2ePVrWRJC", "d8-S6rRkClG", "zf4XwvIelzS", "CyKwHQBPzNM", "y_ksGuDVmNo", "VJ-DILULEeN", "BG--JojE8dv", "34B4av0J-BG" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors propose a multi-filter based neural network framework for local low-rank matrix approximation problems. They consider three sub-problems, namely LLR-1C, LLR-1, and LLR-k. Based on extensive simulations, the authors claim that the proposed method is more general and outperforms previous methods on this problem....
[ 5, -1, -1, -1, -1, -1, 3, 4 ]
[ 5, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2021_FMdjYY6H8-Z", "VD2ePVrWRJC", "VD2ePVrWRJC", "VD2ePVrWRJC", "34B4av0J-BG", "BG--JojE8dv", "iclr_2021_FMdjYY6H8-Z", "iclr_2021_FMdjYY6H8-Z" ]
iclr_2021_-S7-RsPv78e
Optimizing Quantized Neural Networks in a Weak Curvature Manifold
Quantized Neural Networks (QNNs) have achieved an enormous step in improving computational efficiency, making it possible to deploy large models to mobile and miniaturized devices. In order to narrow the performance gap between low-precision and full-precision models, we introduce the natural gradient to train a ...
withdrawn-rejected-submissions
The reviewers unanimously raised concerns over the clarity and technical correctness of the theory and the ImageNet experiments during the first round. The authors submitted a highly revised version during the rebuttal which allayed concerns for multiple reviewers; however, all the reviewers raised the concern that the ...
train
[ "Y3FgroZCqnc", "2DLA56CUYx", "1UoDE2UnWAn", "-B54vlqbKhy", "e0A4onJct6B", "ovMSTgKz_4p", "1oE_8s0HRNX", "B-kFWW7Fod" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the comments, and would like to clarify several things to address the reviewer's concerns:\n\nQ1: The target of the method\n\nA1: The main goal of the QNN is 'In order to narrow the performance gap between low-precision and full-precision models' in the abstract. The proposed training alg...
[ -1, -1, -1, -1, 5, 3, 5, 3 ]
[ -1, -1, -1, -1, 4, 4, 3, 5 ]
[ "ovMSTgKz_4p", "B-kFWW7Fod", "e0A4onJct6B", "1oE_8s0HRNX", "iclr_2021_-S7-RsPv78e", "iclr_2021_-S7-RsPv78e", "iclr_2021_-S7-RsPv78e", "iclr_2021_-S7-RsPv78e" ]
iclr_2021_TTnPcO6kK5
A New Variant of Stochastic Heavy ball Optimization Method for Deep Learning
Stochastic momentum optimization methods, also known as stochastic heavy ball (SHB) methods, are one of the most popular optimization methods for deep learning. These methods can help accelerate stochastic gradient descent and dampen oscillations. In this paper we provide a new variant of the stochastic heavy ball meth...
withdrawn-rejected-submissions
The paper presents a new variant of the Stochastic Heavy Ball method with coordinate-wise stepsizes. They prove a regret upper bound in the online convex optimization setting and validate the algorithm on a few deep learning tasks. The reviewers found the paper severely lacking in many aspects. In particular, the formu...
val
[ "AA-18qAQO5Q", "Ra96wM-0oft", "CgT5xL57XO9" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Based on the idea of Euler’s method, this paper proposes an algorithm that adaptively adjusts the directional step sizes. The algorithm also incorporates Heavy Ball momentum with a tunable momentum parameter. A convergence analysis of the algorithm is provided in the case of a decaying learning rate. The proposed ...
[ 6, 3, 4 ]
[ 3, 4, 4 ]
[ "iclr_2021_TTnPcO6kK5", "iclr_2021_TTnPcO6kK5", "iclr_2021_TTnPcO6kK5" ]
iclr_2021_4pN0NjwSoPR
Differentiable Dynamic Quantization with Mixed Precision and Adaptive Resolution
Model quantization is to discretize weights and activations of a deep neural network (DNN). Unlike previous methods that manually defined the quantization hyperparameters such as precision (\ie bitwidth), dynamic range (\ie minimum and maximum discrete values) and stepsize (\ie interval between discrete values), ...
withdrawn-rejected-submissions
Two referees support accept and two indicate reject. Despite the authors' rebuttal, reviewers determined through subsequent private discussions that the paper was insufficient to satisfy the high standards of ICLR due to the lack of diverse evaluations on various models/datasets and increased computational overhead. E...
train
[ "n65NkqJSHMf", "a6NAqBlgMJ", "olORYqGA-4S", "UOYFI0WZc9M" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes an approach for quantization-aware training, named Differentiable Dynamic Quantization(DDQ) to automatically learn all the quantization parameters. DDQ represents a wide variety of quantizers by formulating quantization as matrix-vector product in a unified framework instead of using traditional...
[ 6, 4, 6, 5 ]
[ 4, 5, 5, 4 ]
[ "iclr_2021_4pN0NjwSoPR", "iclr_2021_4pN0NjwSoPR", "iclr_2021_4pN0NjwSoPR", "iclr_2021_4pN0NjwSoPR" ]
iclr_2021_l_LGi6xeNT9
The 3TConv: An Intrinsic Approach to Explainable 3D CNNs
Current deep learning architectures that make use of the 3D convolution (3DConv) achieve state-of-the-art results on action recognition benchmarks. However, the 3DConv does not easily lend itself to explainable model decisions. To this end we introduce a novel and intrinsic approach, whereby all the aspects of the 3DCo...
withdrawn-rejected-submissions
The reviewers had a number of concerns: not state of the art; recommend analysis and comparison with [1] Temporal Shift Module; writing needs to be improved; appreciate the motivation for the paper, but needs more extensive experimentation; need larger scene-related datasets. We hope you find the reviewers' comments he...
train
[ "liq2ooioFaB", "7Xo5X3FIYca", "A1sj5992Gz", "HzVPrjqMWn0", "JKwW8hjrSFL" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear reviewers, thank you for taking the time to read and critically comment on our paper. \nIt is clear to us that you do not consider the paper ready for ICLR. We shall take your comments and suggestions into consideration to develop it into a better article. \n\nWe do want to clarify and comment on a couple of ...
[ -1, 3, 6, 3, 5 ]
[ -1, 3, 4, 4, 4 ]
[ "iclr_2021_l_LGi6xeNT9", "iclr_2021_l_LGi6xeNT9", "iclr_2021_l_LGi6xeNT9", "iclr_2021_l_LGi6xeNT9", "iclr_2021_l_LGi6xeNT9" ]
iclr_2021_cgRzg1V9su
Inner Ensemble Networks: Average Ensemble as an Effective Regularizer
We introduce Inner Ensemble Networks (IENs) which reduce the variance within the neural network itself without an increase in the model complexity. IENs utilize ensemble parameters during the training phase to reduce the network variance. While in the testing phase, these parameters are removed without a change in the ...
withdrawn-rejected-submissions
[ "The paper proposes an inner ensemble method where the outputs of inner layers are replaced by an ensemble average of them during inference to reduce inference time and variance. The authors include experiment results showing performance improvement of their method and use the theoretical analysis of the dropout and...
train
[ "lMrSYsHxTjM", "YxOo_BCTGVI", "9Nn0EQlputy", "EN1ApQ4CUOg", "mXuS-2G3S5f", "X8S7GLLpZjg", "OqVxeTT3lnA", "0O4g-9ZpCjT", "5e6u6v1_u0P", "cmkCWeeIzG-", "wqNAldrtPG", "o6h7qB9rDdu", "M2IQzEejCQM" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors propose a drop-in replacement for CNN and FC layers that use multiple instances of the same layer and apply the average operation to those layers’ output. For inference, the weights are averaged themselves, so the params count in inference is the same as without the inner ensembles. The authors show th...
[ 7, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2021_cgRzg1V9su", "iclr_2021_cgRzg1V9su", "YxOo_BCTGVI", "lMrSYsHxTjM", "9Nn0EQlputy", "0O4g-9ZpCjT", "iclr_2021_cgRzg1V9su", "o6h7qB9rDdu", "YxOo_BCTGVI", "EN1ApQ4CUOg", "M2IQzEejCQM", "iclr_2021_cgRzg1V9su", "iclr_2021_cgRzg1V9su" ]
iclr_2021_hPDC6tBFNiV
Quantifying Uncertainty in Deep Spatiotemporal Forecasting
Quantifying uncertainty is critical to risk assessment and decision making in high stakes domains. However, prior works for deep neural network uncertainty estimation have mostly focused on point prediction. A systematic study of uncertainty quantification methods for spatiotemporal forecasting has been missing in the ...
withdrawn-rejected-submissions
This paper investigates methods for producing and evaluating interval forecasts rather than point forecasts. The authors focus on spatio-temporal forecasts whose interval accuracy is measured with the Mean Interval Score. Pros: Uncertainty quantification is an important topic that is often ignored in the ML literatur...
test
[ "M4E8RwxAGJ2", "HQ5XoheZLPj", "EGdF-k-_KxS", "viGpL9v1Q57", "NOM_Q3Hgfmx", "7m4FmjC6Bf6", "ufqsBNkspNA" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper proposes a comparison of deep learning (DL) uncertainty quantification (UQ) methods in spatiotemporal forecasting problems (specifically, in graph-based problems). The paper define first a metric that is used in statistics and econometrics for interval forecasts, mean interval score Then it reviews the ...
[ 5, 4, -1, -1, -1, -1, 4 ]
[ 4, 5, -1, -1, -1, -1, 5 ]
[ "iclr_2021_hPDC6tBFNiV", "iclr_2021_hPDC6tBFNiV", "viGpL9v1Q57", "ufqsBNkspNA", "HQ5XoheZLPj", "M4E8RwxAGJ2", "iclr_2021_hPDC6tBFNiV" ]
iclr_2021_WN_6sThEI_-
Recurrent Exploration Networks for Recommender Systems
Recurrent neural networks have proven effective in modeling sequential user feedbacks for recommender systems. However, they usually focus solely on item relevance and fail to effectively explore diverse items for users, therefore harming the system performance in the long run. To address this problem, we propose a new...
withdrawn-rejected-submissions
This paper overall received borderline negative scores. All the reviewers agree that the paper proposed an interesting approach to exploration for RNN-based recommender systems. However, there are concerns around the experiments as well as the theoretical contribution. Specifically, a few reviewers pointed out that the...
train
[ "6mW4AfvHE_t", "X0colzBq9tR", "6ccVEztE7fV", "nerF_lO03Id", "8QRm8kGvc4u", "x5rsg2r2lJ4", "uMFDZ1GpS3", "9GFHRuZDwxM", "GyyddDQRhmc", "DHI6ESc4Pjc", "pWa2-f5CcD" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank all reviewers for their constructive comments. They mention that our work is novel/interesting (R1, R3), that our theoretical analysis is sound (R1, R3, R4), and that our experiments show our methods’ effectiveness (R1). They also voice several concerns, which we address one by one below. We also revised ...
[ -1, -1, -1, -1, -1, -1, -1, 5, 5, 4, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 3 ]
[ "iclr_2021_WN_6sThEI_-", "GyyddDQRhmc", "GyyddDQRhmc", "DHI6ESc4Pjc", "pWa2-f5CcD", "pWa2-f5CcD", "9GFHRuZDwxM", "iclr_2021_WN_6sThEI_-", "iclr_2021_WN_6sThEI_-", "iclr_2021_WN_6sThEI_-", "iclr_2021_WN_6sThEI_-" ]
iclr_2021_bNdohBx9sPa
Unsupervised Domain Adaptation via Minimized Joint Error
Unsupervised domain adaptation transfers knowledge from learned source domain to a different (but related) target distribution, for which only few or no labeled data is available. Some researchers proposed upper bounds for the target error when transferring the knowledge, i.e.,Ben-David et al. (2010) established a theo...
withdrawn-rejected-submissions
This submission provides a new bound and derived method for unsupervised domain adaptation, based on adversarial training. The method is then extensively evaluated empirically. Pro: - the proposed method seems empirically successful Con: - I agree with one of the reviewers that the presented theoretical justificatio...
train
[ "OjdUx13b96k", "-W3Zko2XQEk", "LpmtnpHAXLz", "Y2cPzb3K3ne", "9o_N19BtQA", "oinXmVKVf0u", "fpq2CAjpMU2", "D18RO6vFGEc", "f4LGm4G8t0F", "ieIxTvIgyJD", "9MRam57qIrD" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "1. In practice, we can always add the validation set into the training set after we determine the hyper-parameters based on the performance on the validation set. However, in our experiments, we did not do that because we have to provide the comparisons under a fair condition.\n\n Besides, previous works like [...
[ -1, 4, -1, -1, -1, -1, -1, -1, 7, 6, 5 ]
[ -1, 5, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "LpmtnpHAXLz", "iclr_2021_bNdohBx9sPa", "oinXmVKVf0u", "9MRam57qIrD", "ieIxTvIgyJD", "-W3Zko2XQEk", "f4LGm4G8t0F", "iclr_2021_bNdohBx9sPa", "iclr_2021_bNdohBx9sPa", "iclr_2021_bNdohBx9sPa", "iclr_2021_bNdohBx9sPa" ]
iclr_2021_RCGBA1i5MF
The Unreasonable Effectiveness of the Class-reversed Sampling in Tail Sample Memorization
Long-tailed visual recognition poses significant challenges to traditional machine learning and emerging deep networks due to its inherent class imbalance. A common belief is that tail classes with few samples cannot exhibit enough regularity for pattern extraction. What makes things worse, the limited cardinality may ...
withdrawn-rejected-submissions
Reviewers were concerned about the technical novelty because the two-stage sampling strategy is similar to BBN and the decoupling of features and classifiers. The rebuttal addressed some concerns about the experiments, but the reviewers' major concerns remained.
val
[ "-yfYtRqzSD0", "zQmq228Rp-p", "qPvIn0IBhFe", "zavclq1dGo", "e2EP9L1cYV", "qm6B9vmgN7I", "elO45UFI83n", "HUGApVF1LpW", "foUdRP1THsH", "cZPpBIcE-z", "WPy87iOSt0", "gUbJQCHB9-O", "NnIOujd9KPq", "QTaAtD9vfOI", "JdhiY2TTBtZ", "tnNEjdnV8t", "jTomkdr_tN" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer" ]
[ "This paper investigates long-tailed visual recognition from a memorization-generalization point of view. A simple yet effective combinational sampling method is proposed to tackle long-tailed learning trade-off via overexposure of tail samples in the late stage of training. Specifically, switching instance-balanc...
[ 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 6 ]
[ 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3 ]
[ "iclr_2021_RCGBA1i5MF", "iclr_2021_RCGBA1i5MF", "jTomkdr_tN", "-yfYtRqzSD0", "tnNEjdnV8t", "zQmq228Rp-p", "iclr_2021_RCGBA1i5MF", "-yfYtRqzSD0", "zQmq228Rp-p", "jTomkdr_tN", "gUbJQCHB9-O", "NnIOujd9KPq", "tnNEjdnV8t", "JdhiY2TTBtZ", "iclr_2021_RCGBA1i5MF", "iclr_2021_RCGBA1i5MF", "ic...
iclr_2022_pz1euXohm4H
Target-Side Input Augmentation for Sequence to Sequence Generation
Autoregressive sequence generation, a prevalent task in machine learning and natural language processing, generates every target token conditioned on both a source input and previously generated target tokens. Previous data augmentation methods, which have been shown to be effective for the task, mainly enhance source ...
Accept (Poster)
This paper presents a method for target side data augmentation for sequence to sequence models. The authors of the paper use a relatively straightforward method to generate pseudo tokens that are used for enhanced training. The authors present results on dialog generation, MT and summarization where automatic metrics...
train
[ "qpWoKL5GeCN", "MarMdKXbTyP", "w1zv9OF-MdJ", "kQlLJQCHfyf", "GaysL8-UZRf", "nNLxxLiO6U5", "6NsMCjGV4_u", "dBaKilQX5_A", "bWkOIqfy2ET", "LivHes28G0C", "a1Z6Y8i3S2l", "e0NEOZp7AsE", "jt-VdJ7mzz", "Z5aR1oaB-J", "MdgpxDWVgum", "qNtQYzK_nFs", "JuTXQvQTSxN" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "This paper experiments with a target-side sequence-level data-augmentation scheme for sequence-to-sequence generation tasks. The primary contribution of this work is an algorithm that leverages model-outputs to construct pseudo-target-side tokens (and consequently pseudo-sequences) for augmentation. It is done by ...
[ 6, -1, 6, -1, -1, 8, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, -1, 4, -1, -1, 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2022_pz1euXohm4H", "kQlLJQCHfyf", "iclr_2022_pz1euXohm4H", "Z5aR1oaB-J", "JuTXQvQTSxN", "iclr_2022_pz1euXohm4H", "bWkOIqfy2ET", "iclr_2022_pz1euXohm4H", "LivHes28G0C", "a1Z6Y8i3S2l", "e0NEOZp7AsE", "jt-VdJ7mzz", "dBaKilQX5_A", "w1zv9OF-MdJ", "qNtQYzK_nFs", "qpWoKL5GeCN", "nNLxx...
iclr_2022_kNKFOXleuC
Anytime Dense Prediction with Confidence Adaptivity
Anytime inference requires a model to make a progression of predictions which might be halted at any time. Prior research on anytime visual recognition has mostly focused on image classification. We propose the first unified and end-to-end approach for anytime dense prediction. A cascade of "exits" is attached to the mo...
Accept (Poster)
This submission received 4 final ratings above the acceptance threshold: 6, 6, 6, 8. The reviewers mentioned limited novelty, but acknowledged the practical importance of this work, and particularly appreciated the thorough analysis provided by the authors. After a strong rebuttal, most of the remaining concerns have been addresse...
train
[ "fCehCT2kzUJ", "gzcduOpwtmT", "0gLBYQtoBm", "GCppqOhY3WS", "8m5bBAsi58I", "K_K7JTDoLO5", "8vQMpS32nhH", "x1Goe_F69w8", "GJ4ucR8aG_E", "xARxjj8svt", "S0-pPCvnC8", "1e3S4cPeAQT", "1bjnIWhExkM", "b9Mc0q4jd0I", "qu1IoQ4BhRL", "dVJCg2vWVIS", "3NKZZkDgKl1", "5KKy1ag0o2", "siUdwNOGTp", ...
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", ...
[ "This work focuses on anytime pixel-level recognition (e.g., semantic segmentation). They propose to add intermediate exits in the architecture for anytime inference. They also consider spatial confidence adaptivity in their network, where they only execute subsequent layers on a small set of non-confident pixels...
[ 6, -1, -1, -1, 6, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ 4, -1, -1, -1, 4, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2022_kNKFOXleuC", "8vQMpS32nhH", "K_K7JTDoLO5", "xARxjj8svt", "iclr_2022_kNKFOXleuC", "iclr_2022_kNKFOXleuC", "1e3S4cPeAQT", "siUdwNOGTp", "iclr_2022_kNKFOXleuC", "iclr_2022_kNKFOXleuC", "1bjnIWhExkM", "dVJCg2vWVIS", "qu1IoQ4BhRL", "iclr_2022_kNKFOXleuC", "JRf1WPov73l", "fCehCT2k...
iclr_2022_qyTBxTztIpQ
CrowdPlay: Crowdsourcing Human Demonstrations for Offline Learning
Crowdsourcing has been instrumental for driving AI advances that rely on large-scale data. At the same time, reinforcement learning has seen rapid progress through benchmark environments that strike a balance between tractability and real-world complexity, such as ALE and OpenAI Gym. In this paper, we aim to fill a ga...
Accept (Poster)
This paper studies the problem of how to collect demonstrations via crowdsourcing for imitation and offline learning. The paper received mixed reviews initially. The reviewers had difficulty understanding the empirical results, asked for some more ablations, and were a little unconvinced by the proposed usefulness of the co...
test
[ "nrv5LDN30su", "hQGCjA0efLY", "drc90BgMdqQ", "NZKSbtb-u7T", "inZNdzfKXQK", "QOiuogmDYQU", "WZFbYuPu_9u", "HcsK4zAeizL", "dpV6HmHBO8b", "v6EVwu3NMHe", "p5TQ0NjcMQf", "lvY5hVhdS3U", "ddDsnjBB0G_", "s2up88Oa4e4", "rNsxQ5DPeT", "xp0YsxMOzi5", "dVBV5mVYyx_", "KS5VKpSzlPB", "Amv_unomLm...
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "public", "official_...
[ " Thank you for your response. We were unable to include all baselines by the paper update deadline due to computational constraints, but some of the remaining experiments have now finished, and a handful are still running with results coming in soon. We have uploaded updated results on the link below, and will fil...
[ -1, -1, -1, -1, -1, -1, 8, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "hQGCjA0efLY", "NZKSbtb-u7T", "iclr_2022_qyTBxTztIpQ", "p5TQ0NjcMQf", "dpV6HmHBO8b", "HcsK4zAeizL", "iclr_2022_qyTBxTztIpQ", "Amv_unomLmF", "Amv_unomLmF", "iclr_2022_qyTBxTztIpQ", "rNsxQ5DPeT", "iclr_2022_qyTBxTztIpQ", "s9GLM4E5kOX", "Q4TWRIvE2j", "xp0YsxMOzi5", "dVBV5mVYyx_", "KS5VK...
iclr_2022_ZDaSIkWT-AP
Case-based reasoning for better generalization in textual reinforcement learning
Text-based games (TBG) have emerged as promising environments for driving research in grounded language understanding and studying problems like generalization and sample efficiency. Several deep reinforcement learning (RL) methods with varying architectures and learning schemes have been proposed for TBGs. However, th...
Accept (Poster)
This paper describes how to apply a combination of case-based reasoning and RL methods to improve the performance of agents in text-adventure games. The reviewers unanimously recommend acceptance. This work is both insightful and practical. This is a valuable contribution. Well done!
train
[ "saAdd8c9e4_", "lLt2FVJvJlr", "_b104DD7URz", "nsGrYH1ia33", "w_sFO4JOHdq", "w2Gt5vAaQ6p", "EAbZuBY5Tsg", "vqcohPwDVfl", "hp7_fVzi0hA", "euOAlQ66Iv5", "7elrjbem22b", "yP5b-CKqNqp", "6FH7diWZ6vd", "1hHcD6oRjpz", "Bo9uyH9nbRR" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "This paper describes how to apply a combination of case-based reasoning and RL methods to improve the performance of agents on text-adventure-game-type tasks. It introduces a GNN representation of state and a vector-quantized encoding scheme so that contextual information about successful actions from the past can be...
[ 8, -1, -1, 8, -1, -1, -1, 6, -1, 8, -1, -1, -1, -1, -1 ]
[ 4, -1, -1, 5, -1, -1, -1, 3, -1, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2022_ZDaSIkWT-AP", "Bo9uyH9nbRR", "saAdd8c9e4_", "iclr_2022_ZDaSIkWT-AP", "hp7_fVzi0hA", "nsGrYH1ia33", "saAdd8c9e4_", "iclr_2022_ZDaSIkWT-AP", "yP5b-CKqNqp", "iclr_2022_ZDaSIkWT-AP", "iclr_2022_ZDaSIkWT-AP", "euOAlQ66Iv5", "vqcohPwDVfl", "nsGrYH1ia33", "saAdd8c9e4_" ]
iclr_2022_gICys3ITSmj
The Close Relationship Between Contrastive Learning and Meta-Learning
Contrastive learning has recently taken off as a paradigm for learning from unlabeled data. In this paper, we discuss the close relationship between contrastive learning and meta-learning under a certain task distribution. We complement this observation by showing that established meta-learning methods, such as Prototy...
Accept (Poster)
This paper was borderline, based on the reviews. The paper points out an interesting connection (somewhat known but not in this specific version) and good experimental results. However, numerous reviewers raised concerns that the paper was lacking a comparison to prior work connecting unsupervised learning and meta-lea...
train
[ "KsZdUaucQG3", "M3vioQIqVOu", "G5nxL04y84R", "Nczb7C-FGy", "QpfFzpAp262", "myTmYcHWWU4", "8vTS2mAoqH1", "KZOKc5Bu0i", "ofiDCPMMA5_", "wuZ1-cuZ9U", "Q5lpT_tIzK3" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I have read the response from the authors. They have successfully addressed my concerns.\nI will keep my score and suggest weak acceptance.", " Thanks for your careful review and helpful suggestions. Below are detailed responses to each of your comments.\n\n- **\"the title of the paper is 'contrastive learning ...
[ -1, -1, -1, -1, -1, -1, -1, 6, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 4 ]
[ "QpfFzpAp262", "wuZ1-cuZ9U", "ofiDCPMMA5_", "8vTS2mAoqH1", "Q5lpT_tIzK3", "KZOKc5Bu0i", "iclr_2022_gICys3ITSmj", "iclr_2022_gICys3ITSmj", "iclr_2022_gICys3ITSmj", "iclr_2022_gICys3ITSmj", "iclr_2022_gICys3ITSmj" ]
iclr_2022_V0A5g83gdQ_
Tuformer: Data-driven Design of Transformers for Improved Generalization or Efficiency
Transformers are neural network architectures that achieve remarkable performance in many areas. However, the core component of Transformers, multi-head self-attention (MHSA), is mainly derived from heuristics, and the interactions across its components are not well understood. To address the problem, we first introduc...
Accept (Poster)
This paper presents a tensor diagram view of the multi-headed self-attention (MHSA) mechanism used in Transformer architectures, and by modifying the tensor diagram, introduces a strict generalization of MHSA called the Tucker-head self attention (THSA) mechanism. While there is some concern regarding the incremental n...
test
[ "0FwBzwXSqL2", "Bi-2bhqWs7W", "U4incwg9_dZ", "kzNqE1wAw1N", "YZiqu9zizgN", "NfRGTi3kog4", "fIN68QiYw9N", "hMH92blmlZv", "zW-ui3Zg3s_", "ltWsaivlDUw", "6KtuaslLCl8", "V_MjPwJB1Ah", "Jc4USnqWhy", "sjiFVMT3HS", "iFsRPk7uJL", "ZEAFOHWQ5Dm", "0Uxs1dutwf", "EmQhAbKXIqN", "nQOEGYD8G7A",...
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_r...
[ "The paper investigates the multi-head self-attention mechanism (MHSA) of transformer networks through the lens of tensor decompositions via tensor diagram notation. The authors propose an extension to MHSA inspired by the Tucker decomposition (termed THSA), analyze its expressive power, and demonstrate that it bel...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 5, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2022_V0A5g83gdQ_", "iFsRPk7uJL", "YZiqu9zizgN", "zW-ui3Zg3s_", "ZEAFOHWQ5Dm", "sjiFVMT3HS", "6KtuaslLCl8", "iclr_2022_V0A5g83gdQ_", "-5Y9M5nnOxU", "0FwBzwXSqL2", "ZEAFOHWQ5Dm", "iclr_2022_V0A5g83gdQ_", "sjiFVMT3HS", "Ti43MyDteDd", "V_MjPwJB1Ah", "0FwBzwXSqL2", "0FwBzwXSqL2", ...
iclr_2022_o0ehFykKVtr
Know Thyself: Transferable Visual Control Policies Through Robot-Awareness
Training visual control policies from scratch on a new robot typically requires generating large amounts of robot-specific data. How might we leverage data previously collected on another robot to reduce or even completely remove this need for robot-specific data? We propose a "robot-aware control" paradigm that achiev...
Accept (Poster)
This paper proposes a method to improve the transfer of visual control policies between robots. The method extends a visual foresight approach using a learned robot-agnostic world-dynamics model and a (potentially analytic) robot-specific robot-dynamics module. A key aspect of the method is to form a blocky mask over...
train
[ "CLi4_1cnqBW", "LeOZ0NtOMuV", "_TiOZrE9yL_", "qcQHH-KPznu" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a robot-aware RL-based approach to transfer learning from a robot to another. The approach factorizes the model into two models, a robot model and a world model. The method is evaluated against several baselines on simulated and real data. The paper claims that it permits zero-shot transfer ont...
[ 6, 5, 6, 8 ]
[ 3, 3, 4, 4 ]
[ "iclr_2022_o0ehFykKVtr", "iclr_2022_o0ehFykKVtr", "iclr_2022_o0ehFykKVtr", "iclr_2022_o0ehFykKVtr" ]
iclr_2022_bwq6O4Cwdl
How Does SimSiam Avoid Collapse Without Negative Samples? A Unified Understanding with Self-supervised Contrastive Learning
To avoid collapse in self-supervised learning (SSL), a contrastive loss is widely used but often requires a large number of negative samples. Without negative samples yet achieving competitive performance, a recent work~\citep{chen2021exploring} has attracted significant attention for providing a minimalist simple Siam...
Accept (Poster)
This paper provides an explanation of why contrastive learning methods like SimSiam avoid collapse without negative samples. As the authors claimed, this is indeed a timely work for understanding the recent success in self-supervised learning (SSL). The key idea in this submission is to decompose the gradient into a cente...
train
[ "3RJFzbpAV9g", "jwT70zgZSj", "OZaeDWpAOZ0", "IG46pUcjdBw", "_4FKedBR--J", "rIwxTUuILTd", "XXayT7d0c05", "sIuLRzdxP2", "-miO4WCobZv", "Z7eo8wmw5", "zCZhmjoIHg_", "DKcezbCPpQW", "6fozDBwg5j-", "P-Z5tJsUdSD", "St3pQczWH8P", "EXbra-qG1o", "aM2hsYdp_CR", "LoKcDVeONOh", "aFpbETXLXj", ...
[ "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", ...
[ " As the discussion approaches the end, we take the opportunity to express our genuine gratitude to all reviewers. All your comments are very constructive for improving our work. We also thank the AC(s) for their effort in helping evaluate our work. \n\nThank you all very much.", " We thank Reviewer q4XL for the ...
[ -1, -1, 6, -1, -1, -1, 6, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8 ]
[ -1, -1, 3, -1, -1, -1, 3, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2022_bwq6O4Cwdl", "IG46pUcjdBw", "iclr_2022_bwq6O4Cwdl", "St3pQczWH8P", "rIwxTUuILTd", "St3pQczWH8P", "iclr_2022_bwq6O4Cwdl", "Z7eo8wmw5", "iclr_2022_bwq6O4Cwdl", "-miO4WCobZv", "iclr_2022_bwq6O4Cwdl", "TiIht-bNskA", "-miO4WCobZv", "XXayT7d0c05", "OZaeDWpAOZ0", "2aKUtN3koko", "...
iclr_2022_68n2s9ZJWF8
Offline Reinforcement Learning with Implicit Q-Learning
Offline reinforcement learning requires reconciling two conflicting aims: learning a policy that improves over the behavior policy that collected the dataset, while at the same time minimizing the deviation from the behavior policy so as to avoid errors due to distributional shift. This tradeoff is critical, because mo...
Accept (Poster)
This paper proposes a new paradigm --- called in-sample Q learning --- to tackle offline reinforcement learning. Based on the novel idea of using expectile regression, the proposed algorithm enjoys stable performance by focusing on in-sample actions and avoiding querying the values of unseen actions. The empirical perf...
train
[ "_W9ynV0H1z7", "P6akjBx5auC", "zkf0jQV45d_", "fgLqNjUdAdc", "jlVGpPwwic", "OLV6nYaEiV", "hZxezQSHpp", "3DHaf1efX1e", "a-7fLBPBPl", "DC-b01IPVyg", "eMlPAzJOQj", "GJ8Qela-bS7", "VSrcNoBrmjM", "6Q9HwoNgw1", "KxwtmM7Jc3e", "jDLnYwIwqzx", "4RFcUE86Ajw", "j18PBOOfgpX", "O5sgeoZoz_d", ...
[ "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", ...
[ " \n> Your policy is only updated by advantage-weighted behavioral cloning. Why the policy improvement step is absorbed into learning V?\n\nThe structure of our algorithm differs from that of standard actor-critic methods (see discussion in Section 4.3). Whereas actor-critic methods alternate between updating the a...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 4 ]
[ "P6akjBx5auC", "fgLqNjUdAdc", "3DHaf1efX1e", "KxwtmM7Jc3e", "OLV6nYaEiV", "jDLnYwIwqzx", "3DHaf1efX1e", "4RFcUE86Ajw", "iclr_2022_68n2s9ZJWF8", "KxwtmM7Jc3e", "jDLnYwIwqzx", "VSrcNoBrmjM", "6Q9HwoNgw1", "4RFcUE86Ajw", "4PGOrl5QHF", "O5sgeoZoz_d", "a-7fLBPBPl", "iclr_2022_68n2s9ZJWF...
iclr_2022_ECvgmYVyeUz
Chaos is a Ladder: A New Theoretical Understanding of Contrastive Learning via Augmentation Overlap
Recently, contrastive learning has risen to be a promising approach for large-scale self-supervised learning. However, theoretical understanding of how it works is still unclear. In this paper, we propose a new guarantee on the downstream performance without resorting to the conditional independence assumption that is ...
Accept (Poster)
The paper under review provides a theoretical analysis for contrastive representation learning. The paper proposes a guarantee on the performance (specifically upper and lower bounds) without resorting to previously used conditional independence assumptions. Throughout, the theoretical results and assumptions are suppo...
train
[ "SjG10tHEnD4", "2GQpOzF4loJ", "VUq7VDDE7Si", "kzbdWhq48Mo", "lF2-08pdBoQ", "fGA4xtODbT5", "rdCs3XG597o", "IglWx5sqAI", "gG8iggnQkGj", "cnOpdteqDfC", "PPk3mQMKuWO", "AJVAIqrLdK", "AdrI1LyHNo", "b5QErMwUyHn", "fZM71sPLIH", "coPC4jnsScR", "3CzLVQqOHNe", "jjjRuTs8M1-", "uwUub5Ex-l", ...
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "public", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author"...
[ " \nWe thank Reviewer tWSB for appreciating our response. We will further elaborate our explanations to address your concerns point by point.\n\n---\n**Q1.** About Assumption 4.6.\n\n**Point a**\n\n> As far as I understand, Wang et al. does not assume alignment and uniformity as stated in the response, but proves i...
[ -1, -1, 6, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8 ]
[ -1, -1, 4, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "kzbdWhq48Mo", "kzbdWhq48Mo", "iclr_2022_ECvgmYVyeUz", "PPk3mQMKuWO", "VUq7VDDE7Si", "VUq7VDDE7Si", "iclr_2022_ECvgmYVyeUz", "gG8iggnQkGj", "3CzLVQqOHNe", "rdCs3XG597o", "VUq7VDDE7Si", "AdrI1LyHNo", "lF2-08pdBoQ", "rdCs3XG597o", "rdCs3XG597o", "rdCs3XG597o", "MHCGIhAfG1t", "V63nzBc...
iclr_2022_WHA8009laxu
Federated Learning from Only Unlabeled Data with Class-conditional-sharing Clients
Supervised federated learning (FL) enables multiple clients to share the trained model without sharing their labeled data. However, potential clients might even be reluctant to label their own data, which could limit the applicability of FL in practice. In this paper, we show the possibility of unsupervised FL whose mo...
Accept (Poster)
The paper demonstrates a case of federated learning with unlabelled but systematically partitioned data between clients. A title along the lines of "FL with unlabelled data" would be much better - the considered setting here is not fully unsupervised but relies on the key assumption that while not the labels, at least th...
train
[ "rYObl07lZnL", "t5Z_apYFX_F", "mNaAsKpCjsP", "TWrHH4_pmce", "eTwW6T6u-Fh", "GIXGynpGpUo", "_lrkgzN9-gQ", "iSXIGOCX09f", "DLSprjvIED0", "xX0ISkBrDtL", "xIeh8myxH6H", "8MuJVK_uux", "A_461NcBSFl", "cIJIKpJffbm", "cOKffWufPrt", "_1X7BdxgYrK", "NebPJ_FE7R-", "xGrfomawO1h", "_A5SK56rTs...
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "offici...
[ " Thanks very much for your constructive comments on our work!\nAs the rebuttal deadline is approaching, is there still any unclear point (e.g., the class prior) about the paper and the rebuttal?\nAlthough there is no more time for new experiments as a rebuttal revision, do you have suggested experiments for the ne...
[ -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 8 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 3 ]
[ "aMaMvksKXVS", "TWrHH4_pmce", "iclr_2022_WHA8009laxu", "mNaAsKpCjsP", "VBjc7ZRezZ", "iclr_2022_WHA8009laxu", "VBjc7ZRezZ", "VBjc7ZRezZ", "VBjc7ZRezZ", "_A5SK56rTss", "cOKffWufPrt", "yjxRVsneYmu", "cOKffWufPrt", "cOKffWufPrt", "aMaMvksKXVS", "_A5SK56rTss", "_A5SK56rTss", "_A5SK56rTs...
iclr_2022_5HvpvYd68b
switch-GLAT: Multilingual Parallel Machine Translation Via Code-Switch Decoder
Multilingual machine translation aims to develop a single model for multiple language directions. However, existing multilingual models based on Transformer are limited in terms of both translation performance and inference speed. In this paper, we propose switch-GLAT, a non-autoregressive multilingual machine translat...
Accept (Poster)
This paper proposes several innovations for machine translation. The reviewers had several questions about the claims that were made and the authors addressed these and also acknowledged that some of their formulations (e.g. 'better') would need to be qualified. Overall, there are several interesting ideas that have be...
train
[ "8xwEgR_Xu_n", "SpiShXI_fhn", "h6Sz4VUwHOy", "qxIbbivbYY5", "BdN_-_3l5G4", "FszVvyeBRCt", "YcjPSY0XdGd", "4lnMzuh2gkk", "NJktedtQ9eN", "tFlfuPuv3JA", "F0Y58L5Fy53", "5Zu-oCgx85b", "2gLwZ7sf9RJ", "wtO6Ukc_BA3", "S4FZuNCTAJ3", "nXx0uG_CPmF", "WdSBaxbsSNj", "duPBys1sv-", "Pl1XPUuJ_n...
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_r...
[ " Thanks for reviewer np9D's comments! Our responses to the reviewer's concerns are listed as follows:\n\n1. **For our title:** The \"better\" means a trade-off between translation quality and inference speed. We use \"better\" to show that our model can achieve comparable or even better results and simultaneously...
[ -1, -1, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "SpiShXI_fhn", "Pl1XPUuJ_nU", "BdN_-_3l5G4", "BdN_-_3l5G4", "NJktedtQ9eN", "4lnMzuh2gkk", "iclr_2022_5HvpvYd68b", "F0Y58L5Fy53", "tFlfuPuv3JA", "5Zu-oCgx85b", "nXx0uG_CPmF", "2gLwZ7sf9RJ", "OswwiStw5VM", "S4FZuNCTAJ3", "WdSBaxbsSNj", "YcjPSY0XdGd", "rq0otnAp5y9", "iclr_2022_5HvpvYd...
iclr_2022_xNO7OEIcJc6
Language-biased image classification: evaluation based on semantic representations
Humans show language-biased image recognition for a word-embedded image, known as picture-word interference. Such interference depends on hierarchical semantic categories and reflects that human language processing highly interacts with visual processing. Similar to humans, recent artificial models jointly trained on t...
Accept (Poster)
This paper presents a new benchmark task for models similar to CLIP for evaluating how visual word forms interfere with the visual recognition of objects in images when the former are superimposed on the latter ones. Specifically, by superimposing words belonging to different categories (e.g., hypernyms vs basic label...
test
[ "Su_DHVZnVXS", "iaJ183_Cg16", "XdUEZ88BPpV", "dftgudnS610", "uTWEEFuLcla", "erPsQgESkhz", "__zV612O8K", "4lO6lJBcBkR", "WcHWYRqdUUd", "Bo1riap3eTb", "_1os3Yh3ZAD", "xVbYfDeAGTJ", "XlDHiAWhSAc", "yDeig1-Y_gn", "smFhWaC2zvZ", "cyFIEPxg9A_" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Just replying to say: thank you for the updates ! I have seen them and I think the prompts you wrote do provide a zero-shot experiment that doesn't suffer from the same symmetry problems as the unprompted cases. I will take these updates into account in my reviewing capacity for this work going forward.", " We ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 8, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 4 ]
[ "uTWEEFuLcla", "WcHWYRqdUUd", "erPsQgESkhz", "4lO6lJBcBkR", "xVbYfDeAGTJ", "__zV612O8K", "yDeig1-Y_gn", "XlDHiAWhSAc", "Bo1riap3eTb", "_1os3Yh3ZAD", "smFhWaC2zvZ", "cyFIEPxg9A_", "iclr_2022_xNO7OEIcJc6", "iclr_2022_xNO7OEIcJc6", "iclr_2022_xNO7OEIcJc6", "iclr_2022_xNO7OEIcJc6" ]
iclr_2022_h0OYV0We3oh
Illiterate DALL-E Learns to Compose
Although DALL-E has shown an impressive ability of composition-based systematic generalization in image generation, it requires the dataset of text-image pairs and the compositionality is provided by the text. In contrast, object-centric representation models like the Slot Attention model learn composable representatio...
Accept (Poster)
The paper addresses the problem of generating images by combining visual components. These components are learned during pretraining, forming a dictionary of visual concepts which plays the role of text in DALL-E. The technique is based on DALL-E and the slot attention approach to generate VQ codes in a way that is consisten...
train
[ "si78BIyvH9I", "hJ46mfSNVqS", "RJWygyha5kw", "HPiW-LhXaE9", "Ak52Di4lERt", "rm8BjLdakzU", "NiE2JiZULCu", "FCX4z6Rsqki", "UwxZles8ixA", "uaxpiwVA8WT", "ts8XHm9LR7", "47nzqfd3GP8", "kmZGQvr-i_-", "xByPyjSQYSl", "3_JCOtkFa7B", "yZr5nzvi_7g", "qrpca5My5s5", "jgY78TxN75" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "public", "official_reviewer" ]
[ "This paper proposes a method (Slot2Seq) to adapt the recent DALL-E (text-to-image) model to perform image-to-image composition. The aim is to simultaneously learn latent concepts from base images that can then apply to the generation process (as opposed to input text, which contains somewhat discretized \"concepts...
[ 6, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 4, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2022_h0OYV0We3oh", "uaxpiwVA8WT", "HPiW-LhXaE9", "rm8BjLdakzU", "iclr_2022_h0OYV0We3oh", "Ak52Di4lERt", "si78BIyvH9I", "jgY78TxN75", "iclr_2022_h0OYV0We3oh", "3_JCOtkFa7B", "Ak52Di4lERt", "kmZGQvr-i_-", "ts8XHm9LR7", "jgY78TxN75", "si78BIyvH9I", "qrpca5My5s5", "iclr_2022_h0OYV0...
iclr_2022_RDlLMjLJXdq
Learning Temporally Causal Latent Processes from General Temporal Data
Our goal is to recover time-delayed latent causal variables and identify their relations from measured temporal data. Estimating causally-related latent variables from observations is particularly challenging as the latent variables are not uniquely recoverable in the most general case. In this work, we consider both a...
Accept (Poster)
This paper proposes two new sets of conditions under which we can identify temporally causal latent processes. In this sense, this work makes valuable contributions to the theories of identifiability in this topic. The authors also propose LEAP, extending the VAE, to estimate temporally causal latent processes. The re...
train
[ "3Sy1Vh1fCI", "Yt5Vw8MxcST", "EwsOEI2jRcc", "UHhKPnknRi3", "KgqmzMrEvRB", "aGPjOEFcPqv", "cSl65gKdobj", "p1C8NlS244w", "J7pjobd4Xnb", "akrhVAKosyp", "cZxOjuxmTU", "MveoCFb_g2D", "f6vZfKtM8lC", "5S2muHHpXTO", "o-Tn_O1T8iV", "htTe6LM9z1l", "axFjEvDDwi1", "FN2OeZl70xx", "vidg6uW7wsq...
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author",...
[ " Dear Reviewer x73T\n\nThank you for your prompt feedback. Would you like to consider updating your recommendation, if your concerns are properly addressed?\n\nWith best regards,\n\nAuthors of submission 481", " Dear Reviewer Qhdx,\n\nThanks for your time and comments! Hope we are not bothering you, but we are l...
[ -1, -1, -1, -1, 8, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, -1, -1, -1, 3, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2 ]
[ "p1C8NlS244w", "YGgFlfQ9OSw", "UHhKPnknRi3", "5S2muHHpXTO", "iclr_2022_RDlLMjLJXdq", "p1C8NlS244w", "akrhVAKosyp", "htTe6LM9z1l", "iclr_2022_RDlLMjLJXdq", "J7pjobd4Xnb", "iclr_2022_RDlLMjLJXdq", "YGgFlfQ9OSw", "KgqmzMrEvRB", "KgqmzMrEvRB", "KgqmzMrEvRB", "9mo3xWN7Rvg", "J7pjobd4Xnb",...
iclr_2022_nhN-fqxmNGx
A Comparison of Hamming Errors of Representative Variable Selection Methods
Lasso is a celebrated method for variable selection in linear models, but it faces challenges when the covariates are moderately or strongly correlated. This motivates alternative approaches such as using a non-convex penalty, adding a ridge regularization, or conducting a post-Lasso thresholding. In this paper, we com...
Accept (Poster)
In the end, all reviewers agreed that this is a solid piece of work. However, there were also some doubts regarding the relevance of the block diagonal design and the underlying assumptions about the p/n ratio. The majority of the reviewers, on the other hand, had the impression that the positive aspects dominate the p...
train
[ "Xj828xW9tI5", "S_HoNEA02BC", "-1oow9o4jPC", "LYpIgIaMhuI" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper investigates the theoretical performance of a number of variable selection methods when the columns of the design matrix are correlated. They focus specifically on the case where the Gram matrix for the design is block-wise diagonal (2 x 2 blocks) with the off-diagonals within each block given by a cor...
[ 6, 8, 3, 6 ]
[ 3, 4, 5, 3 ]
[ "iclr_2022_nhN-fqxmNGx", "iclr_2022_nhN-fqxmNGx", "iclr_2022_nhN-fqxmNGx", "iclr_2022_nhN-fqxmNGx" ]
iclr_2022_QuObT9BTWo
Pareto Set Learning for Neural Multi-Objective Combinatorial Optimization
Multiobjective combinatorial optimization (MOCO) problems can be found in many real-world applications. However, exactly solving these problems would be very challenging, particularly when they are NP-hard. Many handcrafted heuristic methods have been proposed to tackle different MOCO problems over the past decades. In...
Accept (Poster)
This paper develops a "preference-conditioned" approach to approximate the Pareto frontier for Multi-Objective Combinatorial Optimization (MOCO) problems with a single model (thus dealing with the thorny problem that there can be exponentially-many Pareto-optimal solutions). It appears to provide flexibility for user...
val
[ "KQSIgy9whz", "EjXX8HdhTHc", "4jC6j6OVlGV", "dmTQLpCwf41", "Dv6D8fYXz_f", "92pEvBT2_W4", "BJS64aPFBOO", "SUINlqtvjP1", "9RaKi-Uju8Y", "zfmocvQKVX5", "Quc5JzHK7hF", "xe3UHTqXElC" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your suggestion. We will move the theoretical part to the appendix and provide more discussions on the proposed algorithm in the camera-ready version (the manuscript cannot be edited in this phase).", " Thank the authors for the feedback, and here is a remark to update your camera-ready version: I...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 3, 4 ]
[ "EjXX8HdhTHc", "dmTQLpCwf41", "Quc5JzHK7hF", "xe3UHTqXElC", "zfmocvQKVX5", "9RaKi-Uju8Y", "9RaKi-Uju8Y", "iclr_2022_QuObT9BTWo", "iclr_2022_QuObT9BTWo", "iclr_2022_QuObT9BTWo", "iclr_2022_QuObT9BTWo", "iclr_2022_QuObT9BTWo" ]
iclr_2022_LDAwu17QaJz
MAML is a Noisy Contrastive Learner in Classification
Model-agnostic meta-learning (MAML) is one of the most popular and widely adopted meta-learning algorithms, achieving remarkable success in various learning problems. Yet, with the unique design of nested inner-loop and outer-loop updates, which govern the task-specific and meta-model-centric learning, respectively, th...
Accept (Poster)
This paper connects MAML to contrastive learning under some simplifying assumptions and with slight modifications in the setting. Specifically, the authors show that if the inner loop updates are only applied on the top linear layer, MAML is equivalent to supervised contrastive learning (SCL). This means that MAML lea...
train
[ "px0SFiV0g8", "oDtGCQshDpH", "bCBvJxPORwi", "gW4J88o5nHy", "Kttb-3b_c8c", "Nqrq1JZOf_X", "fsNpuu67GU", "22yF9uqU8Vz", "IijZMqA5lN", "kIa89UG2PRF", "WopGJ3EtF_s", "8MKo_8NaS4v", "LPFJIdVqrQ4", "_07420cTf6u", "dEzB-k0D3LM", "mCHhZWiGZU_", "vQbDF6uZNuJ" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer XCch,\n\nBased on your initial review comment \"*I would be happy to re-consider my recommendation if the responses are reasonable.*\", we would like to follow up to remind the reviewer about our author responses and revised version, which incorporated all the suggested changes and review comments. ...
[ -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5 ]
[ -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "mCHhZWiGZU_", "Kttb-3b_c8c", "Nqrq1JZOf_X", "iclr_2022_LDAwu17QaJz", "gW4J88o5nHy", "IijZMqA5lN", "22yF9uqU8Vz", "iclr_2022_LDAwu17QaJz", "vQbDF6uZNuJ", "vQbDF6uZNuJ", "gW4J88o5nHy", "gW4J88o5nHy", "mCHhZWiGZU_", "mCHhZWiGZU_", "mCHhZWiGZU_", "iclr_2022_LDAwu17QaJz", "iclr_2022_LDAw...
iclr_2022_SHbhHHfePhP
Equivariant Graph Mechanics Networks with Constraints
Learning to reason about relations and dynamics over multiple interacting objects is a challenging topic in machine learning. The challenges mainly stem from the fact that the interacting systems are exponentially-compositional, symmetrical, and commonly geometrically-constrained. Current methods, particularly the ones based on...
Accept (Poster)
The manuscript develops a new kind of graph neural network (a Graph Mechanics Network; GMN) that is particularly well suited to representing and making predictions about physical mechanics systems (and data with similar structure). It does so by developing a way to build geometric constraints implicitly and naturally i...
train
[ "Wf4wvrjNTfu", "1Jpa1oz1WXz", "lEP41HMA0S-", "gZBjVCLDMvN", "RHw_1BczFkD", "YkEzUtyHipJ", "dX_stVQQEKR", "3rZJYoooU9", "TrQ_qn3DzZj", "x452pBA0fxd", "wKGkxuFhZxk", "mfqKzGufKHS", "DdCTZPQUHqv", "8qmzmyxIe6", "_IRWgy3gNrh", "NsdHdM6yEmd", "K_w21RMUNLv", "R7cn4PIyhqO", "Lo7OxfEwyYA...
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", ...
[ " Dear Reviewer,\n \nWe really appreciate that you have recognized the efforts that we put into the extra experiments and increased your score. We are also thankful for your reference to DeLaN and the work by Sanchez-Gonzalez et al. (ICML 2018). If necessary, we would like to emphasize the main difference again: o...
[ -1, 8, 5, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ -1, 4, 2, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "gZBjVCLDMvN", "iclr_2022_SHbhHHfePhP", "iclr_2022_SHbhHHfePhP", "tOwIiVDclF", "3rZJYoooU9", "TrQ_qn3DzZj", "iclr_2022_SHbhHHfePhP", "mfqKzGufKHS", "mfqKzGufKHS", "1Jpa1oz1WXz", "8qmzmyxIe6", "_IRWgy3gNrh", "_IRWgy3gNrh", "qTt3rUy_T-", "LX-kFkVHz8w", "1Jpa1oz1WXz", "5kidu5TlH9U", "...
iclr_2022_oiZJwC_fyS
Neural Network Approximation based on Hausdorff distance of Tropical Zonotopes
In this work we theoretically contribute to neural network approximation by providing a novel tropical geometrical viewpoint to structured neural network compression. In particular, we show that the approximation error between two neural networks with ReLU activations and one hidden layer depends on the Hausdorff dista...
Accept (Poster)
The submission introduces an algorithm for structured pruning of fully connected ReLU layers using ideas from tropical geometry. The paper begins with a very accessible overview of key concepts from tropical geometry, and shows how ReLU networks can be thought of as tropical polynomials. It gives an efficient K-means-...
val
[ "RkAH0sEPJnv", "AqKYVFRB6B4", "5znxxXwDu05", "S5ThCCrVzPv", "AD2UCFi5YZv", "xWyge8wlmL4", "1MR6ejQQLRC", "9QKMzQ1Bk0R", "yn_XO2mJC72" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "***This is an educated guess review as the paper is outside my domain expertise.\n\nThis paper proposes a compression method using a framework based on geometrical zonotope reduction. The authors further analyze the error bounds of the proposed methods and compare its performance with modern pruning techniques. I t...
[ 5, 5, -1, -1, -1, -1, -1, 5, 6 ]
[ 1, 3, -1, -1, -1, -1, -1, 2, 2 ]
[ "iclr_2022_oiZJwC_fyS", "iclr_2022_oiZJwC_fyS", "RkAH0sEPJnv", "iclr_2022_oiZJwC_fyS", "yn_XO2mJC72", "9QKMzQ1Bk0R", "AqKYVFRB6B4", "iclr_2022_oiZJwC_fyS", "iclr_2022_oiZJwC_fyS" ]
iclr_2022_lrocYB-0ST2
Approximation and Learning with Deep Convolutional Models: a Kernel Perspective
The empirical success of deep convolutional networks on tasks involving high-dimensional data such as images or audio suggests that they can efficiently approximate certain functions that are well-suited for such tasks. In this paper, we study this through the lens of kernel methods, by considering simple hierarchical ...
Accept (Poster)
The paper addresses hierarchical kernels and provides an analysis of their RKHS along with generalization bounds and cases where improved generalization can be obtained. The reviewers appreciated the analysis and its implications. There were multiple concerns regarding presentation clarity, which the authors should add...
test
[ "5cSHmw33qKZ", "SojEU3OJG0Z", "1McelfYcg_h", "oojT294fZVn", "uP0Oj3j0ZrT", "zxZdXfhNx6m", "jmwVeiB9BPy", "QYCgzaqAJt6", "7qEiWd7RGhu", "eoa-FC96aZJ" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your helpful comments! We will address them in the updated version.\n\n* We will improve the figure labels and will clarify the meaning of \"average\" in the caption.\n* As we mention in the paragraph \"Role of pooling\", this figure was mainly supposed to convey the fact that varying the depth and ...
[ -1, -1, -1, -1, -1, -1, 8, 8, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 4, 3 ]
[ "SojEU3OJG0Z", "uP0Oj3j0ZrT", "eoa-FC96aZJ", "7qEiWd7RGhu", "QYCgzaqAJt6", "jmwVeiB9BPy", "iclr_2022_lrocYB-0ST2", "iclr_2022_lrocYB-0ST2", "iclr_2022_lrocYB-0ST2", "iclr_2022_lrocYB-0ST2" ]
iclr_2022_BrPdX1bDZkQ
DemoDICE: Offline Imitation Learning with Supplementary Imperfect Demonstrations
We consider offline imitation learning (IL), which aims to mimic the expert's behavior from its demonstration without further interaction with the environment. One of the main challenges in offline IL is to deal with the narrow support of the data distribution exhibited by the expert demonstrations that cover only a sm...
Accept (Poster)
The paper presents a method for learning sequential decision making policies from a mix of demonstrations of varying quality. The reviewers agree, and I concur, that the method is relevant to the ICLR community. It is non-trivial, the empirical evaluations and theoretical analysis are rigorous, resulting in a novel met...
train
[ "WBp1RaZ_uUn", "cwl3MeudZdk", "iudOxy7JX8A", "1c_cWJldReC", "HSuH6MXuEyf", "74TrC4timX", "_uhcp1xVD8" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper considers an offline imitation learning (IL) problem with the addition of supplementary imperfect demonstrations. To solve this problem, the paper proposes DemoDICE which regularizes a distribution-matching objective of IL by a KL divergence between the agent distribution and a mixture of expert and imperfe...
[ 8, 8, -1, -1, -1, -1, 8 ]
[ 3, 3, -1, -1, -1, -1, 4 ]
[ "iclr_2022_BrPdX1bDZkQ", "iclr_2022_BrPdX1bDZkQ", "_uhcp1xVD8", "WBp1RaZ_uUn", "cwl3MeudZdk", "iclr_2022_BrPdX1bDZkQ", "iclr_2022_BrPdX1bDZkQ" ]
iclr_2022_zz9hXVhf40
Revisiting Design Choices in Offline Model Based Reinforcement Learning
Offline reinforcement learning enables agents to leverage large pre-collected datasets of environment transitions to learn control policies, circumventing the need for potentially expensive or unsafe online data collection. Significant progress has been made recently in offline model-based reinforcement learning, appro...
Accept (Spotlight)
This paper empirically studies various design choices in offline model-based RL algorithms, with a focus on MOPO (Model-based Offline Policy Optimization). Among the key design choices is the uncertainty measure used in MOPO that provides an (approximate) lower bound on the performance, the horizon rollout length, and ...
train
[ "K95PcQSFMnl", "3zwTPAiJz3U", "L1E57BlqSOY", "GV2He4_Meel", "HiPJbCVJwt", "yfOp35kKfgD", "tZpLSX7Hs5R", "7QJWP1izpM4", "fGFGsICthdL", "ly3ScZvgmXB", "7pfLLWZrg8r", "pCMrUs15dnj", "L7U74nxwheE", "bwmfMKZxqrR", "sJXGpQJa9-H", "bOMqLDDQ4mT", "C9sAMcaroT-", "8BGS2ZDWmhF", "TLGTuRvv8X...
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "au...
[ " Thank you for your time reviewing our response and for your suggestions which improved our paper.", " Thank you for your time reviewing our additional results and for your comments which made our paper better.", "The paper provides an evaluation of many of the design choices and hyperparameter decisions made ...
[ -1, -1, 8, -1, -1, 6, -1, 6, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ -1, -1, 3, -1, -1, 4, -1, 4, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "HiPJbCVJwt", "GV2He4_Meel", "iclr_2022_zz9hXVhf40", "jEGRSfnZ5pJ", "OhPrOX7aAq", "iclr_2022_zz9hXVhf40", "UA0est2vZq", "iclr_2022_zz9hXVhf40", "ly3ScZvgmXB", "bwmfMKZxqrR", "L7U74nxwheE", "iclr_2022_zz9hXVhf40", "C9sAMcaroT-", "sJXGpQJa9-H", "bOMqLDDQ4mT", "TLGTuRvv8XN", "pCMrUs15dn...
iclr_2022_IfNu7Dr-3fQ
Generalized Kernel Thinning
The kernel thinning (KT) algorithm of Dwivedi and Mackey (2021) compresses a probability distribution more effectively than independent sampling by targeting a reproducing kernel Hilbert space (RKHS) and leveraging a less smooth square-root kernel. Here we provide four improvements. First, we show that KT applied dire...
Accept (Poster)
The focus of the paper is kernel thinning, i.e. the extraction of a core set from a sample with good integration properties meant in MMD (maximum mean discrepancy, hence worst case) sense. Particularly, the authors propose generalizations of the kernel thinning method (Dwivedi and Mackey, 2021) which relax the assumpti...
train
[ "SxDwa5kpBy", "_5c-X7sGwus", "AdaZdnrH6iF", "Zw1wi4_8W6n", "U8DnJPH4au", "4bbaPA9gCn", "ZTMlXYNfZj2", "Fbdi7UNhNG", "Dzp7094nNNd", "7_uO-jiMiSp", "YicghYSav9Z", "n0xoxqBfyM4", "PDhOWWmYGKa", "w6o2HMy11NU", "hxDm8B2KB2v", "vAn1TJAxsoo", "TlaQvHR6VIg", "uK-ZRXzgOxG" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the time you’ve spent reviewing our work and for your thoughtful feedback. We address each of your concerns below. \n", " Thank you for your helpful suggestions regarding the presentation! In the uploaded revision, we have reorganized the presentation and discussion of our main results, provided ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 2, 2 ]
[ "hxDm8B2KB2v", "SxDwa5kpBy", "Zw1wi4_8W6n", "vAn1TJAxsoo", "Dzp7094nNNd", "SxDwa5kpBy", "Dzp7094nNNd", "Dzp7094nNNd", "uK-ZRXzgOxG", "TlaQvHR6VIg", "Zw1wi4_8W6n", "Zw1wi4_8W6n", "SxDwa5kpBy", "SxDwa5kpBy", "iclr_2022_IfNu7Dr-3fQ", "iclr_2022_IfNu7Dr-3fQ", "iclr_2022_IfNu7Dr-3fQ", "...
iclr_2022_lzupY5zjaU9
Distribution Compression in Near-Linear Time
In distribution compression, one aims to accurately summarize a probability distribution $\mathbb{P}$ using a small number of representative points. Near-optimal thinning procedures achieve this goal by sampling $n$ points from a Markov chain and identifying $\sqrt{n}$ points with $\widetilde{\mathcal{O}}(1/\sqrt{n})$ ...
Accept (Poster)
This paper proposes a simple meta algorithm to speed up data thinning algorithms with good theoretical guarantees. The method is both theoretically interesting and useful for practical applications.
test
[ "_kUulIC_LP", "rBCMe5D2OYK", "2_QV8vFAZwg", "KmGXTHIlMlF", "zopUzTOsiF", "XzqLLQojAEM", "0xKX3HD-3CI", "rvPs56jZNnK", "6s0ceKv98O9", "WWtkwcETdR", "VSORCFQ-TBb", "LjeQDlb2SgH", "A54FDNCVN-c", "9sqB9GWncWl", "O4i69GJNNKC" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper gives a meta algorithm for speeding up coreset constructing algorithms for the distribution compression problem.\n\nThe benefit of this meta algorithm is that its running time is faster by a square-root factor (e.g. quadratic to linear) while keeping the error rate roughly the same: only a factor of 4 w...
[ 8, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 3, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2022_lzupY5zjaU9", "VSORCFQ-TBb", "rvPs56jZNnK", "iclr_2022_lzupY5zjaU9", "6s0ceKv98O9", "6s0ceKv98O9", "6s0ceKv98O9", "6s0ceKv98O9", "KmGXTHIlMlF", "LjeQDlb2SgH", "LjeQDlb2SgH", "O4i69GJNNKC", "9sqB9GWncWl", "_kUulIC_LP", "iclr_2022_lzupY5zjaU9" ]
iclr_2022_EAy7C1cgE1L
Increasing the Cost of Model Extraction with Calibrated Proof of Work
In model extraction attacks, adversaries can steal a machine learning model exposed via a public API by repeatedly querying it and adjusting their own model based on obtained predictions. To prevent model stealing, existing defenses focus on detecting malicious queries, truncating, or distorting outputs, thus necessari...
Accept (Spotlight)
The paper proposes a novel idea of requiring users to complete a proof-of-work before they can read the model's prediction to prevent model extraction attacks. Reviewers were excited about the paper and ideas. Some misunderstandings raised by reviewers were sufficiently clarified by the authors in the rebuttal.
test
[ "_MMTl8t2UBh", "2Cg7RUNkgi", "gCQY0Lv2QhC", "GFWOqg3BcOk", "h5jcDSQcnlT", "vimYcUVB0AI", "pnQk7oo_RwJ", "W1irx5it8Oj", "hB6q4zAEWy4", "RGk8suxUY-M", "UKP88GOPmD", "erCpHgmd-g", "aHhb3uXN7NN", "vbptd7AzbG0", "8__9EOaxc_g", "G0CafShkOb", "z6kEdxac8N", "94UFCyJfV2_", "sy3LpF9pqH", ...
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author",...
[ " We would like to thank the reviewers for their questions and comments. The paper has definitely improved as a result. We would like to check one last time if there are any pending questions that we have not adequately addressed.", " We would appreciate it if the reviewer could let us know if we can carry out an...
[ -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 3, 6 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "iclr_2022_EAy7C1cgE1L", "erCpHgmd-g", "GFWOqg3BcOk", "W1irx5it8Oj", "iclr_2022_EAy7C1cgE1L", "SrNOizC9BuU", "SDG9E6mBUam", "7MaisyJ5ng", "RGk8suxUY-M", "UKP88GOPmD", "pqRcu-v4OfY", "aHhb3uXN7NN", "vbptd7AzbG0", "8__9EOaxc_g", "G0CafShkOb", "sy3LpF9pqH", "SrNOizC9BuU", "SrNOizC9BuU...
iclr_2022_-llS6TiOew
Fairness in Representation for Multilingual NLP: Insights from Controlled Experiments on Conditional Language Modeling
We perform systematically and fairly controlled experiments with the 6-layer Transformer to investigate the hardness in conditional-language-modeling languages which have been traditionally considered morphologically rich (AR and RU) and poor (ZH). We evaluate through statistical comparisons across 30 possible languag...
Accept (Spotlight)
The authors address a very important question pertaining to the relevance of morphological complexity in the ability of transformer based conditional language models. Through extensive (controlled) experiments using 6 languages they answer as well as raise very interesting questions about the role of morphology/segment...
val
[ "WAAjUHR7WtC", "5T32eyCbK_D", "qKDhp3RnNG", "V_792DKe6Ds", "Ble5ZqPe5Vb", "Rm9r4iLC_sN", "zanlbFKfeTV", "a3287kpAH4R", "s38gVK8oHWq", "LMTYc71MEYU", "CRo7ewY7qA", "6GZddDitNH", "uRaFFejUG4r", "v3Si9VSlzNj", "zkclTag7YhE", "Dv91cnqurwx", "NZrMDdcCdOp", "Fg76Fsj09_F", "Gi1HqtHdlKa"...
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_r...
[ " Yes, we will cite a non-anonymous copy of that paper in the camera-ready. Hope we get to prepare a camera-ready for ICLR2022! Thank you! ", " Sounds good to me. Good luck with the other submission! Hopefully the submission preprint may be visible meanwhile, but I'm fine either way---good luck!", " Dear Review...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,...
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,...
[ "5T32eyCbK_D", "qKDhp3RnNG", "uRaFFejUG4r", "zanlbFKfeTV", "Rm9r4iLC_sN", "lehpP1_VxV6", "a3287kpAH4R", "iclr_2022_-llS6TiOew", "iclr_2022_-llS6TiOew", "iclr_2022_-llS6TiOew", "zkclTag7YhE", "s38gVK8oHWq", "v3Si9VSlzNj", "Dv91cnqurwx", "6k_Dg-EPGIO", "NZrMDdcCdOp", "iv0hZSQpnU_", "...
iclr_2022_tBIQEvApZK5
Feature Kernel Distillation
Trained Neural Networks (NNs) can be viewed as data-dependent kernel machines, with predictions determined by the inner product of last-layer representations across inputs, referred to as the feature kernel. We explore the relevance of the feature kernel for Knowledge Distillation (KD), using a mechanistic understandin...
Accept (Poster)
This is a borderline paper. This paper proposes feature kernel distillation (FKD), a new distillation framework, by matching the kernels obtained from the networks of the student and the teacher. Theoretical justification is provided by extending the results of Allen-Zhu and Li (2020) (ALi20 hereafter). Empirical results...
train
[ "84xw_VGNiaY", "VOuJhhHzFm4", "vZiK2Z2ZjE6", "6G2ku99y_d", "p66a5s7ubPY", "sVbMxNtUzx_", "ZTMYT_3axt", "r-uy2Q-oR11", "FdpMMaIwFcn", "WlUgBPISgyS", "iKOvZkylrc9", "yz5jqn9wzRF", "2YVRxXgXbz_" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In this paper the authors consider neural networks as data-dependent kernel machines and propose applying a distillation method directly on the pairwise kernel matrix of the models. Authors extend their setting into ensemble settings, examining both some theoretical aspects of the process, building upon the work o...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 8 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "iclr_2022_tBIQEvApZK5", "r-uy2Q-oR11", "iclr_2022_tBIQEvApZK5", "p66a5s7ubPY", "iKOvZkylrc9", "2YVRxXgXbz_", "VOuJhhHzFm4", "84xw_VGNiaY", "yz5jqn9wzRF", "6G2ku99y_d", "iclr_2022_tBIQEvApZK5", "iclr_2022_tBIQEvApZK5", "iclr_2022_tBIQEvApZK5" ]
iclr_2022_ds8yZOUsea
Hidden Parameter Recurrent State Space Models For Changing Dynamics Scenarios
Recurrent State-space models (RSSMs) are highly expressive models for learning patterns in time series data and for system identification. However, these models are often based on the assumption that the dynamics are fixed and unchanging, which is rarely the case in real-world scenarios. Many control applications often...
Accept (Poster)
The paper addresses a few very important points on sequential latent-variable models, and introduces a different view on meta-RL. Even though the methods that this paper proposes are incremental, it is such a hotly debated topic that I would prefer to see this published now.
train
[ "mKHTUCP1bvv", "KBGtBVDTJPM", "F5PT0iUiCVi", "WBkTvHmYp2H", "3wiME3wEvxH", "ZF8-0Swrtbe", "FvvYsYJDgS", "yZCfhnYWf5", "rrKun2FjWu", "8IhjZTMaW15", "Ybfd3JriBe", "FQK82cgp8kV", "P5YnhSrC_re", "oSyCBxx-0ac", "ORu6q7mc0D-", "lJGQUBkWAJX", "I48T6bIrLhy", "G0jzQfuGdrt", "H9xZMPuJJPV",...
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", ...
[ " Many thanks for your reply and updated score. Thank you for your suggestions.\n\nRegarding your observation on Deep-SSMs, we would like to point out that existing literature in Deep SSMs (eg: https://arxiv.org/abs/1710.05741, https://arxiv.org/abs/1605.06432 , https://arxiv.org/abs/1905.07357 etc) are variants f...
[ -1, 5, -1, -1, -1, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "F5PT0iUiCVi", "iclr_2022_ds8yZOUsea", "FQK82cgp8kV", "3wiME3wEvxH", "yZCfhnYWf5", "rrKun2FjWu", "Ybfd3JriBe", "P5YnhSrC_re", "oSyCBxx-0ac", "iclr_2022_ds8yZOUsea", "ORu6q7mc0D-", "iOolU44GpA", "KM425bk0A4u", "H9xZMPuJJPV", "yJPJF8qb5CT", "4aiMvm9jEHR", "lljZ43gMVcw", "KBGtBVDTJPM"...
iclr_2022_CI-xXX9dg9l
On Distributed Adaptive Optimization with Gradient Compression
We study COMP-AMS, a distributed optimization framework based on gradient averaging and adaptive AMSGrad algorithm. Gradient compression with error feedback is applied to reduce the communication cost in the gradient transmission process. Our convergence analysis of COMP-AMS shows that such compressed gradient averagin...
Accept (Poster)
The paper considers the setting of distributed optimization and proposes an adaptive gradient averaging and compression scheme to reduce the communication cost. The proposed scheme is shown to achieve the same convergence rate as the full-gradient AMSGrad algorithm, but due to the reduced cost, it exhibits linear speedup a...
train
[ "AJLuDP4gx3H", "2ibV3HhDpGR", "EHXh8uUqUX7", "uRCA2qPuG06", "MRQE5Zhxh2c", "oOBBlDcioD" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer,\n\nThanks for your valuable feedback.\n\n1. In this paper, we choose to focus on the non-convex objective mainly for two reasons:\n\n(i) Practically, compression methods are most important in scenarios when training large deep neural networks which are highly non-convex. \n\n(ii) Considering non-co...
[ -1, -1, -1, 8, 8, 5 ]
[ -1, -1, -1, 3, 2, 3 ]
[ "oOBBlDcioD", "uRCA2qPuG06", "MRQE5Zhxh2c", "iclr_2022_CI-xXX9dg9l", "iclr_2022_CI-xXX9dg9l", "iclr_2022_CI-xXX9dg9l" ]
iclr_2022_3PN4iyXBeF
Amortized Implicit Differentiation for Stochastic Bilevel Optimization
We study a class of algorithms for solving bilevel optimization problems in both stochastic and deterministic settings when the inner-level objective is strongly convex. Specifically, we consider algorithms based on inexact implicit differentiation and we exploit a warm-start strategy to amortize the estimation of the...
Accept (Poster)
The paper presents a quite rigorous analysis of approximate implicit differentiation with warm starts applied to strongly convex upper level/strongly convex lower level and nonconvex upper level/strongly convex lower level bilevel optimization algorithms in a very general yet also very practical framework. They allow f...
val
[ "EXBKGw7GzDG", "C9vxHuVnmZy", "kF9odwQEjw9", "P5nCYesAT3", "L8zHlpOdzhH", "WWoccyepEOE", "4HU1ZbrG40l", "e3XxBFDWP5L", "7AqypIzpkIr", "nH423WiwMIv", "bBVfKH3CJJz", "pNF3_dO_weF", "Z2g0X7Ga0s6", "zs-ursNpQQN", "qI4Q0QFDo1M", "ko2rjYTXWzQ", "e8suA_ms4Yj", "QvZVY4G7EMN", "nNb5JRKXpP...
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_re...
[ " Dear reviewer,\n\nThe authors replied to all your comments and have uploaded a revised PDF. It would be great if you could reply to them and update your score accordingly.\n\nBest,\nthe AC ", " My concerns are mostly addressed, although mnist is a dataset that is arguably small. Anyway, I believe this paper is ...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 6, 8, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, 3, 2, 3, 3 ]
[ "nNb5JRKXpP5", "P5nCYesAT3", "nNb5JRKXpP5", "QvZVY4G7EMN", "e3XxBFDWP5L", "4HU1ZbrG40l", "Z2g0X7Ga0s6", "pNF3_dO_weF", "iclr_2022_3PN4iyXBeF", "bBVfKH3CJJz", "nNb5JRKXpP5", "7AqypIzpkIr", "e8suA_ms4Yj", "ko2rjYTXWzQ", "iclr_2022_3PN4iyXBeF", "iclr_2022_3PN4iyXBeF", "iclr_2022_3PN4iyX...
iclr_2022_18Ys0-PzyPI
Online Ad Hoc Teamwork under Partial Observability
Autonomous agents often need to work together as a team to accomplish complex cooperative tasks. Due to privacy and other realistic constraints, agents might need to collaborate with previously unknown teammates on the fly. This problem is known as ad hoc teamwork, which remains a core research challenge. Prior works u...
Accept (Poster)
The paper presents a method for cooperative ad-hoc collaboration by learning latent representations of the teammates. The method is evaluated in three domains. All the reviewers agree that the method is novel and adds an interesting contribution to the important and difficult problem of ad-hoc collaboration, making...
train
[ "AynUDlfqzE7", "8YNQVPKxDI3", "O8WQSo3l0T", "IE2HWN0HaNs", "AF-yLVO2p2k", "VfJSVACIWFz", "DXviSmyeci", "DjZ0Y4zYUjL", "lnXP1VeOYo" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " \nWe thank all the reviewers for their time spending on our paper and insightful feedbacks. We have updated the paper according to the suggestions of the reviewers. Here is the summary of the updates:\n\n1.\tWe have added some descriptions in Section 2 to discuss two related works in the field of agent modeling [...
[ -1, -1, -1, -1, -1, 6, 6, 6, 8 ]
[ -1, -1, -1, -1, -1, 2, 4, 4, 4 ]
[ "iclr_2022_18Ys0-PzyPI", "lnXP1VeOYo", "DjZ0Y4zYUjL", "DXviSmyeci", "VfJSVACIWFz", "iclr_2022_18Ys0-PzyPI", "iclr_2022_18Ys0-PzyPI", "iclr_2022_18Ys0-PzyPI", "iclr_2022_18Ys0-PzyPI" ]
iclr_2022_MXEl7i-iru
GraphENS: Neighbor-Aware Ego Network Synthesis for Class-Imbalanced Node Classification
In many real-world node classification scenarios, nodes are highly class-imbalanced, where graph neural networks (GNNs) can be readily biased to major class instances. Although existing class imbalance approaches in other domains can alleviate this issue to some extent, they do not consider the impact of message passing ...
Accept (Poster)
Although reviews were initially a little polarized, they trend toward accepting the paper after rebuttal and discussion. The most negative review raised issues of datasets, baselines, and experiments, and various details that they find confusing. These concerns were not shared by the other reviewers for the most part....
train
[ "ofYdYVkgNmb", "nWV0scUHPpK", "5rnU9xd0rc0", "DAXbBWFQ8m", "zif2K9LyhyO", "TjbOfZuwUSx", "QOvrQElej-L", "J9KrQtDdxTM", "y3xceKJB8k", "H3tONaHLZ8f", "QdcP3lubp1g", "fRYW4_qhn1", "jd2JOQM8TfI", "KgLN3RjeeZDU", "Ej-jt-6oZKe", "VoFp3DoTO0d", "P1UZFiRNM0w", "iwbyidzcJkt", "3oPGlELXMy"...
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", ...
[ " I think the experiment with regard to the new distribution is interesting. Overall this is a good paper.", "This paper explores the neighborhood memorization problem and proposes a neighbor-aware data augmentation method for the node classification task. Representation learning for node classification usually has cla...
[ -1, 6, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8 ]
[ -1, 4, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "VoFp3DoTO0d", "iclr_2022_MXEl7i-iru", "DAXbBWFQ8m", "zif2K9LyhyO", "jd2JOQM8TfI", "J9KrQtDdxTM", "iclr_2022_MXEl7i-iru", "3oPGlELXMy", "fRYW4_qhn1", "E7-Ywb_qAKU", "E7-Ywb_qAKU", "H3tONaHLZ8f", "nWV0scUHPpK", "nWV0scUHPpK", "QOvrQElej-L", "qgNhf_f9s5F", "qgNhf_f9s5F", "QOvrQElej-L...
iclr_2022_wbPObLm6ueA
Fairness Guarantees under Demographic Shift
Recent studies have demonstrated that using machine learning for social applications can lead to injustice in the form of racist, sexist, and otherwise unfair and discriminatory outcomes. To address this challenge, recent machine learning algorithms have been designed to limit the likelihood such unfair behaviors will ...
Accept (Poster)
The paper studies the setting of group-based fairness under the so-called demographic shift, where the marginal distribution of the data remains the same conditional on the subgroup but the subgroup distribution can change. It provides a class of algorithms which give high confidence guarantees under demographic shift ...
train
[ "pUY9IgDa58", "6P1UdvjCk6G", "g0chspAaCvZ", "1StqYrlhM6", "AXoTUDwbrHY", "FcMElGJ_11", "lt2Jfnh9D0H", "kWq3btbQpkb", "HSEpS7bXmRr", "xNKqbmzBhCv", "tMxaHGNmQdz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for their response to my review. \nAdding new results and running experiments on a new dataset, i.e, Adult, with a comparison with existing methods on covariate shifts would definitely improve the paper.\nAlso, it is important that the authors add the discussion on how their approach can exten...
[ -1, -1, -1, 8, -1, -1, -1, -1, 6, 5, 6 ]
[ -1, -1, -1, 5, -1, -1, -1, -1, 4, 4, 3 ]
[ "AXoTUDwbrHY", "FcMElGJ_11", "lt2Jfnh9D0H", "iclr_2022_wbPObLm6ueA", "xNKqbmzBhCv", "tMxaHGNmQdz", "HSEpS7bXmRr", "1StqYrlhM6", "iclr_2022_wbPObLm6ueA", "iclr_2022_wbPObLm6ueA", "iclr_2022_wbPObLm6ueA" ]
iclr_2022_BjyvwnXXVn_
EViT: Expediting Vision Transformers via Token Reorganizations
Vision Transformers (ViTs) take all the image patches as tokens and construct multi-head self-attention (MHSA) among them. Complete leverage of these image tokens brings redundant computations since not all the tokens are attentive in MHSA. Examples include tokens containing semantically meaningless or distractive...
Accept (Spotlight)
The paper presents an approach to select visual tokens in images and reorganize them for object classification within Transformers. All four reviewers find the paper interesting and novel, and they are also very positive about the experimental results. The authors also addressed minor concerns of the reviewers suc...
train
[ "M22krVnu69n", "tqunGg2n9NF", "0Y12tdEkxr7", "rD02LbBigMb", "FW06kFYNZiB", "yC_12HfTmbn", "zcRTKXB2LSe", "F6eGisORnuT", "EDVwtj9_Ff", "2fM7wUx72UX", "FbUdtcU-sDT", "VOUmQ5Q3bZF", "z2ZLpRySiXS", "TB3RpG2P6O", "6C8DtR5MlD1", "oYCOdwdvNQI" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for their response. My minor concerns have been addressed, so I keep my rating.", "This paper aims to expediting vision transformers by reducing the number of tokens. The main contribution is the attentive token identification, which is based on calculating the attentiveness o...
[ -1, 8, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8 ]
[ -1, 5, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "2fM7wUx72UX", "iclr_2022_BjyvwnXXVn_", "yC_12HfTmbn", "iclr_2022_BjyvwnXXVn_", "zcRTKXB2LSe", "tqunGg2n9NF", "F6eGisORnuT", "rD02LbBigMb", "oYCOdwdvNQI", "FbUdtcU-sDT", "6C8DtR5MlD1", "TB3RpG2P6O", "iclr_2022_BjyvwnXXVn_", "iclr_2022_BjyvwnXXVn_", "iclr_2022_BjyvwnXXVn_", "iclr_2022_B...
iclr_2022_dPyRNUlttBv
Optimization and Adaptive Generalization of Three layer Neural Networks
While there has been substantial recent work studying generalization of neural networks, the ability of deep nets to automate the process of feature extraction still evades a thorough mathematical understanding. As a step toward this goal, we analyze learning and generalization of a three-layer neural network wit...
Accept (Poster)
This paper goes beyond the NTK setting in analyzing optimization and generalization in ReLU networks. It nicely generalizes NTK by showing that generalization depends on a family of kernels rather than the single NTK. The reviewers appreciated the results. One thing that is missing is a clear separation between NTK res...
train
[ "ytpVZP7d6Ta", "UuCZ0QXGSvG", "O7utCKU9Jbj", "wkC7AcuZMAu", "gW9DD1qaT8h", "frKZi8nCOBl", "4eYwt6gj7LG", "KrnBSC6-2t", "CGjIYk0uwcp", "VF77oK2aoY4", "VI31vXxNjT", "UkfvQNW0uhl" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper introduces novel results on the optimization and generalization of three-layer networks, optimized with a variant of gradient-descent. The goal of the analysis introduced in the paper is to obtain bounds that improve over the bounds obtained using the well-studied NTK framework. Namely, the aim is to go ...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8, 8 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, 3, 1, 3 ]
[ "iclr_2022_dPyRNUlttBv", "O7utCKU9Jbj", "wkC7AcuZMAu", "frKZi8nCOBl", "VI31vXxNjT", "CGjIYk0uwcp", "UkfvQNW0uhl", "VF77oK2aoY4", "ytpVZP7d6Ta", "iclr_2022_dPyRNUlttBv", "iclr_2022_dPyRNUlttBv", "iclr_2022_dPyRNUlttBv" ]
iclr_2022_z1-I6rOKv1S
Autoregressive Quantile Flows for Predictive Uncertainty Estimation
Numerous applications of machine learning involve representing probability distributions over high-dimensional data. We propose autoregressive quantile flows, a flexible class of normalizing flow models trained using a novel objective based on proper scoring rules. Our objective does not require calculating computation...
Accept (Spotlight)
The paper proposes a framework for training autoregressive flows based on proper scoring rules. The proposed framework is shown to be a computationally appealing alternative to maximum-likelihood training, and is empirically validated in a wide variety of applications. All three reviewers are positive about the paper ...
train
[ "ysWClPHWeG_", "IvtS5JX93DA", "uxsbarcW21N", "qcbJQh4-NcP", "xCaqbPX9838", "Ldgj_G7N9bf", "9ejW67OUbVm", "n1Vfll3EHYZ", "GmmN-jZ3Uzy", "zN_rR3sSI28" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "official_reviewer" ]
[ " Thanks so much for the post-rebuttal response!", " Thanks so much for the update! We'll make sure to clarify the difference with diffusion models in the main paper.", "This paper proposed a quantile regression method for uncertainty estimation based on autoregressive quantile flow. The flow model can be train...
[ -1, -1, 6, 8, -1, -1, -1, -1, -1, 8 ]
[ -1, -1, 4, 4, -1, -1, -1, -1, -1, 4 ]
[ "qcbJQh4-NcP", "uxsbarcW21N", "iclr_2022_z1-I6rOKv1S", "iclr_2022_z1-I6rOKv1S", "qcbJQh4-NcP", "iclr_2022_z1-I6rOKv1S", "uxsbarcW21N", "GmmN-jZ3Uzy", "iclr_2022_z1-I6rOKv1S", "iclr_2022_z1-I6rOKv1S" ]
iclr_2022_RCZqv9NXlZ
Offline Reinforcement Learning with Value-based Episodic Memory
Offline reinforcement learning (RL) shows promise of applying RL to real-world problems by effectively utilizing previously collected data. Most existing offline RL algorithms use regularization or constraints to suppress extrapolation error for actions outside the dataset. In this paper, we adopt a different framework...
Accept (Poster)
Most of the reviewers think this paper is clearly a valuable addition to ICLR based on the convincing theoretical analysis and extensive experimental results. Please refer to the reviewers' reviews for more detailed discussion of the pros and cons of the paper.
val
[ "1tLqwCxPRG8", "jiReeBVFOmU", "SFpODVdJ7lF", "LFTfRqoDCFf", "quP9tWmuJJn", "vpVVrqncGyj", "PVl4Y0BXHS4", "hq8fdr5AB1r", "Uvj2Wu3Ftug", "N44F5uOrkUq", "DSty1CXd1N", "_LUEKqbxn4", "jcZuKUUQ5hS", "2gAHon5kje", "5CKqQ22i8e2", "bm8LKyClBhd", "4m65RUqNsZ", "rNwumT7dObW", "TV5cfZ_69-", ...
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", ...
[ " We would like to thank the reviewer for raising the score!\nWe really appreciate the valuable comments and suggestions from the reviewer.", "This paper presents a new offline RL algorithm that leverages expectile regression for value learning and performs AWR-style policy learning with the value function learne...
[ -1, 6, -1, -1, 6, -1, 8, -1, -1, 8, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ -1, 5, -1, -1, 3, -1, 4, -1, -1, 4, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "jiReeBVFOmU", "iclr_2022_RCZqv9NXlZ", "quP9tWmuJJn", "VKnY30pnsCf", "iclr_2022_RCZqv9NXlZ", "PVl4Y0BXHS4", "iclr_2022_RCZqv9NXlZ", "PVl4Y0BXHS4", "N44F5uOrkUq", "iclr_2022_RCZqv9NXlZ", "ANpFIwPYMP3", "jcZuKUUQ5hS", "iclr_2022_RCZqv9NXlZ", "jiReeBVFOmU", "PVl4Y0BXHS4", "quP9tWmuJJn", ...
iclr_2022_zzk231Ms1Ih
A Theory of Tournament Representations
Real-world tournaments are almost always intransitive. Recent works have noted that parametric models which assume $d$ dimensional node representations can effectively model intransitive tournaments. However, nothing is known about the structure of the class of tournaments that arise out of any fixed $d$ dimensional r...
Accept (Poster)
The paper takes a creative step in the theory of tournaments, and it seems plausible that this could lead to interesting follow-ups. The reviewers made many excellent comments and I highly encourage the authors to take ALL of them into account in the revision, it will make the paper much stronger.
val
[ "C9NhLV0gKlP", "9v5OUUXUBRa", "r6YVs7y7t-j", "TvyIJI1TlM-", "PTrWepaf1rL", "b8g79PK9VJX", "1Ub_Ecw0E9", "G7so9meq63f", "np1SI9Vxn6T", "pUE59TP-ax", "mvJh08SL8ke" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper studies the relationship between dimensional representation of tournament and their structural characterization. In particular, a relationship is established between rank d tournament and their forbidden configurations in terms of flip classes, introduced by Fisher&Ryan(1995) as a way to partion the set ...
[ 6, -1, -1, 5, -1, -1, -1, -1, -1, 8, 5 ]
[ 3, -1, -1, 4, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2022_zzk231Ms1Ih", "iclr_2022_zzk231Ms1Ih", "PTrWepaf1rL", "iclr_2022_zzk231Ms1Ih", "b8g79PK9VJX", "TvyIJI1TlM-", "pUE59TP-ax", "mvJh08SL8ke", "C9NhLV0gKlP", "iclr_2022_zzk231Ms1Ih", "iclr_2022_zzk231Ms1Ih" ]
iclr_2022_vIC-xLFuM6
Overcoming The Spectral Bias of Neural Value Approximation
Value approximation using deep neural networks is at the heart of off-policy deep reinforcement learning, and is often the primary module that provides learning signals to the rest of the algorithm. While multi-layer perceptrons are universal function approximators, recent works in neural kernel regression suggest the...
Accept (Poster)
The paper points out an interesting and, to me, unexpected problem with spectral bias when learning Q-functions. Figures 1 and 2 are quite striking. The diagnosis and proposed solution elegantly combine ideas from NTKs and NeRFs. The proposed random Fourier actor-critic performs well in practice. The main proble...
train
[ "XnEH9Aa6FI1", "6sEjYU2Ut-m", "rQmPrXxxoV3", "PB_LdmNbF-M", "Z9Owirm7ReY", "IptUik0d0d5", "6iJitKrkAGO", "CJwNo06oZl", "_B1HVH43aqV", "PnBjbT6EJ0F", "r3-SXYV_nHR", "_vf347s4c-h", "efrbRfdbztW", "We3dF4ZhVGk", "qanitxHjaRS", "kLguh3V7kq0", "paM944iqh-X", "QLy6l1sI8WP", "LZK5yMqzvT...
[ "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official...
[ " ## Including Additional Kernels and Test Environments\n\n> *The baselines include a decent set of non-Fourier baselines. However, this work does not compare to architectures from Sitzmann 2020 and Tancik 2020 (though these works are cited). As a minor comment, including \"dense+deep\" and \"wide\" would be better...
[ -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "v4dGo5E66Z5", "bG3JCmxETX2", "PB_LdmNbF-M", "h7-6TRMZu7Z", "iclr_2022_vIC-xLFuM6", "Z9Owirm7ReY", "iclr_2022_vIC-xLFuM6", "r3-SXYV_nHR", "CJwNo06oZl", "_B1HVH43aqV", "_vf347s4c-h", "efrbRfdbztW", "kLguh3V7kq0", "iclr_2022_vIC-xLFuM6", "bG3JCmxETX2", "paM944iqh-X", "Z9Owirm7ReY", "...
iclr_2022_iPHLcmtietq
Phase Collapse in Neural Networks
Deep convolutional classifiers linearly separate image classes and improve accuracy as depth increases. They progressively reduce the spatial dimension whereas the number of channels grows with depth. Spatial variability is therefore transformed into variability along channels. A fundamental challenge is to understand ...
Accept (Poster)
This paper proposes that the superior performance of modern convolutional networks is partly due to a phase collapse mechanism that eliminates spatial variability while ensuring linear class separation. To support their hypothesis, the authors introduce a complex-valued convolutional network (called Learned Scattering net...
train
[ "UC9ohhfPV1b", "-2Cfka_GrSI", "Q3KNUKqlE3", "DApFYowMJ-N", "5XiYyI-7Cm-", "VfhyjYC0s_K", "QCNN10msY81", "fGtdnTTZl4M", "l1tEtXAgVMD", "ePqF9PfjgSS", "Wjq3iMiTE3Y", "G4HNUTrRuMM", "1AZpZGVqhha" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer" ]
[ "This paper studies within-class variability which reduces along the layers of deep neural networks. They mainly question the effect of sparsity and soft-thresholding introduced by ReLU. They show that these classification improvements by eliminating spatial within-class variabilities rather come from a phase colla...
[ 8, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 6, 8 ]
[ 3, -1, -1, -1, 2, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2022_iPHLcmtietq", "QCNN10msY81", "DApFYowMJ-N", "l1tEtXAgVMD", "iclr_2022_iPHLcmtietq", "5XiYyI-7Cm-", "G4HNUTrRuMM", "1AZpZGVqhha", "UC9ohhfPV1b", "Wjq3iMiTE3Y", "iclr_2022_iPHLcmtietq", "iclr_2022_iPHLcmtietq", "iclr_2022_iPHLcmtietq" ]
iclr_2022_MeeQkFYVbzW
Adversarial Unlearning of Backdoors via Implicit Hypergradient
We propose a minimax formulation for removing backdoors from a given poisoned model based on a small set of clean data. This formulation encompasses much of prior work on backdoor removal. We propose the Implicit Backdoor Adversarial Unlearning (I-BAU) algorithm to solve the minimax. Unlike previous work, which breaks ...
Accept (Poster)
This paper investigates defense against backdoor attacks for models that have already been trained. It proposes, in particular, a min-max formulation for backdoor defense, in which the inner maximization seeks a powerful trigger that leads to a high loss, while the outer minimization seeks to suppress the "adversarial loss", so ...
train
[ "HxeHl-L1SNB", "KSr5w06ULgf", "rlwaHBRXpUK", "rG4qpWUoXd", "dZ24daNetD", "-7jWUx-uYU", "dGrnZi9kiaM", "Qyx__yITOW", "D9Xr0YddUh", "_Oduir7I7o9", "4KJJ4r5C7yK", "oN259ufsiEB", "s-4ApB9XTdO", "D71a4Iebx3I", "4ZAajNdsZgq", "NAmIbUb2_nc", "jYHaZLW_jt5", "kq_-E2Sr3-f", "zk8_lA_scRU", ...
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "...
[ " We appreciate your response!\n\n****\n**Settings:**\n\nFor implementation details of [1], we followed the same settings for their defense against BadNets. Here is their original paragraph in the appendix, page 16, right bottom, [link](https://arxiv.org/pdf/2102.13624.pdf): \n\n>*'For backdoor triggers, we do not ...
[ -1, 6, -1, -1, 6, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ -1, 3, -1, -1, 3, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "D9Xr0YddUh", "iclr_2022_MeeQkFYVbzW", "KSr5w06ULgf", "aF8hPM1UIfb", "iclr_2022_MeeQkFYVbzW", "HxeHl-L1SNB", "D9Xr0YddUh", "pm8cQPS1uxn", "3I8s4y_OKh", "jYHaZLW_jt5", "s-4ApB9XTdO", "iclr_2022_MeeQkFYVbzW", "ww-N_Y95j5_", "pm8cQPS1uxn", "oN259ufsiEB", "dZ24daNetD", "kq_-E2Sr3-f", "...