Dataset schema:
  paper_id            string (length 19–21)
  paper_title         string (length 8–170)
  paper_abstract      string (length 8–5.01k)
  paper_acceptance    string (18 classes)
  meta_review         string (length 29–10k)
  label               string (3 classes)
  review_ids          list
  review_writers      list
  review_contents     list
  review_ratings      list
  review_confidences  list
  review_reply_tos    list
iclr_2022_mz7Bkl2Pz6
Global Convergence and Stability of Stochastic Gradient Descent
In machine learning, stochastic gradient descent (SGD) is widely deployed to train models using highly non-convex objectives with equally complex noise models. Unfortunately, SGD theory often makes restrictive assumptions that fail to capture the non-convexity of real problems, and almost entirely ignores the complex noise models that exist in practice. In this work, we make substantial progress on this shortcoming. First, we establish that SGD’s iterates will either globally converge to a stationary point or diverge under nearly arbitrary nonconvexity and noise models. Under a slightly more restrictive assumption on the joint behavior of the non-convexity and noise model that generalizes current assumptions in the literature, we show that the objective function cannot diverge, even if the iterates diverge. As a consequence of our results, SGD can be applied to a greater range of stochastic optimization problems with confidence about its global convergence behavior and stability.
Reject
The paper considers the global convergence and stability of SGD in the non-convex setting. The main contribution of the work seems to be to remove the uniform boundedness assumption on the noise, and to relax the global Hölder assumption typically made. The discussion in Appendix A provides an example for which the uniform boundedness assumption on the noise commonly made in the literature fails. The authors establish that SGD’s iterates will either globally converge to a stationary point or diverge, and hence their result excludes limit cycles or oscillations. Under a more restrictive assumption on the joint behavior of the non-convexity and noise model, they also show that the objective function cannot diverge, even if the iterates diverge. The reviewers are on the fence with this paper. While they agree that the paper is interesting, they only give it a score of weak accept (including subsequent to the rebuttal). One of the qualms is that while the authors claim the result helps show the success of SGD in more natural non-convex problems, they don’t provide realistic examples supporting their claim. Further, while the extension to the Hölder smoothness assumption is indeed interesting, unless practical significance is shown via examples, the result is not that exciting. From my point of view and reading, while the reviews are not extensive, I do not disagree with the reviewers’ sentiment. Technically the paper is strong, but there is a unanimous lack of strong excitement for the paper amongst reviewers. Given this lack of enthusiasm and the number of strong submissions this year, I am tending towards a reject.
train
[ "O2XdDUra8w", "lOVLgFfezJZ", "V9bMUSSvzAe", "AhBhsHgmqxg", "uu23xyf0nW", "PCLqsCPpXIr", "AS9Bu9PXdw", "lbimSBKQt5p" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " My concerns are well addressed.", " Thanks for your response.", " Thank you for the detailed feedback and the grammatical corrections.\n\n1. We have expanded the discussion of Assumption 5. See page 4.\n\n2. These techniques can be used to prove non-asymptotic results. In our opinion, such results are greedy/...
[ -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "V9bMUSSvzAe", "uu23xyf0nW", "lbimSBKQt5p", "AS9Bu9PXdw", "PCLqsCPpXIr", "iclr_2022_mz7Bkl2Pz6", "iclr_2022_mz7Bkl2Pz6", "iclr_2022_mz7Bkl2Pz6" ]
iclr_2022_-dzXGe2FyW6
Equalized Robustness: Towards Sustainable Fairness Under Distributional Shifts
Increasing concerns have been raised on deep learning fairness in recent years. Existing fairness metrics and algorithms mainly focus on the discrimination of model performance across different groups on in-distribution data. It remains unclear whether the fairness achieved on in-distribution data can be generalized to data with unseen distribution shifts, which are commonly encountered in real-world applications. In this paper, we first propose a new fairness goal, termed Equalized Robustness (ER), to impose fair model robustness against unseen distribution shifts across majority and minority groups. ER measures robustness disparity by the maximum mean discrepancy (MMD) distance between the loss curvature distributions of two groups of data. We show that previous fairness learning algorithms designed for in-distribution fairness fail to meet the new robust fairness goal. We further propose a novel fairness learning algorithm, termed Curvature Matching (CUMA), to simultaneously achieve both traditional in-distribution fairness and our new robust fairness. CUMA efficiently debiases the model robustness by minimizing the MMD distance between loss curvature distributions of two groups. Experiments on three popular datasets show CUMA achieves superior fairness in robustness against distribution shifts, without further sacrificing either overall accuracy or in-distribution fairness.
Reject
The paper considers learning a fair classifier under distribution shift. The proposal involves an additional MMD penalty between the model curvatures on the data subgroups defined by the sensitive attribute. Reviewers generally found the problem setting to be well motivated, and the paper to have interesting ideas. Some concerns were raised in the initial set of reviews: (1) _Relation between local curvatures and fairness robustness_. The concern was that the paper does not make sufficiently clear how similarity of the distributions of local curvatures ensures fairness robustness, and that there is no explicit definition of fairness robust to distribution shift. (2) _Comparison to related work_. The concern was that works such as FARF are also considering the issue of distribution shift. (3) _Technical novelty_. The concern was that the technical depth of the proposal may be limited, as it builds on existing ideas (e.g., adversarial learning, Hessian to measure curvature). (4) _Significance of results_. The concern was that the improvements of the proposed method are not significant, statistically and/or practically. For point (1), the response clarified that the proposal is to ensure that the local curvature (and hence robustness) across data subgroups is similar. The relevant reviewer was still unclear as to whether this ensures what one might intuitively consider "robust fairness". On my review of the paper, I do concur that from the Introduction, and the para preceding Eqn 4, it appears that one natural notion is $ \sup_{\mathbb{Q} \in \mathcal{U}( \mathbb{P} )} \Delta( \mathbb{Q}( \hat{Y}, Y \mid A = 0 ), \mathbb{Q}( \hat{Y}, Y \mid A = 1 ) ) $ where $\mathbb{P}$ is the observed data distribution, $\mathcal{U}$ is some uncertainty set, and $\Delta$ is some fairness measure (e.g., DP). Assuming this is indeed the ideal, it would be useful to mathematically contrast it with the proposal adopted in the present paper.
The para preceding Eqn 4 correctly notes that the above notion would require specifying $\mathcal{U}$. This may be challenging, but an apparently reasonable strategy that follows the distributionally-robust optimization literature would be to use a specific ball around the training distribution (e.g., all distributions with bounded KL divergence against $\mathbb{P}$). Further, it is of interest to ask whether the proposed objective in any way approximate this one; put another way, is there any implicit assumption made as to which class of distributions one is likely to encounter? Further discussion would also be useful on the following alternative to the objective presented in the paper: rather than match the curvatures for the subgroups, simply minimise their unweighted average. This ought also to ensure robustness under the two different distributions; page 2 hints that this might not work owing to the different scales of these terms (i.e., the minority subgroup being much less robust), but the point does not seem to be discussed very explicitly subsequently. For point (2), the response noted that FARF is designed for online learning, whereas the present paper involves a single, static training set drawn iid from a single distribution. In the present paper, the drift happens at test time, and the learner has no access to samples from this distribution. The authors argued that FARF can be applied as-is to this setting. From my reading of this and the FARF paper, I agree that while the latter should be cited, it is not clearly applicable to the present setting. This said, the present paper primarily focusses on the covariate shift setting, for which there have been some relevant recent works; see: Singh et al., "Fairness Violations and Mitigation under Covariate Shift", FAccT '21. Rezaei et al., "Robust Fairness under Covariate Shift", AAAI '21. 
The former uses tools from joint causal graphs, while the latter assumes access to an unlabelled sample for the target distribution. The present work is certainly different in technical details, but at a minimum it seems prudent to acknowledge that there are relevant works on ensuring fairness outside the observed training distribution, and thus tone down statements such as "As a pioneer work...". There also seems scope to compare against the latter, e.g., to see how valuable having a few samples from the target domain is. Another work relevant to the spirit of ensuring fairness beyond the observed data is Mandal et al., "Ensuring Fairness Beyond the Training Data", NeurIPS 2020. This is in line with the distributionally-robust objective suggested in point (1), where one considers test distributions that can be arbitrary re-weightings of the training distribution. For point (3), from my reading, the technical content is reasonable. I would however have liked more mathematical discussion on point (1) above, which is important as it is the foundation of the strategy followed. For point (4), the response asserts their improvements are significant practically and statistically. From my reading, I am inclined to agree with this claim. I would however note that another reviewer raised the question of whether Gaussian and uniform noise are reflective of real-world distribution shifts. I concur with this concern; this part of the paper seems a little disappointing. The response mentioned results on a new setting with more realistic shift, which we suggest be incorporated into future versions of the paper. Overall, the paper has some interesting ideas for a topical and important problem. At the same time, there is scope for tightening the work per the comments above, particularly on points (1) and (2), and to some extent (4). We believe that addressing these would help properly situate the work, and thus increase its clarity and potential impact.
We thus encourage the authors to consider incorporating these for a future submission.
train
[ "0cpPoBrR4tg", "8l-rqJzt11", "S5TZgbfLy5Z", "nSgLSJNWb39", "k8l__fvLOLo", "vTMPwXSZveT", "M8xzulK4I44", "JuLRY3KhLoo", "_o32fXB5OeF", "4wqaon54AOR", "3RuJeU3nigg", "PLFQ2si6BfU", "c2ccVhkTAnT", "CTcpoHV5L_g", "uVzknJ7b5B", "on8aBGWWBtO", "P2qBKmI2mK_", "1QbooNdByc_", "_nP9InE1KQN...
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewers and AC panel,\n\nThank you again for reviewing our paper and providing helpful comments! We are glad the merits of our paper are enthusiastically acknowledged by Reviewers ebvu and xUAG. \n\nThe authors are an experienced research team with many experiences at ICLR and similar venues. We fully unde...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 3, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 4, 3 ]
[ "iclr_2022_-dzXGe2FyW6", "JuLRY3KhLoo", "3RuJeU3nigg", "JuLRY3KhLoo", "JuLRY3KhLoo", "3RuJeU3nigg", "JuLRY3KhLoo", "CTcpoHV5L_g", "1QbooNdByc_", "1QbooNdByc_", "4wqaon54AOR", "_nP9InE1KQN", "P2qBKmI2mK_", "P2qBKmI2mK_", "on8aBGWWBtO", "iclr_2022_-dzXGe2FyW6", "iclr_2022_-dzXGe2FyW6",...
iclr_2022_7ktHTjV9FHw
Relative Molecule Self-Attention Transformer
Self-supervised learning holds promise to revolutionize molecule property prediction - a central task to drug discovery and many more industries - by enabling data efficient learning from scarce experimental data. Despite significant progress, non-pretrained methods can still be competitive in certain settings. We reason that architecture might be a key bottleneck. In particular, enriching the backbone architecture with domain-specific inductive biases has been key for the success of self-supervised learning in other domains. In this spirit, we methodologically explore the design space of the self-attention mechanism tailored to molecular data. We identify a novel variant of self-attention adapted to processing molecules, inspired by the relative self-attention layer, which involves fusing embedded graph and distance relationships between atoms. Our main contribution is Relative Molecule Attention Transformer (R-MAT): a novel Transformer-based model based on the developed self-attention layer that achieves state-of-the-art or very competitive results across a wide range of molecule property prediction tasks.
Reject
This paper presents an extension to the MAT model by using relative attention that considers graph-level distance, geometric distance, and bond type between nodes during the attention computation. This is shown to lead to improved performance on several benchmark datasets against MAT and GROVER. The inclusion of additional information into the attention computation is a sensible and natural choice for Transformers, given the success of relative positional embeddings in the NLP and vision domains. But it is a somewhat straightforward extension of existing ideas from other domains to MAT, hence the novelty is somewhat limited. It is also worth noting that the proposed method should be considered in the context of the larger body of research on 3D GNNs. The authors drew inspiration from DimeNet's design in the encoding of geometric distances but do not consider it in the empirical comparisons. Instead, the comparisons focus exclusively on Transformer-based models. This limits the scope of conclusions that we can draw from these experiments and makes it difficult to gauge the practical impact of R-MAT in comparison to the many other GNN methods that use 3D geometries of the molecule.
train
[ "ChY9M3U1rhX", "c1jc8Wc4QQQ", "8iARwzOxhB", "vAmtvtmCOSv", "--BOXnhxzN9", "Vme8EQOQZlU", "E-LWYkAIr2B", "0PPyCBtUg7", "B5YB1ESBvgi", "vAM7fNWvuYI", "50F9Edao8nt" ]
[ "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for reading the rebuttal and supporting the paper.", " Thank you very much for taking the time to engage with our rebuttal and raising the score!", " Thank the authors' response, which solved my concerns. After all the review and author responses, I decided to maintain my score.", " We ...
[ -1, -1, -1, -1, -1, 6, -1, -1, -1, 6, 6 ]
[ -1, -1, -1, -1, -1, 4, -1, -1, -1, 2, 3 ]
[ "8iARwzOxhB", "Vme8EQOQZlU", "E-LWYkAIr2B", "iclr_2022_7ktHTjV9FHw", "Vme8EQOQZlU", "iclr_2022_7ktHTjV9FHw", "50F9Edao8nt", "vAM7fNWvuYI", "Vme8EQOQZlU", "iclr_2022_7ktHTjV9FHw", "iclr_2022_7ktHTjV9FHw" ]
iclr_2022_Q42O1Qaho5N
$G^3$: Representation Learning and Generation for Geometric Graphs
A geometric graph is a graph equipped with geometric information (i.e., node coordinates). A notable example is molecular graphs, where the combinatorial bonding is supplemented with atomic coordinates that determine the three-dimensional structure. This work proposes a generative model for geometric graphs, capitalizing on the complementary information of structure and geometry to learn the underlying distribution. The proposed model, Geometric Graph Generator (G$^3$), orchestrates graph neural networks and point cloud models in a nontrivial manner under an autoencoding framework. Additionally, we augment this framework with a normalizing flow so that one can effectively sample from the otherwise intractable latent space. G$^3$ can be used in computer-aided drug discovery, where seeking novel and optimal molecular structures is critical. As a representation learning approach, the interaction of the graph structure and the geometric point cloud also significantly improves the performance of downstream tasks, such as molecular property prediction. We conduct a comprehensive set of experiments to demonstrate that G$^3$ more accurately learns the distribution of given molecules and helps identify novel molecules with better properties of interest.
Reject
This paper presents a generative model for geometric graphs. The main contribution is to separate the representation and generation of geometry from that of graph structure and features. Based on this idea, the authors assembled a set of existing ideas and built an auto-encoder style generative model for geometric graphs. This paper sits on the borderline, with reviewers split on both sides. I appreciate the clarifications from the authors during the rebuttal and the interactions with the reviewers. The main concern is the novelty of this approach, as the main contribution is the idea of separating geometry from graph structure, and most other components of the pipeline already exist in the literature. Because of this, I think the paper should probably devote a bit more space to the ablation study. In particular, the paper currently lacks detail about whether the size of the models was controlled when doing the ablation, which could be a confounding factor that explains why the joint model with both geometry and graph structure works better. The different architecture choices may also factor into the difference; it would be more convincing if, for example, the same combination of multi-head attention blocks and GINE networks were used for the ablated graph encoder (one can simply concatenate the features from both on all layers, or even at the end). Based on this, I recommend rejection at this time but encourage the authors to improve the paper and submit it to the next venue.
train
[ "peMNNj_Xiti", "YDzeIZh14jh", "7MDFLtNOiP", "aBKiBd96W3", "fg4Zh1IREal", "MUIBtTa6Ww", "CB5VT9uaLpi", "lzjkz-ndiT5", "FFPkfi6Nrf5", "7kEhlC68VCV", "PEjxos7lE1w", "1Nd8jBWvPPH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the updates and clarifications, especially on the link predictor (Appendix G) and interpretability (Appendix H). \nUnlike reviewer qSkw, I think the novelty is enough while the techniques and the experiments are grounded.\nI've raised my original recommendation for an apparent acceptance.", "This pap...
[ -1, 8, -1, -1, -1, -1, -1, -1, -1, 8, 3, 5 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, 3, 2 ]
[ "aBKiBd96W3", "iclr_2022_Q42O1Qaho5N", "fg4Zh1IREal", "YDzeIZh14jh", "7kEhlC68VCV", "PEjxos7lE1w", "PEjxos7lE1w", "1Nd8jBWvPPH", "YDzeIZh14jh", "iclr_2022_Q42O1Qaho5N", "iclr_2022_Q42O1Qaho5N", "iclr_2022_Q42O1Qaho5N" ]
iclr_2022_-HSOjDPfhBJ
PER-ETD: A Polynomially Efficient Emphatic Temporal Difference Learning Method
Emphatic temporal difference (ETD) learning (Sutton et al., 2016) is a successful method to conduct off-policy value function evaluation with function approximation. Although ETD has been shown to converge asymptotically to a desirable value function, it is well known that ETD often encounters a large variance, so that its sample complexity can increase exponentially fast with the number of iterations. In this work, we propose a new ETD method, called PER-ETD (i.e., PEriodically Restarted-ETD), which restarts and updates the follow-on trace only for a finite period for each iteration of the evaluation parameter. Further, PER-ETD features a design with a logarithmic increase of the restart period with the number of iterations, which guarantees the best trade-off between the variance and bias and keeps both vanishing sublinearly. We show that PER-ETD converges to the same desirable fixed point as ETD, but improves the exponential sample complexity of ETD to polynomial. Our experiments validate the superior performance of PER-ETD and its advantage over ETD.
Accept (Poster)
This paper investigates TD-based off-policy policy evaluation. This topic is of interest as most SOTA DRL methods are built upon unsound algorithms, whereas more sound variants are difficult to use in practice and have not been widely adopted. This paper introduces a new variant of ETD that addresses the variance issue with the existing algorithm, along with theory characterizing sample efficiency. The paper includes a well-done illustrative empirical study to support the theory. The reviewers all scored the paper highly. The AC pointed out several minor issues in the presentation that the authors should address for the camera-ready. In addition, the grammar and word usage are rough in some places. Please take time to improve the text.
train
[ "9lMj2Y8r-Y", "P6oUIDjfVZ0", "88BW8WHeezS", "6KeNJplY2B", "e6GXvsNgHXK", "EEzca8pGJwI", "6zdcXTCODRy", "ZRe0GShf8B4", "9Zo48r7KgFX", "fzYDOF2Ptom", "EXdMccyETqo", "9iOZonnKfYr" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thanks for the response. Explanations and comments make sense to me.", " We thank the reviewer very much for the prompt and positive response! ", " Thanks for the response. The explanations make sense to me. I have increased my score accordingly.", "The variance of ETD can grow up exponentially so this pape...
[ -1, -1, -1, 8, 8, -1, -1, -1, -1, -1, 8, 8 ]
[ -1, -1, -1, 3, 3, -1, -1, -1, -1, -1, 3, 3 ]
[ "EEzca8pGJwI", "88BW8WHeezS", "ZRe0GShf8B4", "iclr_2022_-HSOjDPfhBJ", "iclr_2022_-HSOjDPfhBJ", "9iOZonnKfYr", "9iOZonnKfYr", "6KeNJplY2B", "e6GXvsNgHXK", "EXdMccyETqo", "iclr_2022_-HSOjDPfhBJ", "iclr_2022_-HSOjDPfhBJ" ]
iclr_2022_O9DAoNnYVlM
Federated Learning via Plurality Vote
Federated learning allows collaborative workers to solve a machine learning problem while preserving data privacy. Recent studies have tackled various challenges in federated learning, but the joint optimization of communication overhead, learning reliability, and deployment efficiency is still an open problem. To this end, we propose a new scheme named federated learning via plurality vote (FedVote). In each communication round of FedVote, workers transmit binary or ternary weights to the server with low communication overhead. The model parameters are aggregated via weighted voting to enhance the resilience against Byzantine attacks. When deployed for inference, the model with binary or ternary weights is resource-friendly to edge devices. We show that our proposed method can reduce quantization error and converges faster compared with the methods directly quantizing the model updates.
Reject
Reviewers raised several valid concerns about novelty of quantization idea and lack of discussions related to prior art (AISTATS 2020 paper). The rebuttal did not convince the reviewers to raise their score. We hope the authors will benefit from the feedback and improve the paper for future submission.
train
[ "AoqKLtSp_2b", "eMSAe4uFnHL", "nmsNtT8TRjZ", "5w2XA5dbt9Z", "5Sq2C1rL9h6", "aC7y0CxtTXD", "ljhx-EE0yr0", "PRg-tbIdbfv", "jkipdW02Xyh", "UGimMOMYMmt", "4F5gfNyYq9k", "nLbLycWa_Yf" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I want to first thank the author(s) for their responses and effort on revising the paper. Also, I have read through the other reviews and responses. However, the responses (and updates) do not fully address the issues I brought up, and thus, the paper is still not at the level of acceptance for ICLR. I maintain ...
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, 3, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 3 ]
[ "ljhx-EE0yr0", "aC7y0CxtTXD", "nLbLycWa_Yf", "nmsNtT8TRjZ", "4F5gfNyYq9k", "UGimMOMYMmt", "PRg-tbIdbfv", "jkipdW02Xyh", "iclr_2022_O9DAoNnYVlM", "iclr_2022_O9DAoNnYVlM", "iclr_2022_O9DAoNnYVlM", "iclr_2022_O9DAoNnYVlM" ]
iclr_2022_i3abvoMoeCZ
Exploring Covariate and Concept Shift for Detection and Confidence Calibration of Out-of-Distribution Data
Moving beyond testing on in-distribution data, works on Out-of-Distribution (OOD) detection have recently increased in popularity. A recent attempt to categorize OOD data introduces the concept of near and far OOD detection. Specifically, prior works define characteristics of OOD data in terms of detection difficulty. We propose to characterize the spectrum of OOD data using two types of distribution shifts: covariate shift and concept shift, where covariate shift corresponds to change in style, e.g., noise, and concept shift indicates change in semantics. This characterization reveals that sensitivity to each type of shift is important to the detection and model calibration of OOD data. Consequently, we investigate score functions that capture sensitivity to each type of dataset shift and methods that improve them. To this end, we theoretically derive two score functions for OOD detection, the covariate shift score and concept shift score, based on the decomposition of KL-divergence for both scores, and propose a geometrically-inspired method (Geometric ODIN) to improve OOD detection under both shifts with only in-distribution data. Additionally, the proposed method naturally leads to an expressive post-hoc calibration function which yields state-of-the-art calibration performance on both in-distribution and out-of-distribution data. We are the first to propose a method that works well across both OOD detection and calibration, and under different types of shifts. Specifically, we improve the previous state-of-the-art OOD detection by a relative 7% AUROC on CIFAR100 vs. SVHN and achieve the best calibration performance of 0.084 Expected Calibration Error on the corrupted CIFAR100C dataset.
Reject
This paper proposes a method for detecting two types of distributional shifts: covariate shifts in the input space (due to input corruption) and semantic shifts (due to test data falling outside the support set of ID classes). The idea is based on the decomposition of KL-divergence between the softmax prediction and a uniform vector. Furthermore, the authors propose Geometric ODIN to improve OOD detection and calibration, outperforming strong baselines on the CIFAR10, CIFAR100, and SVHN datasets. The paper aims to solve a very important problem in ML and the approach is thought-provoking. However, the reviewers raised several questions and points of confusion, such as the applicability of the model, the justification of the use of the feature norm, the discussion on sensitivity vs. robustness, the framing of the novelty, a clear definition of OOD detection, the definition of parameters, etc. (please see the reviews for a comprehensive list). I invite the authors to incorporate these points in the next version of the paper, which will significantly improve it.
train
[ "NLgOKiRVbeO", "OfTs3n78S_7", "O6mYy8Nq3sD", "nHN5l_QmUok", "Ag4ZTBR0Nwe", "SBC9hH_-npc" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper starts from the KL divergence between a uniform distribution and the predicted distribution, based on which two score functions are derived for covariate shift and concept shift, respectively. The covariate shift score measures the feature norm while the concept shift score is essentially the difference ...
[ 5, -1, -1, 6, 3, 5 ]
[ 4, -1, -1, 2, 4, 5 ]
[ "iclr_2022_i3abvoMoeCZ", "NLgOKiRVbeO", "Ag4ZTBR0Nwe", "iclr_2022_i3abvoMoeCZ", "iclr_2022_i3abvoMoeCZ", "iclr_2022_i3abvoMoeCZ" ]
iclr_2022_gULyf2IVll0
Empirical Study of the Decision Region and Robustness in Deep Neural Networks
In general, Deep Neural Networks (DNNs) are evaluated by the generalization performance measured on unseen data excluded from the training phase. Along with the development of DNNs, generalization performance has converged to the state of the art, and it has become difficult to evaluate DNNs solely based on generalization performance. Robustness against adversarial attacks has been used as an additional metric to evaluate DNNs by measuring their vulnerability. However, little research has been performed to analyze adversarial robustness in terms of the geometry in DNNs. In this work, we perform an empirical study to analyze the internal properties of DNNs that affect model robustness under adversarial attacks. In particular, we propose the novel concept of the Populated Region Set (PRS), where train samples are populated more frequently, to represent the internal properties of DNNs in a practical setting. From systematic experiments with the proposed concept, we provide empirical evidence to validate that a low PRS ratio has a strong relationship with the adversarial robustness of DNNs.
Reject
This work proposes a concept called the Populated Region Set (PRS) as a measure of robustness of deep neural networks (DNNs). The paper provides a suite of empirical results to demonstrate the strong correlation between the PRS ratio and the adversarial robustness of DNNs. The authors made great efforts to address the reviewers' concerns, which is greatly appreciated. However, the theory of the work is a bit thin, and it leaves a number of outstanding issues unaddressed. For example, the practical advantage of calculating PRS over directly measuring robust accuracy is not clear. What new and better computational procedure can be constructed based on PRS? We encourage the authors to keep improving their work for future submission.
train
[ "ebyho0Am7Xh", "edZRBOz9l5y", "TGRQopjsWZd", "tnIggkz61Lz", "QX0pBDsmUZL", "PF3bF4MWzJq", "PKIC_8mlga2", "w7w_npizOo", "1Lp06fR93MJ", "j5EPFFpCzuO", "fodeAvsc8Zn", "4UTg8T-vM27", "eCW2PfbOjmW", "OTwfHUHBmkw", "PZJc8KBguj8", "rM-cToadGFd", "lxJiwC-41X6", "QmCPDLZuAaI", "2L0f2PpUuf...
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "...
[ "The paper proposes a new metric, the size of the populated region set (PRS), as an explanation for models with similar clean accuracies reaching very different accuracies under adversarial attacks. PRS is the set of decision regions that have training examples in them. After introducing and defining populated regi...
[ 6, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2 ]
[ "iclr_2022_gULyf2IVll0", "tnIggkz61Lz", "iclr_2022_gULyf2IVll0", "PKIC_8mlga2", "PKIC_8mlga2", "TGRQopjsWZd", "ebyho0Am7Xh", "lxJiwC-41X6", "rM-cToadGFd", "eCW2PfbOjmW", "PZJc8KBguj8", "OTwfHUHBmkw", "ebyho0Am7Xh", "ebyho0Am7Xh", "ebyho0Am7Xh", "ebyho0Am7Xh", "ebyho0Am7Xh", "0KTI0s...
iclr_2022_Zk3TwMJNj7
Directional Bias Helps Stochastic Gradient Descent to Generalize in Nonparametric Model
This paper studies the Stochastic Gradient Descent (SGD) algorithm in kernel regression. The main finding is that SGD with moderate and annealing step size converges in the direction of the eigenvector that corresponds to the largest eigenvalue of the gram matrix. On the contrary, the Gradient Descent (GD) with a moderate or small step size converges along the direction that corresponds to the smallest eigenvalue. For a general squared risk minimization problem, we show that directional bias towards a larger eigenvalue of the Hessian (which is the gram matrix in our case) results in an estimator that is closer to the ground truth. Adopting this result to kernel regression, the directional bias helps the SGD estimator generalize better. This result gives one way to explain how noise helps in generalization when learning with a nontrivial step size, which may be useful for promoting further understanding of stochastic algorithms in deep learning. The correctness of our theory is supported by simulations and neural network experiments on the FashionMNIST dataset.
Reject
This is an interesting paper, aiming to separate the generalization properties of SGD and GD. Unfortunately, the reviewers had many significant concerns, primarily on the topic of the relationship to prior work by Wu et al. (which has a similar setting and similar proof techniques), but also regarding presentation and interpretation of results in general. As such, I recommend the authors continue with this line of valuable work, aiming in particular to further separate it from existing results.
test
[ "VPETcuXesHH", "1n38ucx8YWX", "7uCr0on7Wq", "F0rkI2b0XLr", "JT6mGD_NFEr", "cdyRq_wHFAc", "oXR3aqAGj59" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper studies the directional bias of SGD in kernel regression. In particular, this paper shows that when using moderate or small step size, GD converges along the direction corresponding to the smallest eigenvalue of the covariance matrix. In contrast, when provided with a moderate initial learning rate with...
[ 3, -1, -1, -1, -1, 5, 5 ]
[ 4, -1, -1, -1, -1, 4, 5 ]
[ "iclr_2022_Zk3TwMJNj7", "7uCr0on7Wq", "JT6mGD_NFEr", "VPETcuXesHH", "cdyRq_wHFAc", "iclr_2022_Zk3TwMJNj7", "iclr_2022_Zk3TwMJNj7" ]
iclr_2022_VSu5WrtLK3q
A Geometric Perspective on Variational Autoencoders
In this paper, we propose a geometrical interpretation of the Variational Autoencoder framework. We show that VAEs naturally unveil a Riemannian structure of the learned latent space. Moreover, we show that using these geometrical considerations can significantly improve the generation from the vanilla VAE which can now compete with more advanced VAE models on four benchmark data sets. In particular, we propose a new way to generate samples consisting of sampling from the uniform distribution deriving intrinsically from the Riemannian manifold learned by a VAE. We also stress the proposed method's robustness in the low data regime which is known to be very challenging for deep generative models. Finally, we validate the method on a complex neuroimaging data set combining both high dimensional data and low sample sizes.
Reject
The paper proposes to use the covariance of the approximate posterior to induce a metric on the latent space of the VAE and use it to sample from the Riemannian manifold learned by a VAE. Experiments on MNIST and CelebA show the method outperforms the vanilla VAE in terms of sample quality (FID and PR scores). It is also shown to work better than baseline VAE models on a medical imaging classification task. While the reviewers have acknowledged the contributions of the paper, the novelty in the contributions and their importance/impact was seen to be rather limited. The main concern from the reviewers is -- while the paper is mainly based on the use of the inverse covariance as the metric for the manifold, it doesn't give a reasonable theoretical justification for why it is a sensible metric that captures the intrinsic geometry of data. The authors in their response justify it as -- since the covariance matrices are learned from the data and favor, through the posterior sampling, some direction in the latent space, it is a natural choice as metric. This is not very convincing. A more technical justification for this will certainly make the paper more convincing. I suggest the authors look at "Kumar, Abhishek, and Ben Poole. "On Implicit Regularization in β-VAEs." International Conference on Machine Learning. PMLR, 2020" which theoretically connects inverse covariance and the Riemannian metric in Sec 5.2, and see if it can be adapted in their context.
train
[ "XwvNmB13sE5", "6RZXx-ZE3YJ", "aDy8hv7thx2", "Np_D_OsGwvZ", "erkAcgTpMMW", "z_yimjH9dv", "2IayOvK_0rJ", "4mwQszi0pXM", "2QYMdAxvEaf", "--2DXG3-CEc", "I2glAczsmRc", "t-AvPTxishH", "dF735AierpG", "uMgVyNfQ6Y", "jtMOcOgdrl", "kdYesHMSIk", "Fq6WBPTaDpl" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer,\n\nThank you again for your review and comments following our rebuttal. As the end of the discussion period is approaching, we were wondering if you had time to look at our last addition concerning the link between the vanilla VAE and the Riemannian one in Appendix I? \n\nDo not hesitate if you hav...
[ -1, -1, -1, -1, 6, -1, 6, -1, -1, -1, 6, -1, -1, -1, -1, 8, 6 ]
[ -1, -1, -1, -1, 4, -1, 3, -1, -1, -1, 3, -1, -1, -1, -1, 3, 3 ]
[ "4mwQszi0pXM", "iclr_2022_VSu5WrtLK3q", "jtMOcOgdrl", "uMgVyNfQ6Y", "iclr_2022_VSu5WrtLK3q", "2QYMdAxvEaf", "iclr_2022_VSu5WrtLK3q", "--2DXG3-CEc", "erkAcgTpMMW", "dF735AierpG", "iclr_2022_VSu5WrtLK3q", "2IayOvK_0rJ", "I2glAczsmRc", "Fq6WBPTaDpl", "kdYesHMSIk", "iclr_2022_VSu5WrtLK3q",...
iclr_2022_cdwobSbmsjA
RAVE: A variational autoencoder for fast and high-quality neural audio synthesis
Deep generative models applied to audio have improved by a large margin the state-of-the-art in many speech and music related tasks. However, as raw waveform modelling remains an inherently difficult task, audio generative models are either computationally intensive, rely on low sampling rates, are complicated to control or restrict the nature of possible signals. Among those models, Variational AutoEncoders (VAE) give control over the generation by exposing latent variables, although they usually suffer from low synthesis quality. In this paper, we introduce a Realtime Audio Variational autoEncoder (RAVE) allowing both fast and high-quality audio waveform synthesis. We introduce a novel two-stage training procedure, namely representation learning and adversarial fine-tuning. We show that using a post-training analysis of the latent space allows a direct control between the reconstruction fidelity and the representation compactness. By leveraging a multi-band decomposition of the raw waveform, we show that our model is the first able to generate 48kHz audio signals, while simultaneously running 20 times faster than real-time on a standard laptop CPU. We evaluate synthesis quality using both quantitative and qualitative subjective experiments and show the superiority of our approach compared to existing models. Finally, we present applications of our model for timbre transfer and signal compression. All of our source code and audio examples are publicly available.
Reject
This paper presents an approach to high-quality waveform synthesis using multi-band decomposition. The resulting synthesis speed is substantially faster than past work on both CPU and GPU -- a feature that all reviewers viewed as a significant strength. However, the majority of reviewers raised concerns about discussion of and contextualization within past work, as well as the novelty of the proposed approach. Finally, one reviewer pointed out a potential concern with experimental evaluation (sample rate of proposed system outputs vs baseline's). Author response clarified the relationship with some past work but did not provide evidence to mitigate the concerns about experimental evaluation. Overall, this paper could still benefit from another round of review.
train
[ "eISWxHslsMr", "6u0mR5pyjbJ", "EbkIb3TEEMC", "gUi5CPQ2SAj", "_F7q7yGZ9N5", "bQQEm5FDDMl", "kNX0upFQBt7" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to thank all the reviewers for their time and expertise. In addition to the individual responses, we would like to clarify here the changes made to our article. \n\n- New audio sample in the accompanying website\n- Discuss key differences with previous approaches in the \"Related work\" section\n- A...
[ -1, -1, -1, -1, 8, 3, 5 ]
[ -1, -1, -1, -1, 4, 4, 3 ]
[ "iclr_2022_cdwobSbmsjA", "kNX0upFQBt7", "bQQEm5FDDMl", "_F7q7yGZ9N5", "iclr_2022_cdwobSbmsjA", "iclr_2022_cdwobSbmsjA", "iclr_2022_cdwobSbmsjA" ]
iclr_2022_ofLwshMBL_H
Continual Learning Using Task Conditional Neural Networks
Conventional deep learning models have limited capacity in learning multiple tasks sequentially. The issue of forgetting the previously learned tasks in continual learning is known as catastrophic forgetting or interference. When the input data or the goal of learning changes, a continual model will learn and adapt to the new status. However, the model will not remember or recognise any revisits to the previous states. This causes performance reduction and re-training curves in dealing with periodic or irregularly reoccurring changes in the data or goals. Dynamic approaches, which assign new neuron resources to the upcoming tasks, are introduced to address this issue. However, most of the dynamic methods need task information about the upcoming tasks during the inference phase to activate the corresponding neurons. To address this issue, we introduce Task Conditional Neural Network which allows the model to identify the task information automatically. The proposed model can continually learn and embed new tasks into the model without losing the information about previously learned tasks. We evaluate the proposed model combined with the mixture of experts approach on the MNIST and CIFAR100 datasets and show how it significantly improves the continual learning process without requiring task information in advance.
Reject
This paper proposes an expansion strategy for both task-agnostic and task-boundary-aware CL. The authors demonstrate the quality of their method using two standard scenarios with the Split-MNIST and CIFAR datasets. Enabling CL in task-agnostic and task-boundary-aware settings is important and an active area of research. The proposed approach is an interesting method that adds an expert for each new task. Experts are then combined (Mixture of Experts) for prediction. One disadvantage of a MoE approach is that the model size and compute will grow linearly with the number of tasks. This effect is partly limited in the paper as the authors show that experts can be small neural networks. There was a bit of confusion in the original reviews regarding the exact setting this paper works under. As far as I understand this paper mostly deals with the class-incremental setting (task IDs available at training time, but not at test time). The task-agnostic setting (task IDs never given) is also explored in Section 5.1. I think this confusion is partly a reflection of the state of the CL literature and the authors provided clear and concise replies to the reviewers. The main limitation that remains is regarding the experiments. I agree with the reviewers that the current experiments seem somewhat preliminary and that showing results on larger-scale datasets and/or comparing to a wider diversity of baselines is important. Reviewer sgG4 made precise comments about this. Other minor comments by the reviewers included providing a detailed report of the memory usage and computational costs of the various methods (partly done in Figure 5.3). I think this method is interesting and could be impactful. I strongly encourage the authors to polish their manuscript and consider adding some of the additional empirical results that were suggested.
train
[ "PeDPZBF6t12", "6jQ9VZPmevo", "TKoRHyOtUnQ", "4P_mO9yW-oV", "pTDveUHwy80", "QUvRDbNGif2" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper provides a new method to infer the task identity without directly accessing the old data distributions. The proposed method can learn task-specific experts with task-specific kernels to decide which expert should be chosen and activated under different tasks given to the model. The proposed method achie...
[ 5, -1, -1, 3, 3, 3 ]
[ 4, -1, -1, 4, 4, 4 ]
[ "iclr_2022_ofLwshMBL_H", "pTDveUHwy80", "QUvRDbNGif2", "iclr_2022_ofLwshMBL_H", "iclr_2022_ofLwshMBL_H", "iclr_2022_ofLwshMBL_H" ]
iclr_2022_ZumkmSpY9G4
Bypassing Logits Bias in Online Class-Incremental Learning with a Generative Framework
Continual learning requires the model to maintain the learned knowledge while learning from a non-i.i.d data stream continually. Due to the single-pass training setting, online continual learning is very challenging, but it is closer to the real-world scenarios where quick adaptation to new data is appealing. In this paper, we focus on online class-incremental learning setting in which new classes emerge over time. Almost all existing methods are replay-based with a softmax classifier. However, the inherent logits bias problem in the softmax classifier is a main cause of catastrophic forgetting while existing solutions are not applicable for online settings. To bypass this problem, we abandon the softmax classifier and propose a novel generative framework based on the feature space. In our framework, a generative classifier which utilizes replay memory is used for inference, and the training objective is a pair-based metric learning loss which is proven theoretically to optimize the feature space in a generative way. In order to improve the ability to learn new data, we further propose a hybrid of generative and discriminative loss to train the model. Extensive experiments on several benchmarks, including newly introduced task-free datasets, show that our method beats a series of state-of-the-art replay-based methods with discriminative classifiers, and reduces catastrophic forgetting consistently with a remarkable margin.
Reject
This paper presents an approach for online continual learning where only a single pass over each task's data is allowed. Instead of the oft-used softmax classification setting in continual learning, the paper proposes to use the generative setting based on the nearest class mean (NCM). The paper claims that it avoids the logits bias problem in the softmax classifier and helps combat catastrophic forgetting. While the reviewers found the basic idea interesting, there were concerns about novelty and lack of clarity regarding the reasons for improved performance. In particular, there are several aspects from existing work that are leveraged in this paper (e.g, replay, metric learning loss, combination of generative and discriminative classification, etc) but the paper lacks in establishing which of these components affect the performance and in what ways. The authors and reviewers engaged in detailed discussions; however, the reviewers were still unsatisfied and did not change their assessment. Based on my own reading of the paper as well as going through the reviews and discussions, I too concur with their assessment. It would be a stronger paper if the paper could shed more light on the above aspects as well as address the other concerns raised by the reviewers. However, in the current shape, it is not ready for publication.
val
[ "zvbvkHSw7x8", "KUo7AIXyUHC", "gSX96jqbzcb", "_UqHASlnJ9t", "fuYmNDxF_ht", "wCOpUbFM_SR", "5h0UrzO8QyK", "qVzT4lgx0ys", "UF_FB2ByUl", "uuiBiazV0ZD", "MFa_jueGmd" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors attempt to improve online continual learning (CL) performance by eliminating the logit bias of the classifier used. They use a nearest-class-mean (NCM) classifier and a multi-similarity metric learning loss coupled with an auxiliary loss to achieve a good plasticity-stability balance. The paper is some...
[ 5, -1, -1, -1, -1, -1, -1, -1, 5, 5, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2022_ZumkmSpY9G4", "qVzT4lgx0ys", "5h0UrzO8QyK", "fuYmNDxF_ht", "uuiBiazV0ZD", "MFa_jueGmd", "UF_FB2ByUl", "zvbvkHSw7x8", "iclr_2022_ZumkmSpY9G4", "iclr_2022_ZumkmSpY9G4", "iclr_2022_ZumkmSpY9G4" ]
iclr_2022_BlyXYc4wF2-
Multi-Agent Constrained Policy Optimisation
Developing reinforcement learning algorithms that satisfy safety constraints is becoming increasingly important in real-world applications. In multi-agent reinforcement learning (MARL) settings, policy optimisation with safety awareness is particularly challenging because each individual agent has to not only meet its own safety constraints, but also consider those of others so that their joint behaviour can be guaranteed safe. Despite its importance, the problem of safe multi-agent learning has not been rigorously studied; very few solutions have been proposed, nor a sharable testing environment or benchmarks. To fill these gaps, in this work, we formulate the safe MARL problem as a constrained Markov game and solve it with policy optimisation methods. Our solutions---Multi-Agent Constrained Policy Optimisation (MACPO) and MAPPO-Lagrangian---leverage the theories from both constrained policy optimisation and multi-agent trust region learning. Crucially, our methods enjoy theoretical guarantees of both monotonic improvement in reward and satisfaction of safety constraints at every iteration. To examine the effectiveness of our methods, we develop the benchmark suite of Safe Multi-Agent MuJoCo that involves a variety of MARL baselines. Experimental results justify that MACPO/MAPPO-Lagrangian can consistently satisfy safety constraints, meanwhile achieving comparable performance to strong baselines.
Reject
The paper addresses safe multi-agent reinforcement learning and makes two key contributions. The first is a safety-concerned multi-agent benchmark, which is an extension of MAMuJoCo. The second is the formulation of, and two solutions to, the safe MARL problem. The authors pose safe MARL, a MARL problem with safety constraints, as a constrained Markov game. Safety-constrained MARL is an important, difficult, and understudied problem. The problem is more difficult than single-agent safe RL because of the non-stationarity in the MARL setting, which renders any theoretical guarantees conditioned on assumptions about the behaviors of other agents. The authors are right to point out the lack of benchmarks in the space. That said, reflecting on the reviewers' feedback and my own reading of the paper, this paper is attempting to do too much (benchmark, problem formulation, and two methods), in too little space, and is falling short. For example, the benchmark is an important contribution, but it is barely mentioned in the main text of the paper. If this was fully a safety benchmark paper, there is an opportunity to go beyond MAMuJoCo, which feels like a forced multi-agent problem, and construct a safety benchmark with energy constraints, cooperative and competitive tasks, etc. If this was fully a methods paper, there would be an opportunity for the more in-depth analysis of the results that the reviewers pointed out. In its current form, the paper feels like it proposes a benchmark not grounded in a real-world problem, and then a method to solve the problem. I would suggest the authors either: - submit the paper to a journal where a space constraint would not be in the way, or - split it into two papers: a more comprehensive benchmark paper, and a methods paper evaluated on more difficult problems. Minor: - Please update the literature. Some of the papers have been published, and they are cited as Arxiv papers.
val
[ "pEeySaqly05", "VOjynf0yZjx", "vYvhqjwll99", "XU153Hft_UA", "hyQfLOzG2I0", "ZVY6mBuY09E" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " ## We thank the reviewer Aw6n for their efforts in reviewing our paper, which will help us improve its quality.\n\n1. > **Reviewer**: There are related works that should be mentioned. For example, CRPO Xu et al. (2021), PDSC Chen et al. (2021), Triple-Q Wei et al. (2021), and CSPDA Bai et. al. (2021).\n\n* **Resp...
[ -1, -1, -1, 5, 5, 5 ]
[ -1, -1, -1, 3, 3, 3 ]
[ "ZVY6mBuY09E", "hyQfLOzG2I0", "XU153Hft_UA", "iclr_2022_BlyXYc4wF2-", "iclr_2022_BlyXYc4wF2-", "iclr_2022_BlyXYc4wF2-" ]
iclr_2022_wk5-XVtitD
Language Model Pre-training Improves Generalization in Policy Learning
Language model (LM) pre-training has proven useful for a wide variety of language processing tasks, including tasks that require nontrivial planning and reasoning capabilities. Can these capabilities be leveraged for more general machine learning problems? We investigate the effectiveness of LM pretraining to scaffold learning and generalization in autonomous decision-making. We use a pre-trained GPT-2 LM to initialize an interactive policy, which we fine-tune via imitation learning to perform interactive tasks in a simulated household environment featuring partial observability, large action spaces, and long time horizons. To leverage pre-training, we first encode observations, goals, and history information as templated English strings, and train the policy to predict the next action. We find that this form of pre-training enables generalization in policy learning: for test tasks involving novel goals or environment states, initializing policies with language models improves task completion rates by nearly 20%. Additional experiments explore the role of language-based encodings in these results; we find that it is possible to train a simple adapter layer that maps from observations and action histories to LM embeddings, and thus that language modeling provides an effective initializer even for tasks with no language as input or output. Together, these results suggest that language modeling induces representations that are useful for modeling not just language, but natural goals and plans; these representations can aid learning and generalization even outside of language processing.
Reject
The paper studies the use of pretrained language models (LM) for training the policy in embodied environments. Specifically, a pretrained GPT-2 LM is used to initialize the policy. Environment observations, goals, and actions are encoded appropriately (e.g., converted into text strings) to apply the LM-based policy. The experiments study the generalization effect of initializing with pretrained LMs. Reviewers have found the paper made limited contributions. In particular, prior works on text adventure games have explored the use of pretrained LMs for playing games and studied the generalization effect, such as [1]. It's also suggested that the paper should revise the claims made in the experiments, given the limited experimental scope and results. [1] Keep CALM and Explore: Language Models for Action Generation in Text-based Games. Shunyu Yao, Rohan Rao, Matthew Hausknecht, Karthik Narasimhan. EMNLP 2020.
train
[ "ELZweulkI1V", "ReGUNkVijb2", "K_u5CFiVB6d", "9271IOxbmTu" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "**After rebuttal**: I am keeping my score. But I will not fight against rejecting the paper. I think the results are promising but the scope of the experiments are limited and the claims need to be more precise. As pointed out in my discussion with the authors, there are also several important missing details that...
[ 6, 3, 3, 3 ]
[ 5, 4, 4, 5 ]
[ "iclr_2022_wk5-XVtitD", "iclr_2022_wk5-XVtitD", "iclr_2022_wk5-XVtitD", "iclr_2022_wk5-XVtitD" ]
iclr_2022_8Dhw-NmmwT3
Lifting Imbalanced Regression with Self-Supervised Learning
A new influential task called imbalanced regression, most recently inspired by imbalanced classification and originating straightforwardly from both the imbalance and regression worlds, has received a great deal of attention. Yet we are still at a fairly preliminary stage in the exploration of this task, so more attempts are needed. In this paper, we work on a seamless marriage of imbalanced regression and self-supervised learning. But with this comes the first question of how to measure similarity and dissimilarity in the regression sense, for which the definition is clear in classification. To overcome this limitation, a formal definition of similarity in the regression task is given. On top of this, through experimenting on a simple neural network, we found that self-supervised learning could help alleviate the problem. However, the second problem is that, when scaling to a deep network by adding random noise to the input, it is not guaranteed that the noisy samples are similar to the original samples; we therefore specifically propose to limit the volume of noise on the output, and in doing so to find meaningful noise on the input by backpropagation. Experimental results show that our approach achieves state-of-the-art performance.
Reject
This paper proposes to use self-supervised learning in the context of "imbalanced regression", where some values of the outcome variables are rare, such as in long-tailed regression. The authors' proposal can be interpreted as a Monte Carlo approximation of a density smoothing technique, akin to Yang et al. 2021. They test their approach on three datasets. Overall, it provides marginal improvements, whose statistical significance is not assessed. All reviewers agreed that the paper has merits but that it should be further improved to demonstrate that the proposed method is indeed a step forward in solving the problem of imbalanced regression. The authors should also provide stronger motivation for their pipeline details and experimental setup choices. I therefore recommend rejection, with encouragement for improvement in two directions: strengthening the experimental section, in particular by assessing statistical significance, and improving the writing of the paper by developing a more rigorous exposition.
train
[ "NFcgsWGzxG", "5PkaB6f_bak", "zC8f0Ffh-Z", "AVuMaz0hhV-", "dE6Odlca6vT", "jimu3JB3ZsE", "kOseVHkiEr", "vlNAKUOL6v5", "P0mo5q2ycq3", "TCMc5zZAJw", "SBQX-Pq2Ex9", "2VZXLD_M4jg", "emdu3-F36Td", "m9wIef1avZB", "DQ74N9LJbY" ]
[ "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewers:\n\nThanks a lot for your efforts in reviewing this paper. We tried our best to address all mentioned concerns/problems, and have uploaded a new version of our paper incorporating all suggestions raised by four reviewers to improve our paper. Are there any unclear explanations here? We are happy to...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 5 ]
[ "iclr_2022_8Dhw-NmmwT3", "iclr_2022_8Dhw-NmmwT3", "dE6Odlca6vT", "iclr_2022_8Dhw-NmmwT3", "jimu3JB3ZsE", "P0mo5q2ycq3", "emdu3-F36Td", "2VZXLD_M4jg", "DQ74N9LJbY", "m9wIef1avZB", "iclr_2022_8Dhw-NmmwT3", "iclr_2022_8Dhw-NmmwT3", "iclr_2022_8Dhw-NmmwT3", "iclr_2022_8Dhw-NmmwT3", "iclr_202...
iclr_2022_0kNbTghw7q
Improving Generative Adversarial Networks via Adversarial Learning in Latent Space
Generative Adversarial Networks (GANs) have been widely studied as generative models, which map a latent distribution to the target distribution. Although many efforts have been made in terms of backbone architecture design, loss function, and training techniques, few results have been obtained on how the sampling in latent space can affect the final performance, and existing works on latent space mainly focus on controllability. We observe that, as the neural generator is a continuous function, two close samples in latent space would be mapped into two nearby images, while their quality can differ much as the quality is not a continuous function in pixel space. From the above continuous mapping function perspective, on the other hand, two distant latent samples are also possible to be mapped into two close images. If the latent samples are mapped in aggregation into limited modes or even a single mode, mode collapse occurs. Accordingly, we propose adding an implicit latent transform before the mapping function to improve latent $z$ from its initial distribution, e.g., Gaussian. This is achieved by using the iterative fast gradient sign method (I-FGSM). We further propose new GAN training strategies to obtain better generation mappings w.r.t quality and diversity by introducing targeted latent transforms into the bi-level optimization of GAN. Experimental results on visual data show that our method can effectively achieve improvement in both quality and diversity.
Reject
To improve generative adversarial networks, the paper proposes to add an implicit transformation of the Gaussian latent variables before the top-down generator. To further obtain better generations with respect to quality and diversity, this paper introduces targeted latent transforms into a bi-level optimization of GAN. Experiments are conducted to verify the effectiveness of the proposed method. The paper is well motivated and well written, but the experimental section still needs to be strengthened: since the goal of the paper is to improve GAN training, a comprehensive and thorough evaluation of the proposed method is necessary. After the first round of review, in addition to the clarification issue and missing reference issue, two reviewers point out that the method is only tested on small-scale datasets, and suggest the authors evaluate the performance of the proposed method on more complex datasets. Two reviewers point out that the experimental validation and comparison to prior approaches are insufficient. During the rebuttal, the authors provide extra experiment results to partially address some issues. However, most of the major concerns from other reviewers, such as (i) how the method performs on large-scale datasets with complex latent-space manifolds, and (ii) the unconvincing performance gain and unclear problem setup, still remain. After an internal discussion, AC agrees with all reviewers that the current paper is not ready for publication, thus recommending rejecting the paper. AC urges the authors to improve their paper by taking into account all the suggestions provided by the reviewers, and then resubmit it to the next venue.
train
[ "AV5fBtG_sgE", "QvBoXTomNy8", "4ibyKA84696", "4QbG9BsJ3IZ", "tvcKlkRNXU", "Z3ij0a4ZOZe", "k3lC2P03q7E", "r0cbkxYJx8o", "oWqAHrILaM", "H0EGSIrQXce", "Bhx2ZIau96L", "zyMvqYuvxk", "P0F-CcNyZ5i", "IgLTHRp5Mu9", "BMWWJu2Wf4v", "HfauPrS1NRO", "9C7KkgACzV6", "8qiZ_P3WHPj" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ " Dear reviewers, since the discussion phase is approaching the end, would you please provide some feedback?", " Dear reviewers, thanks for your comments and suggestions which have inspired us a lot to improve the paper. We are sincerely looking forward to your reply and we could provide more information if neede...
[ -1, -1, -1, -1, 6, 6, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ -1, -1, -1, -1, 3, 4, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2022_0kNbTghw7q", "iclr_2022_0kNbTghw7q", "8qiZ_P3WHPj", "oWqAHrILaM", "iclr_2022_0kNbTghw7q", "iclr_2022_0kNbTghw7q", "r0cbkxYJx8o", "P0F-CcNyZ5i", "iclr_2022_0kNbTghw7q", "Bhx2ZIau96L", "zyMvqYuvxk", "Z3ij0a4ZOZe", "tvcKlkRNXU", "BMWWJu2Wf4v", "oWqAHrILaM", "8qiZ_P3WHPj", "ic...
iclr_2022_T1A11E__Az
Few-Shot Classification with Task-Adaptive Semantic Feature Learning
Few-shot classification aims to learn a classifier that categorizes objects of unseen classes with limited samples. One general approach is to mine as much information as possible from the limited samples. This can be achieved by incorporating data aspects from multiple modalities. However, existing multi-modality methods only use the additional modality in support samples while adhering to a single modality in query samples. Such an approach could lead to an information imbalance between support and query samples, which confounds model generalization from support to query samples. Towards this problem, we propose a task-adaptive semantic feature learning mechanism that incorporates semantic features for both support and query samples. The semantic feature learner is trained episode-wise by regressing from the feature vectors of support samples. The query samples can then obtain their semantic features from this module. Such a method maintains a consistent training scheme between support and query samples and enables direct model transfer from support to query datasets, which significantly improves model generalization. We develop two modality combination implementations, feature concatenation and feature fusion, based on the semantic feature learner. Extensive experiments conducted on four benchmarks demonstrate that our method outperforms state-of-the-art methods, proving its effectiveness.
Reject
This paper proposes a few-shot learning method that learns task-adaptive semantic features that can be incorporated for both the support and query sets. Two approaches for modality combination are developed. The additional experiments in the author response addressed some concerns of the reviewers. However, the technical novelty of the proposed method is not high enough, since it relies on existing techniques.
train
[ "iGUbP1ireSl", "FcXbUfvWZPM", "Y5pY__9m3v8", "byKNZYHhxJy", "MasVhkznME9", "piMxSmB798Q", "5ZcsWYZSng", "9rqm4PkIROu" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a task-adaptive semantic feature learning mechanism to incorporate semantic features for both support and query samples. Two modality combination implementations, feature concatenation and feature fusion, are devised based on the semantic feature learner. Experimental results are provided on fo...
[ 5, 5, -1, -1, -1, -1, 6, 6 ]
[ 3, 3, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2022_T1A11E__Az", "iclr_2022_T1A11E__Az", "iGUbP1ireSl", "FcXbUfvWZPM", "9rqm4PkIROu", "5ZcsWYZSng", "iclr_2022_T1A11E__Az", "iclr_2022_T1A11E__Az" ]
iclr_2022_MWQCPYSJRN
Generative Negative Replay for Continual Learning
Learning continually is a key aspect of intelligence and a necessary ability to solve many real-world problems. One of the most effective strategies to control catastrophic forgetting, the Achilles' heel of continual learning, is storing part of the old data and replaying them interleaved with new experiences (also known as the replay approach). Generative replay, that is, using generative models to provide replay patterns on demand, is particularly intriguing; however, it was shown to be effective mainly under simplified assumptions, such as simple scenarios and low-dimensional benchmarks. In this paper, we show that, while the generated data are usually not able to improve the classification accuracy for the old classes, they can be effective as negative examples (or antagonists) for learning the new classes, especially when the learning experiences are small and contain examples of just one or a few classes. The proposed approach is validated on complex class-incremental and data-incremental continual learning scenarios (CORe50 and ImageNet-1000) composed of high-dimensional data and a large number of training experiences: a setup where existing generative replay approaches usually fail.
Reject
This paper suggests a new technique to utilize generative replay for continual learning. Specifically, the authors claim that even though the generated samples are imperfect (thus cannot be used as positive samples for old classes), they can still be used as negative samples for the current class. 3 reviewers are negative and 1 reviewer is positive. The main concerns of negative reviewers are (a) non-ablated effects of baseline and proposed components, (b) insufficient analysis of negative replay, and (c) no assessment of generated data quality. The rebuttal provides an additional experiment to address the issue (a), but the reviewers and AC think the experiments should be better polished. Also, AC believes the issues (b) and (c) should be better analyzed. The rebuttal claims that issue (c) is not applicable as they generate samples on the latent space. However, the main motivation of the paper is the low quality of generated samples, and the paper should provide a quality measure to support their claim. For example, an update of the feature extractor may move the latent space generative replay to the wrong class (i.e., low quality), and thus one should not use it as positive but only as negative, as suggested in this paper. Here, the negative replay would increase the margin of current and old classes, enhancing the accuracy of the current class. To analyze the source of benefits (old vs. current classes), the authors could report the task-wise accuracy trends, not only the overall accuracy. It would be a nice addition to the issue (b). Due to these unresolved concerns, AC tends to recommend rejection.
test
[ "3G_qEuV6t77", "8q_D6BmzIk7", "kR9pbDbscp7", "I-AuXHxn2F9", "0lhlbsP_ZXm", "OMwqFtvHX0D", "0hF1M2Oinzl", "UU97u5_JdRl", "To-HrwiOigm", "RxtJAr5xhTs", "11FSS04Yjj", "txFe-Og7z8t", "q8z8wOyXRUH" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the detailed answer to my review.\nThe authors commented on most of my comments, they added an experience in the appendix to compare negative replay without negative replay on CORe50 NC - 8 tasks and made some modification in the text to be more clear.\nI still believe the methodology of this paper ne...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 5 ]
[ "kR9pbDbscp7", "0hF1M2Oinzl", "I-AuXHxn2F9", "q8z8wOyXRUH", "txFe-Og7z8t", "0hF1M2Oinzl", "11FSS04Yjj", "To-HrwiOigm", "RxtJAr5xhTs", "iclr_2022_MWQCPYSJRN", "iclr_2022_MWQCPYSJRN", "iclr_2022_MWQCPYSJRN", "iclr_2022_MWQCPYSJRN" ]
iclr_2022_mL07kYPn3E
Few-shot Learning with Big Prototypes
Using dense vectors, i.e., prototypes, to represent abstract class-level information has become a common approach in low-data machine learning scenarios. Typically, prototypes are the mean output embeddings over the instances of each class. In this case, prototypes have the same dimension as example embeddings, and such tensors can be regarded as ``points'' in the feature space from a geometrical perspective. But these points may lack the expressivity to capture the whole class-level information due to biased sampling. In this paper, we propose to use tensor fields (``areas'') to model prototypes and enhance the expressivity of class-level information. Specifically, we present \textit{big prototypes}, where prototypes are represented by hyperspheres with dynamic sizes. A big prototype can be effectively modeled by two sets of learnable parameters: one is the center of the hypersphere, an embedding with the same dimension as the training examples; the other is the radius of the sphere, a constant. Compared with atactic manifolds with complex boundaries, representing a hypersphere with these parameters is immensely easier. Moreover, it is convenient to perform metric-based classification with big prototypes in few-shot learning, where we only need to calculate the distance from a data point to the surface of the hypersphere. Extensive experiments on few-shot learning tasks across NLP and CV demonstrate the effectiveness of big prototypes.
Reject
Thanks for your submission to ICLR. This paper presents an extension to prototypical networks based on using hyperspheres to represent the prototypes. Strong empirical results are presented using this approach. Overall, this is a very borderline paper and could go either way. The idea itself it simple, though the results seem to be fairly strong. I read through the paper myself and tend to think that it could use a bit more work before it's ready. Some of the issues raised by the reviewers---particularly with respect to experiments and literature review---are worth nailing down. Further, I think that the method could be explored in a more principled/theoretical way. For instance, when reading this idea, the first thing that pops into my mind is that representing the prototype with a hypersphere is very similar to representing a distribution (e.g., a Gaussian) using a mean and covariance (in this case, a spherical covariance). Indeed, if you take the KL divergence between two spherical Gaussians, you get something very similar to the expression used in the paper. This is all to say that there may be other more general directions to take this idea, or other interpretations of what is going on. Please do keep in mind the comments of the reviewers when preparing a future version of the manuscript.
val
[ "MqDwvj8USYh", "QKA5irO2u8E", "XFFF-edrY_d", "AXdLi1Aj5Kt", "PSChBK2rnfB", "UtW-D0wZj-D", "kqq4bjygUA0", "-rV69TaRYWR", "ooYF8lJFK7W", "S77RERnR924", "xP5jPIR-5Zf", "CunDietpUGT", "JY20jJxMjtq", "Y59u18CizFg", "JVtx-bfLGNS", "TJaDLOfVjB", "3KUlUPQHQwr", "9PQ-DqsNT_" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response. \nThe idea of extending distance metric to angle difference calculation is reasonable and feasible. I look forward to seeing more evaluation results in the final version. So far, my concerns have all been addressed, and I recommend accepting the paper.", " Thanks, we will add more a...
[ -1, -1, 5, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, -1, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3 ]
[ "JY20jJxMjtq", "XFFF-edrY_d", "iclr_2022_mL07kYPn3E", "iclr_2022_mL07kYPn3E", "UtW-D0wZj-D", "ooYF8lJFK7W", "-rV69TaRYWR", "ooYF8lJFK7W", "CunDietpUGT", "AXdLi1Aj5Kt", "XFFF-edrY_d", "AXdLi1Aj5Kt", "9PQ-DqsNT_", "XFFF-edrY_d", "3KUlUPQHQwr", "iclr_2022_mL07kYPn3E", "iclr_2022_mL07kYP...
iclr_2022_DvcMMKmDJ3q
Generating Symbolic Reasoning Problems with Transformer GANs
Constructing training data for symbolic reasoning domains is challenging: Existing instances are typically hand-crafted and too few to be trained on directly and synthetically generated instances are often hard to evaluate in terms of their meaningfulness. We study the capabilities of GANs and Wasserstein GANs equipped with Transformer encoders to generate sensible and challenging training data for symbolic reasoning domains. We conduct experiments on two problem domains where Transformers have been successfully applied recently: symbolic mathematics and temporal specifications in verification. Even without autoregression, our GAN models produce syntactically correct instances and we show that these can be used as meaningful substitutes for real training data when training a classifier. Using a GAN setting also allows us to alter the target distribution: We show that by adding a classifier uncertainty part to the generator objective, we obtain a dataset that is even harder to solve for a classifier than our original dataset.
Reject
The paper aims to improve complex reasoning. In this regard, the authors identify that acquiring data for symbolic reasoning domains is a challenge and propose generating the data with GANs. A Transformer-based architecture is proposed and trained for LTL and symbolic mathematics. Experiments show the generated samples are of good quality (e.g., correct syntax). We thank the reviewers and authors for engaging in an active discussion. However, the reviewers did not find the generation of such data on its own to be particularly interesting. Also, neither the architecture nor the training algorithm is very novel. If the authors could provide a complete story, i.e., show that the augmented data can improve the performance of neural models that compute solutions, it would make the paper much stronger. Thus, unfortunately, I cannot recommend acceptance of the paper in its current form.
train
[ "gwDZ_LGzvv", "F90-vecqQc", "CNSiLVttmwE", "aCAhloMGeg2", "CazzbYvMkFl", "9PMP4ub8NNm", "lKH5FzVwziY", "azLCdHGTD-", "bIWAYrKiL61", "3tuLK8vcPLw", "Gl6v1T7o8zF", "uO1isOkz7E", "kjHgAUzM2TJ", "4w7bOtLGN_3", "8DTST7IrjMA", "MU_ss1cUs8K", "DRwTWMiDmu4", "9EL3oHJxTAa" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " I appreciate authors for making clarifications and providing additional details. However, I would keep the score as it is. I would like to mention that multiple other reviewers seem to share the same concerns as me, based on their final official comments, e.g., Reviewer JuGn (\"I still don't find generating synta...
[ -1, 3, -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, 5, 8 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "bIWAYrKiL61", "iclr_2022_DvcMMKmDJ3q", "Gl6v1T7o8zF", "9PMP4ub8NNm", "lKH5FzVwziY", "uO1isOkz7E", "kjHgAUzM2TJ", "iclr_2022_DvcMMKmDJ3q", "3tuLK8vcPLw", "iclr_2022_DvcMMKmDJ3q", "9EL3oHJxTAa", "F90-vecqQc", "DRwTWMiDmu4", "DRwTWMiDmu4", "DRwTWMiDmu4", "3tuLK8vcPLw", "iclr_2022_DvcMM...
iclr_2022_DkeCkhLIVGZ
Understanding Metric Learning on Unit Hypersphere and Generating Better Examples for Adversarial Training
Recent works have shown that adversarial examples can improve the performance of representation learning tasks. In this paper, we boost the performance of deep metric learning (DML) models with adversarial examples generated by attacking two new objective functions: \textit{intra-class alignment} and \textit{hyperspherical uniformity}. These two new objectives come from our theoretical and empirical analysis of the tuple-based metric losses on the hyperspherical embedding space. Our analytical results reveal that a) the metric losses on positive sample pairs are related to intra-class alignment; b) the metric losses on negative sample pairs serve as uniformity regularization on hypersphere. Based on our new understanding on the DML models, we propose Adversarial Deep Metric Learning model with adversarial samples generated by Alignment or Uniformity objective (ADML+A or U). With the same network structure and training settings, ADML+A and ADML+U consistently outperform the state-of-the-art vanilla DML models and a baseline model, adversarial DML model with attacking triplet objective function, on four metric learning benchmarks.
Reject
While the paper has merits, I generally agree with the negative reviewers. Among other issues, there were concerns that the theoretical contribution overlaps with prior work. The authors argued that the theoretical analysis itself is not the main contribution, but rather that designing ADML is. If this is the case, the paper should be rewritten to deemphasize the less novel contribution and focus more on what the authors believe to be the novel contribution. I do not believe in the practice of putting different messages (some novel and some not) into a paper with the hope that this makes the overall result "more novel". I suggest the authors rewrite the paper and be clearer about its message.
train
[ "-0vhqggt-p", "6wGfuNoKKcC", "gSnnETC0IaH", "-pVvPR36uR", "T1OtlFuufX6", "MNwEeeoUaN2", "nU9zR4HGsJ5", "yX1wNwgNlgH", "MQBngg-5tsX", "JW6OA4XNlc1", "BULg4_6H6Gf", "sIhgt-rBLiq", "ZLmD61XRHLN", "U5JWA_H5tBy" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer UWyt,\n\nThank you again for the valuable comments. We hope the reviewer can read our response and reevaluate our paper based on our response and the revised paper. Please let us know if you have further questions about our paper and we look forward to hearing from you.\n\nSincerely, Paper1943 Autho...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 5, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 3 ]
[ "U5JWA_H5tBy", "sIhgt-rBLiq", "ZLmD61XRHLN", "JW6OA4XNlc1", "iclr_2022_DkeCkhLIVGZ", "BULg4_6H6Gf", "U5JWA_H5tBy", "ZLmD61XRHLN", "sIhgt-rBLiq", "MNwEeeoUaN2", "iclr_2022_DkeCkhLIVGZ", "iclr_2022_DkeCkhLIVGZ", "iclr_2022_DkeCkhLIVGZ", "iclr_2022_DkeCkhLIVGZ" ]
iclr_2022_hyuacPZQFb0
A Systematic Evaluation of Domain Adaptation Algorithms On Time Series Data
Unsupervised domain adaptation methods aim to generalize well on unlabeled test data that may have a different (shifted) distribution from the training data. Such methods are typically developed on image data, and their application to time series data is less explored. Existing works on time series domain adaptation suffer from inconsistencies in evaluation schemes, datasets, and base neural network architectures. Moreover, labeled target data are usually employed for model selection, which violates the fundamental assumption of unsupervised domain adaptation. To address these issues, we propose AdaTime, a standard framework to systematically and fairly evaluate different domain adaptation methods on time series data. Specifically, we standardize the base neural network architectures and benchmarking datasets, while also exploring more realistic model selection approaches that can work with no labeled data or few labeled samples. Our evaluation includes adaptations of state-of-the-art visual domain adaptation methods to time series data in addition to recent methods specifically developed for time series data. We conduct extensive experiments to evaluate 10 state-of-the-art methods on 3 representative datasets spanning 15 cross-domain scenarios. Our results suggest that with careful selection of hyper-parameters, visual domain adaptation methods are competitive with methods proposed for time series domain adaptation. In addition, we find that model selection plays a key role and different selection strategies can significantly affect performance. Our work unveils practical insights for applying domain adaptation methods on time series data and builds a solid foundation for future works in the field.
Reject
This work aims at giving a systematic evaluation of different unsupervised domain adaptation methods on time series classification tasks under a fair setting. By providing extensive experiments on various datasets, competitive baselines, and model selection approaches, this paper has the potential to facilitate future research on this topic if the mentioned concerns are well addressed. After rebuttal and discussion, the final scores were 3/5/5/5/5. AC considered all reviews, author responses, and the discussions, as well as reading through the paper as a neutral referee, and reject the paper based on the following concerns: + *Model Selection Criterion*: As stated by the authors, employing labeled target data for model selection will violate the fundamental assumption of unsupervised domain adaptation. However, the proposed Few-Shot Target Risk (FST Risk) also requires labeling a few target domain samples. If it is possible, why not directly conduct semi-supervised domain adaptation? + *Experiment Details*: As a benchmark paper, it is extremely important to carefully design the experiment details to attain promising results. Among these details, a suitable network backbone for time series classification (Is CNN or ResNet-18 the best choice? Or TCN mentioned by Reviewer f7Xp), large-scale datasets with considerable domain gap, and evaluation metrics are the first consideration to attain insightful findings. + *Novelty or Interesting Findings*: As pointed out by reviewers, it is obvious that the technical novelty is limited but it may be okay for a benchmark paper if solid/interesting experimental results are observed. However, some of the findings are also fragile and the experiments should be carefully conducted to make them more solid. In summary, this paper studies a promising research direction of domain adaptation, but the work cannot be accepted before addressing the reviewers' comments. 
The weaknesses mentioned above are very likely to be raised by the reviewers at the next conference, so the authors should make sure to substantially revise their work before submitting it to another venue.
train
[ "tOmo5IQHfP", "9uX19zBRNhx", "Iybx6FWi82G", "Tld6Afnc1GZ", "LPk0-lo56LW", "U3UKUlRDzfb", "jyGnT0YaihG", "aAINXBDfKME", "kCIWvILqdd", "bE0NgItNWaH", "rp-ylKWr1xt", "RvVStI_barD", "Qd7IWsNKiGT", "9xThFg0kc5D", "DGqEmLbOdrh", "vzWd_d9xda7", "P3BKBEvNJf8", "irSArXO3YvI" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewers again for their constructive and valuable comments, which have contributed to improving the quality of our manuscript. As the discussion period is closing soon, we would appreciate it if the reviewers can provide any further feedback or follow-up to our first-round response.\n\nWe look forw...
[ -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 5, 5 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 5 ]
[ "iclr_2022_hyuacPZQFb0", "Tld6Afnc1GZ", "iclr_2022_hyuacPZQFb0", "LPk0-lo56LW", "U3UKUlRDzfb", "kCIWvILqdd", "irSArXO3YvI", "P3BKBEvNJf8", "bE0NgItNWaH", "Iybx6FWi82G", "vzWd_d9xda7", "Qd7IWsNKiGT", "DGqEmLbOdrh", "iclr_2022_hyuacPZQFb0", "iclr_2022_hyuacPZQFb0", "iclr_2022_hyuacPZQFb0...
iclr_2022_8XM-AXMnAk_
Deep Active Learning by Leveraging Training Dynamics
Active learning theories and methods have been extensively studied in classical statistical learning settings. However, deep active learning, i.e., active learning with deep learning models, is usually based on empirical criteria without solid theoretical justification, which casts heavy doubt on these criteria when some of them fail to provide benefits in applications. In this paper, by exploring the connection between the generalization performance and the training dynamics, we propose a theory-driven deep active learning method (dynamicAL) which selects samples to maximize the training dynamics. In particular, we prove that the convergence speed of training and the generalization performance are positively correlated under the ultra-wide condition, and we show that maximizing the training dynamics leads to better generalization performance. Furthermore, to scale up to large deep neural networks and data sets, we introduce two relaxations for the subset selection problem and reduce the time complexity from polynomial to constant. Empirical results show that dynamicAL not only outperforms the other baselines consistently but also scales well to large deep learning models. We hope our work inspires more attempts at bridging the theoretical findings on deep networks and practical impact in deep active learning applications.
Reject
This paper proposes a novel strategy for deep active learning based on the training dynamics of the underlying deep model, defined as the derivative of the loss of the ultra-wide NTK. All reviewers enjoy the clean story and motivation of the proposed acquisition/objective function and appreciate the authors' effort in providing theoretical justification and analysis. One note is that -- as Reviewer 8dDv highlighted -- part of the analysis pertaining to the incompatibility between the generalization bound of NTK and the non-iid nature of active learning relies on numerical evidence: the MMD under the covariate-shift setting (i.e., assuming that the conditional distribution P(Y|X) remains consistent) is shown empirically to be smaller than the dominant term of the generalization bound. This serves as a reasonable empirical motivation/justification of the dynamicAL heuristic under the AL setting, but I would suggest the authors be more precise in the abstract/intro that this is an empirical result. While the theoretical results are interesting, not all reviewers are convinced that the experimental results are sufficiently compelling. In particular, Reviewer YgGb points out that the significant performance boost reported in the main paper was mainly due to the non-retraining constraint (i.e., not retraining the model from scratch) imposed by the problem setup. Reviewer p3z9 shares the concern that such a setting would be far from realistic, at least for the datasets/labels considered in the experiments. The authors refer to Ostapuk et al., 2018 as a justification of the non-retraining setting; yet that work assumed a high budget, e.g., up to 50% of all labels of the datasets. In summary, this is a theoretically well-motivated work, but the empirical components need to be further clarified and supported with more realistic experiments for the proposed solution to merit acceptance.
train
[ "NnDzMVcfmjy", "Q_QaH28IPML", "EL8Tmm_82dk", "CNbbNsSL1yI", "bg5-ZLT_Ua1", "AZkfEIeqBB4", "i8yPDzFn7M8", "1E9rfUwPi-", "TkIN53Nf0xo", "XJgcrUGgkPB", "H0t8CgJQLjo", "OzW10KaEWjQ", "LZJU7BVoI2", "vUPrligI4OH", "Fr0Inua5jG", "sCpP3GvpUaP", "1Ibq2-2byv7", "EFYtrRsMt89", "nwM2fUTYHpg"...
[ "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_r...
[ " Dear Reviewers,\n  \n\nWe thank the reviewers for their comments and suggestions that helped us to improve the manuscript. For easy reference of reviewers and ACs, all changes have been highlighted in blue. Some of the main changes include:\n\n1. Provided the derivations of Equation (13) and (15) in Appendix...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 3 ]
[ "iclr_2022_8XM-AXMnAk_", "EL8Tmm_82dk", "CNbbNsSL1yI", "bg5-ZLT_Ua1", "i8yPDzFn7M8", "1E9rfUwPi-", "TkIN53Nf0xo", "sCpP3GvpUaP", "vUPrligI4OH", "H0t8CgJQLjo", "LZJU7BVoI2", "FDVX1FMmCj", "1Ibq2-2byv7", "Fr0Inua5jG", "IoIpj90QAk", "nwM2fUTYHpg", "EFYtrRsMt89", "iclr_2022_8XM-AXMnAk_...
iclr_2022_lKcq2fe-HB
Metrics Matter: A Closer Look on Self-Paced Reinforcement Learning
Curriculum reinforcement learning (CRL) makes it possible to solve complex tasks by generating a tailored sequence of learning tasks, starting from easy ones and subsequently increasing their difficulty. However, the generation of such task sequences is largely governed by application-specific assumptions, often preventing a theoretical investigation of existing approaches. Recently, Klink et al. (2021) showed how self-paced learning induces a principled interpolation between task distributions in the context of RL, resulting in high learning performance. So far, this interpolation is unfortunately limited to Gaussian distributions. Here, we show that on the one hand, this parametric restriction is insufficient in many learning cases, but that on the other, the interpolation of self-paced RL (SPRL) can be degenerate when not restricted to this parametric form. We show that introducing concepts from optimal transport into SPRL prevents the aforementioned issues. Experiments demonstrate that the resulting introduction of metric structure into the curriculum allows for a well-behaved non-parametric version of SPRL that leads to stable learning performance across tasks.
Reject
The paper changes the metric in self-paced reinforcement learning to be a Wasserstein distance and shows that this outperforms other metrics in simple toy-like experiments. Even after discussions with the authors, two major concerns were identified with this submission: First, the proposed modification of the metric appears to be rather incremental with regards to the original paper. Second, the proposed method is only evaluated on relatively simple environments. The approach should be evaluated on more difficult tasks. Given that there was no strong champion for acceptance among reviewers of this paper and the above mentioned limitations, I recommend rejecting this paper.
train
[ "0cYA8C3VDjU", "zBugXjxaks", "EYaYkr0609J", "tqoFJ4BUgrf", "-XGDQGG5ZcJ", "QCPLcZd8bi3", "94V_hlgjBLj", "M_mQeuKkt0C", "QXDvaS0u4Z", "bnjgqPA0hjQ", "USxFpHhlhTQ", "tgBOgMfiN5", "srsUkHF9xto" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors extend the Self-Paced Reinforcement Learning, which samples environment instances from a distribution that shifts from an (easy) starting to a (hard) target distribution. The algorithm has previously been shown to enable agents to solve hard environments. Previously, the target distribution was limited...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3 ]
[ "iclr_2022_lKcq2fe-HB", "-XGDQGG5ZcJ", "94V_hlgjBLj", "iclr_2022_lKcq2fe-HB", "QCPLcZd8bi3", "0cYA8C3VDjU", "M_mQeuKkt0C", "QXDvaS0u4Z", "srsUkHF9xto", "USxFpHhlhTQ", "tgBOgMfiN5", "iclr_2022_lKcq2fe-HB", "iclr_2022_lKcq2fe-HB" ]
iclr_2022_7qaCQiuOVf
Interpreting Reinforcement Policies through Local Behaviors
Many works in explainable AI have focused on explaining black-box classification models. Explaining deep reinforcement learning (RL) policies in a manner that could be understood by domain users has received much less attention. In this paper, we propose a novel perspective to understanding RL policies based on identifying important states from automatically learned meta-states. The key conceptual difference between our approach and many previous ones is that we form meta-states based on locality governed by the expert policy dynamics rather than based on similarity of actions, and that we do not assume any particular knowledge of the underlying topology of the state space. Theoretically, we show that our algorithm to find meta-states converges and the objective that selects important states from each meta-state is submodular leading to efficient high quality greedy selection. Experiments on three domains (four rooms, door-key and minipacman) and a carefully conducted user study illustrate that our perspective leads to better understanding of the policy. We conjecture that this is a result of our meta-states being more intuitive in that the corresponding important states are strong indicators of tractable intermediate goals that are easier for humans to interpret and follow.
Reject
This paper presents a new perspective for understanding reinforcement learning policies based on meta-states, as an effort to improve the explainability of RL control policies. After reviewing the revised paper and reading the comments from the reviewers, here are my comments: - The paper is well-written and very concise. - The strategy is novel and deserves merit. - The utility of the explanation is not well described. - The main concerns of the proposal are the utility of the explanation (that is not well described) and its usage in large discrete state spaces or continuous state spaces domains. From the above, it is difficult to see the contribution and applicability of the paper in a clear manner.
train
[ "Fq9RVy7WRYC", "s-HD97v1X4q", "L1FBucOocE", "6xjauSZXFPd", "u7UvJhZCNLo", "MpxCmvIV5v", "64crt114yhv", "Luh-2X8GG41", "PDL3u2d05ub", "YvAqc98a93", "CDtEZgPZ38", "w9x6p5kc5Rf", "k1IRQWr3_Pu", "mSRuvNxh8c9", "v99mAgWB4pCc", "2NQrjlxwfaN", "ZtpNco8lZi", "tBGwqxHUwgi", "UYdpm-pTVuk",...
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author...
[ " Dear Reviewer u9SV, \n\nYou had a positive impression of our paper before the rebuttal, and we would very much like to hear if our responses further improved your view of our work. We understand that you are likely very busy, but please find a moment to verify if our responses and changes to the paper address you...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4, 3 ]
[ "1iZlavZzbrw", "L1FBucOocE", "u7UvJhZCNLo", "iclr_2022_7qaCQiuOVf", "evArMs3q1fh", "1iZlavZzbrw", "rOlwhPozcq8", "PDL3u2d05ub", "-Bs2docOHBX", "CDtEZgPZ38", "w9x6p5kc5Rf", "k1IRQWr3_Pu", "TtsEjvAW3uik", "wgeHoqK1YJT_", "2NQrjlxwfaN", "evArMs3q1fh", "tBGwqxHUwgi", "UYdpm-pTVuk", "...
iclr_2022_97ru13Fdmbt
Monotonicity as a requirement and as a regularizer: efficient methods and applications
We study the setting where risk minimization is performed over general classes of models and consider two cases where monotonicity is treated as either a requirement to be satisfied everywhere or a useful property. We specifically consider cases where point-wise gradient penalties are used alongside the empirical risk during training. In our first contribution, we show that different choices of penalties define the regions of the input space where the property is observed. As such, previous methods result in models that are monotonic only in a small volume of the input space. We thus propose an approach that uses mixtures of training instances and random points to populate the space and enforce the penalty in a much larger region. As a second contribution, we introduce the notion of monotonicity as a regularization bias for convolutional models. In this case, we consider applications, such as image classification and generative modeling, where monotonicity is not a hard constraint but can help improve some aspects of the model. Namely, we show that using group monotonicity can be beneficial in several applications such as: (1) defining strategies to detect anomalous data, (2) allowing for controllable data generation, and (3) generating explanations for predictions. Our proposed approaches do not introduce significant computational overhead while leading to efficient procedures that provide extra benefits over baseline models.
Reject
This paper proposes a new approach to enforce monotonicity in the context of risk minimization, or to promote it as an inductive bias. This improves upon existing point-wise gradient based methods by expanding the region where monotonicity is enforced. Group monotonicity is found valuable as a regularization for convolutional models, and multiple applications were shown where the approach appears effective. The paper is well written, and received detailed discussion. Despite the rebuttal, some major concerns remain, such as the drop in accuracy and the empirical estimate of the probability that Definition 1 would not hold over the distribution in question. Overall, revisions are needed to make the paper publishable.
val
[ "Mjkz9SbjTId", "6bkFJCqJ1se", "0cOFUGKM-7L", "woG3oyUFl8", "NY2mrGdpcvh", "86d8FPgr80G", "HUCegMZyXpi", "urIQ7R8ntTd", "uxX7KVreJ4O", "BD7m9GETglu", "i6hB1JASbcM", "uwpCcCvman", "mBF9SHLpD_7", "GRUkpW5am8j", "kGwQXk1V--N", "a_OBmY9V7vb", "BWR0nmGNCm", "ZgdDEhRz4rf" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for engaging with us in the discussion and for the suggestions; we believe they helped improve our paper significantly. We would greatly appreciate it if the reviewer would provide more specific feedback on exactly what is missing in our work. This way, we can improve the paper accordingly.", " Thank ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 4 ]
[ "6bkFJCqJ1se", "0cOFUGKM-7L", "woG3oyUFl8", "uxX7KVreJ4O", "iclr_2022_97ru13Fdmbt", "urIQ7R8ntTd", "uxX7KVreJ4O", "uwpCcCvman", "i6hB1JASbcM", "a_OBmY9V7vb", "BWR0nmGNCm", "ZgdDEhRz4rf", "GRUkpW5am8j", "kGwQXk1V--N", "iclr_2022_97ru13Fdmbt", "iclr_2022_97ru13Fdmbt", "iclr_2022_97ru13...
iclr_2022_z8xVlqWwRrK
EVaDE : Event-Based Variational Thompson Sampling for Model-Based Reinforcement Learning
Posterior Sampling for Reinforcement Learning (PSRL) is a well-known algorithm that augments model-based reinforcement learning (MBRL) algorithms with Thompson sampling. PSRL maintains posterior distributions of the environment transition dynamics and the reward function to procure posterior samples that are used to generate data for training the controller. Maintaining posterior distributions over all possible transition and reward functions for tasks with high dimensional state and action spaces is intractable. Recent works show that dropout used in conjunction with neural networks induces variational distributions that can approximate these posteriors. In this paper, we propose Event-based Variational Distributions for Exploration (EVaDE), variational distributions that are useful for MBRL, especially when the underlying domain is object-based. We leverage the general domain knowledge of object-based domains to design three types of event-based convolutional layers to direct exploration, namely the noisy event interaction layer, the noisy event weighting layer, and the noisy event translation layer. These layers rely on Gaussian dropouts and are inserted in between the layers of the deep neural network model to help facilitate variational Thompson sampling. We empirically show the effectiveness of EVaDE equipped Simulated Policy Learning (SimPLe) on a randomly selected suite of Atari games, where the number of agent environment interactions is limited to 100K.
Reject
This paper proposes to implement posterior sampling for reinforcement learning for MBRL using three types of noisy convolutional layers inspired by object- and event-based domain knowledge. These layers are used to augment the SimPLe agent (Kaiser et al, 2020), resulting in the EVaDE-SimPLe agent, and experiments demonstrate that EVaDE-SimPLe outperforms SimPLe on average across twelve Atari games. The reviewers' opinions on the paper were mixed. The reviews highlighted several strengths of the paper: that using posterior sampling for exploration in MBRL is well-motivated (Reviewers 9oaA, XiQT) and that the simplicity of the proposed layers is appealing (Reviewers trzP, XiQT). However, the reviewers also generally felt that the proposed method was overly specific to a particular domain (Reviewers gXzj, XiQT) and that there was not enough analysis demonstrating *why* the proposed layers work, in which cases they would not work, or why these modifications might be better than other similar modifications (Reviewers 9oaA, trzP, gXzj). Initially, there were also some concerns raised by Reviewer trzP about the validity of the evaluation due to the number of seeds, though these concerns were addressed by the authors during the rebuttal. I agree with the reviewers that the approach is interesting and that getting posterior sampling to work well in MBRL is an important problem. But I also find myself agreeing that the present approach is not analyzed in sufficient depth (the results are overly focused on just overall performance, rather than analyzing behaviors exhibited by the agents) and that it is unclear how well it would work in other domains (e.g. 3D settings). I therefore feel this work is not quite ready to be presented at ICLR, and recommend rejection.
train
[ "_P6YpLtcMA", "Hgl26AhQ8dl", "KzkRYgcbxYN", "pxtb0ouEuq", "gjpauTBX8_", "mtnfJVrlWIv", "FGUA9kNS6rx", "k5VcdE6SrQi", "sZWkhGVdEmu", "W_vddOz7-qq", "o3ezQbE9uey", "Pdj62ng7PS4", "pUDRK0SBKZN", "tgcCZmSM1y", "SG9ToenJPFx", "IFNq_DvgSNc" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank reviewer trzP for their reply and for increasing the score. \n\nWe would like to address the concern raised and clarify that EVaDE-SimPLe is not designed to be specific to Atari games only and can be used in any domain that deals with objects and interactions. Moreover, as domains with objects and intera...
[ -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 5, 6 ]
[ -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "Hgl26AhQ8dl", "k5VcdE6SrQi", "iclr_2022_z8xVlqWwRrK", "o3ezQbE9uey", "iclr_2022_z8xVlqWwRrK", "iclr_2022_z8xVlqWwRrK", "iclr_2022_z8xVlqWwRrK", "KzkRYgcbxYN", "IFNq_DvgSNc", "SG9ToenJPFx", "sZWkhGVdEmu", "tgcCZmSM1y", "iclr_2022_z8xVlqWwRrK", "iclr_2022_z8xVlqWwRrK", "iclr_2022_z8xVlqWw...
iclr_2022_TCl7CbQ29hH
CPT: Colorful Prompt Tuning for Pre-trained Vision-Language Models
Pre-Trained Vision-Language Models (VL-PTMs) have shown promising capabilities in grounding natural language in image data, facilitating a broad variety of cross-modal tasks. However, we note that there exists a significant gap between the objective forms of model pre-training and fine-tuning, resulting in a need for large amounts of labeled data to stimulate the visual grounding capability of VL-PTMs for downstream tasks. To address the challenge, we present Cross-modal Prompt Tuning (CPT, alternatively, Colorful Prompt Tuning), a novel paradigm for tuning VL-PTMs, which reformulates visual grounding into a fill-in-the-blank problem with color-based co-referential markers in image and text, maximally mitigating the gap. In this way, CPT enables strong few-shot and even zero-shot visual grounding capabilities of VL-PTMs. Comprehensive experimental results show that the prompt-tuned VL-PTMs outperform their fine-tuned counterparts by a large margin (e.g., 17.3% absolute accuracy improvement, and 73.8% relative standard deviation reduction on average with one shot in RefCOCO evaluation). All data and code will be made available to facilitate future research.
Reject
In my opinion, this is a cool idea, but could use a few more test settings to evaluate the general applicability of their method. It would be interesting to see if the method generalizes to a non-reference based task. Strengths: Novel method that explores the interaction of color masks for learning to prompt about regions in images by identifying the color region they correspond to. Paper contains extensive ablation studies & discussions. Weaknesses: Experimental results are run on uncommon benchmarks, making it difficult to compare to SOTA V+L methods. Consequently, it’s not clear that this method would generalize beyond visual grounding to tasks such as VQA or captioning.
train
[ "VxRqDaKRdVp", "KV0FZdEPJpJ", "T37DQTMc3bf", "EyNeZOZRfcj", "GA0tiwCPAUK", "IX04Mk9-lyq", "o5iVwU8iNW0", "uvUZy9l3d3H", "x-_aaD_tkL9", "Ova3cJKjRf", "n8Fi63yHnJ2", "9wTG-wD8Mp", "0KlLK7E-8DX", "4EPbPQNb7CX", "MA7cmzl_U3s", "6qKpkIfp1VA" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposed CPT, colorful prompt tuning for visual grounding tasks using the pre-trained V+L model. By adding color-based co-referential markers in both image and text, CPT makes visual ground as a fill-in-the-blank problem and mitigates the gap between pre-training and fine-tuning. The experiments are con...
[ 6, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6 ]
[ 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2022_TCl7CbQ29hH", "EyNeZOZRfcj", "iclr_2022_TCl7CbQ29hH", "GA0tiwCPAUK", "uvUZy9l3d3H", "o5iVwU8iNW0", "0KlLK7E-8DX", "4EPbPQNb7CX", "iclr_2022_TCl7CbQ29hH", "iclr_2022_TCl7CbQ29hH", "T37DQTMc3bf", "VxRqDaKRdVp", "6qKpkIfp1VA", "MA7cmzl_U3s", "iclr_2022_TCl7CbQ29hH", "iclr_2022_...
iclr_2022_GiddFXGDmqp
Spatially Invariant Unsupervised 3D Object-Centric Learning and Scene Decomposition
We tackle the problem of deep object-centric learning from a point cloud which is crucial for high-level relational reasoning and scalable machine intelligence. In particular, we introduce a framework, SPAIR3D, to factorize a 3D point cloud into a spatial mixture model where each component corresponds to one object. To model the spatial mixture model on point clouds, we derive the Chamfer Mixture Loss, which fits naturally into our variational training pipeline. Moreover, we adopt an object-specification scheme that describes each object’s location relative to its local voxel grid cell. Such a scheme allows SPAIR3D to model scenes with an arbitrary number of objects. We evaluate our method on the task of unsupervised scene decomposition. Experimental results demonstrate that SPAIR3D has strong scalability and is capable of detecting and segmenting an unknown number of objects from a point cloud in an unsupervised manner.
Reject
This paper introduces a VAE-based generative model of 3D point-clouds inspired by SPAIR that can do unsupervised segmentation, named SPAIR3D. The model uses both global and local latent variables to encode global scene structure as well as individual objects. The proposed model is relatively complex, but the presentation is overall clear. Experimental results on simple synthetic datasets look promising. However, one might argue that for these simple tasks a direct application of a simpler mixture of VAEs (such as IODINE) might be sufficient, so it would be informative to make a direct comparison between these methods and/or show results on a problem clearly out of the scope of these simpler methods (e.g. with high imbalance in the point clouds).
train
[ "22wR6qUV7ZB", "ZKNQRX_5a4e", "iqRmaxMkb7c", "XZQaSyrx8zJ", "VhrO3z-tjre", "PRkzgDtibml", "528DQd9K7qZ", "KJHbLWOm0jg", "Buxv1Ywov5o", "UT0ePazC9d", "7io4EB7Dw9J", "6BtbYu9qqq", "DhMt3rOON8", "Mn73E_FBiJx", "n1Gf-kbFmI5", "0H8JAwoOi8M", "-hC95VnOAMC", "By6cLifkSfZ", "UUp6Ocogvnn"...
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_r...
[ " 1. As pointed out by the reviewer yyA6, improving the performance of SPAIR3D on a real-world dataset can take some extra effort but is definitely possible.\nOne key design is to use a voxel grid with multiple scales and transform the generative model to its hierarchical version.\nNote that SPAIR3D is already a tw...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 2, 4, 4, 4 ]
[ "ZKNQRX_5a4e", "-hC95VnOAMC", "6BtbYu9qqq", "0H8JAwoOi8M", "iclr_2022_GiddFXGDmqp", "By6cLifkSfZ", "KJHbLWOm0jg", "Buxv1Ywov5o", "UT0ePazC9d", "6BtbYu9qqq", "0H8JAwoOi8M", "8JTvpyUnIF", "UUp6Ocogvnn", "n1Gf-kbFmI5", "5AzUyD7H7S", "DhMt3rOON8", "o_K9lRhqM2Q", "8t8MNYYEM", "iclr_20...
iclr_2022_G0CuTynjgQa
Generalization of GANs and overparameterized models under Lipschitz continuity
Generative adversarial networks (GANs) are highly complex, and little is known about their generalization. The existing learning theories lack efficient tools to analyze generalization of GANs. To fill this gap, we introduce a novel tool to analyze generalization: Lipschitz continuity. We demonstrate its simplicity by showing generalization and consistency of overparameterized neural networks. We then use this tool to derive Lipschitz-based generalization bounds for GANs. In particular, our bounds show that penalizing the zero- and first-order information of the GAN loss will improve generalization. Therefore, this work provides a unified theory for answering the long-standing mystery of why imposing a Lipschitz constraint can help GANs generalize well in practice.
Reject
This paper proposes to analyze the generalization error of deep learning models and GANs using the Lipschitz coefficient of the model. There were significant discrepancies in the evaluation of the paper among reviewers. While all reviewers acknowledged the interesting theoretical approach to understanding generalization and the relevance of the problem to ICLR, they disagreed about the readiness level of the paper. Some concerns were expressed in terms of clarity (and the AC agrees with these), but most importantly, reviewer wKt9 pointed out an important flaw in the current analysis that was not properly addressed by the authors (see below). In discussion, other reviewers were also concerned by this flaw, and so the AC decided to recommend a major revision of the paper taking the reviewers' comments into consideration. ## Important flaw in the paper analysis (from wKt9) Basically, Theorem 1 assumes that a loss $f(h,x)$ is $L$-Lipschitz w.r.t. input $x$ in some compact set of diameter $B$ for any $h$. The authors show that $\sup_{h \in H} |E_{P} f(h,X) - E_{\hat{P}} f(h,X)|$ is upper-bounded by $L B + C \sqrt{\text{stuff}/m}$. The concern of wKt9 is that the LHS is upper-bounded *trivially* and deterministically by the tighter $L B$ [see proof sketch below] for any distributions $P$ and $\hat{P}$, just because of the compactness of the input set and the fact that $f$ is $L$-Lipschitz; one does not even need to include the number of samples $m$ in the analysis (thanks to the very strong assumption on $f$).
The reviewer was also concerned that later (Theorem 3), the authors study ways that we can make $L$ exponentially small (which is interesting), but this has two issues: 1) it tells you nothing about the absolute performance of your network, as this only bounds the variation between any two distributions (indeed including the empirical and true distribution; but the fact that it also covers all distributions should indicate how loose this bound is!), and so perhaps the best empirical error one can obtain is still large; 2) the current version of Theorem 3 uses a loose bound with a dependence on $m$ which was not even needed (as per the result above). While it's true that empirically one can observe a small empirical error, and combining this with a small Lipschitz constant would then indicate good absolute performance, the current presentation of the theory is rendered quite problematic by the above refinement and should be corrected in a revision. ### Proof sketch: For simplicity, I'll prove it for $P$ being a discrete distribution and $\hat{P}$ being the empirical; but I'm pretty sure you can extend it to continuous distributions as well. Note that we have $|f(h,x) - f(h,x')| \leq L B$ for all $x, x'$ in the compact set of diameter $B$ and for all $h$. Now $$E_{P} f(h,X) - E_{\hat{P}} f(h,X) = \sum_j \pi_j f(h, x_j') - \frac{1}{m} \sum_i f(h,x_i)$$ For each $x_i$, associate several $x_j$'s so that the total sum of their probabilities is $1/m$ (split some $\pi_j$ into multiple pieces if necessary) -- we can augment the index set for these new pieces, to obtain new probabilities $\pi_j'$ and call $I_i$ the set of indices associated to $x_i$.
We have $\sum_{j \in I_i} \pi'_j = 1/m$ We thus have: $$E_{P} f(h,X) - E_{\hat{P}} f(h,X) = \sum_i \sum_{j \in I_i} \pi'_j \left[ f(h, x'_j) - f(h,x_i) \right]$$ Thus: $$|E_{P} f(h,X) - E_{\hat{P}} f(h,X)| \leq \sum_i \sum_{j \in I_i} \pi'_j \left| f(h, x'_j) - f(h,x_i) \right| \leq L B$$ This is true for any $h$, so this is also true for the $\sup$, *deterministically*! QED
test
[ "avKgyrKGGK_", "2hvokzoQggP", "Z8iWxaP_c2", "o_ATbPbC_a", "2c0T7UzXwyd", "jgt6qKyUwpP", "s31dFYvxmzg", "EKLucoo9q1e", "1GBJuH-puyD", "aE9okDGi30f", "AeInvPamgfU", "F0yN9pix9Uc", "3Z5UEgkFlyp", "cZU4FaqTTk4", "AwUoiunh0c-", "BOZbO6X0xFe", "sDbEmhlI-HQ", "R3kjDMzCR6l" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for providing further arguments. However, **your arguments are nonlogical**:\n\n- Consider a family $H$ with members having sufficiently small Lipschitz constants, you can show a small generalization gap for $H$. Note that one cannot conclude a similar generalization gap for Dropout DNNs, since $H$ does...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, 5, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 4 ]
[ "2hvokzoQggP", "1GBJuH-puyD", "iclr_2022_G0CuTynjgQa", "2c0T7UzXwyd", "aE9okDGi30f", "R3kjDMzCR6l", "sDbEmhlI-HQ", "sDbEmhlI-HQ", "R3kjDMzCR6l", "sDbEmhlI-HQ", "sDbEmhlI-HQ", "BOZbO6X0xFe", "cZU4FaqTTk4", "AwUoiunh0c-", "iclr_2022_G0CuTynjgQa", "iclr_2022_G0CuTynjgQa", "iclr_2022_G0C...
iclr_2022_RbVp8ieInU7
Low-rank Matrix Recovery with Unknown Correspondence
We study a matrix recovery problem with unknown correspondence: given the observation matrix $M_o=[A,\tilde P B]$, where $\tilde P$ is an unknown permutation matrix, we aim to recover the underlying matrix $M=[A,B]$. Such problems commonly arise in many applications where heterogeneous data are utilized and the correspondence among them is unknown, e.g., due to privacy concerns. We show that it is possible to recover $M$ via solving a nuclear norm minimization problem under a proper low-rank condition on $M$, with a provable non-asymptotic error bound for the recovery of $M$. We propose an algorithm, $\text{M}^3\text{O}$ (Matrix recovery via Min-Max Optimization), which recasts this combinatorial problem as a continuous minimax optimization problem and solves it by proximal gradient with a Max-Oracle. $\text{M}^3\text{O}$ can also be applied to a more general scenario where we have missing entries in $M_o$ and multiple groups of data with distinct unknown correspondence. Experiments on simulated data, the MovieLens 100K dataset and Yale B database show that $\text{M}^3\text{O}$ achieves state-of-the-art performance over several baselines and can recover the ground-truth correspondence with high accuracy.
Reject
Overall, this paper was discussed at length given the high variance in scores, and it was ultimately felt that the paper was borderline and that there was not enough enthusiasm to warrant acceptance. Several concerns raised in the discussion could not be resolved; in particular, the bounds might not be tight, or even useful, and more explanation of the dependence on the various parameters and of the assumptions involved is needed. Specifically, as pointed out by a reviewer, there is a concern about the parameter epsilon_3. It seems that for natural input distributions epsilon_3 would be so small that the upper bound would scale as n^3 (given the 1/epsilon_3^2 dependence), which is then trivial since it is larger than n. The reviewers were not satisfied with the authors' response regarding this.
train
[ "LdpZFq_NXe", "FfS8Wt3wTCC", "gBVgLd-0kuD", "IqqPJ4sNE7e", "Vu3ZkWUEkpn", "eC11cRWH0r_", "4uxIvYbqsZQ", "0bRi6_GcdPY", "OKPbPfqItk", "GriAZ7wjfBx", "q-4LPOQDcvR", "EoTyA5xS4jI", "6yXkkM7ALKf", "zFXHKmk_NN", "MXlj8FWKZAO", "OUQjppp3FHk", "6Rf2VkWOW5k", "9CZ_0UjS8pA", "FBTddoxoWdH"...
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", ...
[ " * We have proofread our paper carefully and polished the language for the revised version.\n* We have extended the reply to the Q1 of Reviewer Wx9p and added it as A.2 in the appendix so as to provide more information about the asymptotic behavior of Theorem 1.\n* We have added the discussion of the limit and fu...
[ -1, 5, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 10 ]
[ -1, 3, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2022_RbVp8ieInU7", "iclr_2022_RbVp8ieInU7", "IqqPJ4sNE7e", "eC11cRWH0r_", "iclr_2022_RbVp8ieInU7", "q-4LPOQDcvR", "0bRi6_GcdPY", "q-4LPOQDcvR", "FfS8Wt3wTCC", "iclr_2022_RbVp8ieInU7", "6yXkkM7ALKf", "OUQjppp3FHk", "OUQjppp3FHk", "6Rf2VkWOW5k", "lV-k9qyiyE_", "9CZ_0UjS8pA", "FfS...
iclr_2022_dtt435G80Ng
CSQ: Centered Symmetric Quantization for Extremely Low Bit Neural Networks
Recent advances in quantized neural networks (QNNs) are closing the performance gap with full precision neural networks. However, at very low precision (i.e., $\le 3$ bits), QNNs often still suffer significant performance degradation. The conventional uniform symmetric quantization scheme allocates unequal numbers of positive and negative quantization levels. We show that this asymmetry in the number of positive and negative quantization levels can result in significant quantization error and performance degradation at low precision. We propose and analyze a quantizer called centered symmetric quantizer (CSQ), which preserves the symmetry of the latent distribution by providing equal representations to the negative and positive sides of the distribution. We also propose a novel method to efficiently map CSQ to binarized neural network hardware using bitwise operations. Our analyses and experimental results using state-of-the-art quantization methods on ImageNet and CIFAR-10 show the importance of using CSQ for weights in place of the conventional quantization scheme at extremely low-bit precision (2$\sim$3 bits).
Reject
### Description The paper investigates the choice of a fixed quantization grid for weights. Namely, the paper observes that symmetric uniform quantization levels such as {-1.5,-0.5,0.5,1.5} lead to better results than non-symmetric ones, e.g. {-2,-1,0,1}. While it is a small thing, it can be appreciated that it is investigated systematically and pedantically, proposing an explanation and showing experimentally that the effect is consistently present in favour of symmetric quantization. While the improvement is small, it comes almost at no cost. A part of the contribution proposes an efficient implementation. ### Decision Reviewers and AC came to a consensus that the contribution of the paper is marginal. Symmetric quantization schemes themselves were already employed by many models, albeit without analysis or even a discussion of such a choice. The analysis presented in the paper was found unconvincing by the reviewers (see below). The efficient implementation follows from basic linear algebra (see below). The potential impact of the work was considered limited due to a rather marginal observed improvement. The average rating of the paper was 4.5. Therefore we must reject. ### Details Regarding the proposed analysis of CSQ, it is not clear why the number of quantization levels of an elementary product matters, given that these numbers are then summed over all corresponding input channels and spatial dimensions of a convolution kernel applied at a single location. It is questionable whether the number of these quantization levels indeed corresponds to the representation capacity. Finally, the paper fails to demonstrate the effect on binary (1-bit) networks. In this case the standard approach is to use {-1,1} weights and {-1,1} activations. The paper could investigate the case of {0,1} activations, where there would be 50% more unique possible outputs from the product, namely {-1,0,1}, to validate their hypothesis.
If the hypothesis holds, an improvement in the binary case would be observed. This is important since the binary case is known to be the hardest and since the respective recommendation of representations would be non-standard. It could be further questioned why the distribution of real-valued weights has any relevance (such as in the arguments in appendix E) if the model is trained from scratch: a training method need not keep any real-valued latent weights in the first place. The technical part in section 5 "efficient realization" adds very little, if anything, to the paper's contribution. Simple linear algebra suffices to see that $(W-0.5) \ast x = W \ast x - 0.5\, I \ast x,$ where $I$ is the kernel of ones of the same shape as $W$. It is clear that the convolution $I \ast x$ can be implemented efficiently (e.g. it is just a sum over channels followed by a separable, spatial-only convolution) and is not a bottleneck. Final details, such as whether to slice by bits and use popcount or to use 8-bit addition, depend very much on the choice of the bit-packed representation and the hardware available, and it would be known to engineers in the field how to implement this efficiently.
train
[ "s5Oxg4A4fMx", "KW_6EirIrx", "wmfRu9F139l", "aVvU9Pcy9bE", "0XB0B9pT1S2", "v549iLYXEX3", "NUqS9lYl23", "PgPGBu3v4GR", "wCHm3wH2m0m", "Kr7FewT9Af3", "brXauWYZ0bh", "K99GLQJiOV", "Q4O8GRCiLru" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I appreciate the efforts the authors spent on answering my questions. The author's response addresses some of my concerns and I will discuss them with other reviewers and AC to have a fair decision.", " Q4. ”In the experiment, it is explained that CSQ is applied only to weight quantization.” \n\n \nA4. We would...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5, 4 ]
[ "PgPGBu3v4GR", "Q4O8GRCiLru", "Q4O8GRCiLru", "K99GLQJiOV", "K99GLQJiOV", "brXauWYZ0bh", "Kr7FewT9Af3", "Kr7FewT9Af3", "iclr_2022_dtt435G80Ng", "iclr_2022_dtt435G80Ng", "iclr_2022_dtt435G80Ng", "iclr_2022_dtt435G80Ng", "iclr_2022_dtt435G80Ng" ]
iclr_2022_Vq_QHT5kcAK
Greedy Bayesian Posterior Approximation with Deep Ensembles
Ensembles of independently trained neural networks are a state-of-the-art approach to estimate predictive uncertainty in Deep Learning, and can be interpreted as an approximation of the posterior distribution via a mixture of delta functions. The training of ensembles relies on non-convexity of the loss landscape and random initialization of their individual members, making the resulting posterior approximation uncontrolled. This paper proposes a novel and principled method to tackle this limitation, minimizing an $f$-divergence between the true posterior and a kernel density estimator in a function space. We analyze this objective from a combinatorial point of view, and show that it is submodular with respect to mixture components for any $f$. Subsequently, we consider the problem of greedy ensemble construction, and from the marginal gain of the total objective, we derive a novel diversity term for ensemble methods. The performance of our approach is demonstrated on computer vision out-of-distribution detection benchmarks in a range of architectures trained on multiple datasets. The source code of our method is made publicly available.
Reject
The authors present a new framework to make deep ensembles provide better coverage of the posterior and be less reliant on initialisation. The authors generally did a good job presenting their approach, avoiding dubious claims that deep-ensembles are non-Bayesian, and instead focusing on ways in which deep ensembles can be improved, practically and theoretically. It is worth noting in a revised version, however, that many approximate inference procedures do not have theoretical guarantees. The claim that deep ensembles have "arbitrary bad approximation guarantees" is vague and appears to single them out in a way that could confuse the reader. Regarding priors, it is also worth noting that Wilson & Izmailov (2020) provide evidence that the prior in weight space induces a prior in function space with useful properties, although the prior can be improved. The authors do a good job of responding to reviewers, and describing limitations. Ultimately, however, the general opinion was not swayed to accept. In addition to reviewer concerns, the experimental evaluation could be substantially improved. There are several procedures that build on deep ensembles to capture uncertainty within modes. How does this procedure compare? Why are no likelihood evaluations considered? What about accuracy? In its present form, it's unclear what practical value the contributions are providing, besides possibly better OOD detection, but even that direction is explored in a relatively limited way. It could also be interesting to measure the distance of the predictive distribution to a good proxy for the Bayesian model average. Overall, there are the raw ingredients of a good paper here, and the authors are encouraged to continue with this work.
val
[ "KPJcuDGrQ-5", "Z27qhSOfft", "BVx7ExjgB5D", "bfwM-6yzqwv", "DvuaAIpbkFy", "b5O28BTwDqn", "ALuH2T4ou3U", "eqCbd1W4XAD", "OWGpBqiIqp0", "2UL28GUYC_", "117qNiEsKUs" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for the further clarification. After reading the rebuttal, I still feel positive about this paper, but, honestly, my confidence is low. So, I'd like to further discuss with the other reviewers to reach a consensus.", " Apologies for too many revisions. We found another disturbing typo, which...
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "bfwM-6yzqwv", "BVx7ExjgB5D", "eqCbd1W4XAD", "117qNiEsKUs", "2UL28GUYC_", "ALuH2T4ou3U", "OWGpBqiIqp0", "iclr_2022_Vq_QHT5kcAK", "iclr_2022_Vq_QHT5kcAK", "iclr_2022_Vq_QHT5kcAK", "iclr_2022_Vq_QHT5kcAK" ]
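The greedy ensemble construction described in the record above (add, at each step, the mixture component with the largest marginal gain of a submodular objective) can be sketched generically. The `coverage` objective and candidate names below are illustrative stand-ins, not the paper's f-divergence-based diversity term:

```python
def greedy_ensemble(candidates, gain, k):
    """Greedy maximization by marginal gain; for a monotone submodular
    `gain` this scheme enjoys the classic (1 - 1/e) approximation guarantee."""
    chosen = []
    for _ in range(k):
        best = max(
            (c for c in candidates if c not in chosen),
            key=lambda c: gain(chosen + [c]) - gain(chosen),
        )
        chosen.append(best)
    return chosen


# Toy usage with set coverage, a standard monotone submodular function.
members = {"a": {1, 2, 3}, "b": {3, 4}, "c": {1}}
coverage = lambda S: len(set().union(*(members[m] for m in S))) if S else 0
print(greedy_ensemble(list(members), coverage, 2))  # "a" first, then "b"
```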
iclr_2022_fM8VzFD_2-
Discovering the neural correlate informed nosological relation among multiple neuropsychiatric disorders through dual utilisation of diagnostic information
The unravelled nosological relation among diverse types of neuropsychiatric disorders serves as an important precursor in advocating the dimensional approach to psychiatric classification. Leveraging high-dimensional abnormal resting-state functional connectivity, the crux of mining the corresponding nosological relations is to derive a low-dimensional embedding space that preserves the diagnostic attributes of the represented disorders. To accomplish this goal, we seek to exploit the available diagnostic information in learning the optimal embedding space by proposing a novel type of conditional variational auto-encoder that incorporates dual utilisation of diagnostic information. Encouraged by the promising results achieved in challenging the conventional approaches to low-dimensional density estimation of synthetic functional connectivity features, we further apply our approach to two empirical neuropsychiatric neuroimaging datasets and discover a reliable nosological relation among autism spectrum disorder, major depressive disorder, and schizophrenia.
Reject
This paper proposes a novel variational autoencoder to utilize functional connectivity (FC) features from resting state fMRI (rs-fMRI) scans in order to uncover latent nosological relationships between diverse yet related neuropsychiatric disorders. The methodology and main technical contributions are clearly articulated and explained, and the experimental results seem convincing. On the other hand, the proposed framework is somewhat limited in scope and clinical applicability, and the writing in the paper needs improvement (as pointed out by two reviewers).
train
[ "DH4VV97L_JZ", "0-PjA-r6h81", "4txv8VtuIEz", "BoVJvbKnMJL", "ok3J-xb6KOf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper presents a method to reduce the dimensions of resting-state functional connectivity for psychiatry disorders such as autism spectrum disorder, major depressive disorder, and schizophrenia. Their method is based on a conditional variational auto-encoder that utilized the diagnostic label. They evaluated ...
[ 1, 8, 6, 6, 5 ]
[ 5, 4, 3, 2, 3 ]
[ "iclr_2022_fM8VzFD_2-", "iclr_2022_fM8VzFD_2-", "iclr_2022_fM8VzFD_2-", "iclr_2022_fM8VzFD_2-", "iclr_2022_fM8VzFD_2-" ]
iclr_2022_f3qFAV_MH-C
Transfer and Marginalize: Explaining Away Label Noise with Privileged Information
Supervised learning datasets often have privileged information, in the form of features which are available at training time but are not available at test time e.g. the ID of the annotator that provided the label. We argue that privileged information is useful for explaining away label noise, thereby reducing the harmful impact of noisy labels. We develop a simple and efficient method for supervised neural networks: it transfers the knowledge learned with privileged information via weight sharing and approximately marginalizes over privileged information at test time. Our method, TRAM (TRansfer and Marginalize), has minimal training time overhead and has the same test time cost as not using privileged information. TRAM performs strongly on CIFAR-10H, ImageNet and Civil Comments benchmarks.
Reject
The manuscript proposes the TRAM (TRansfer and Marginalize) method, which integrates the privileged information into the learned network weights through weight sharing at training time and approximately marginalizes over the privileged information at test time. TRAM can also be combined with methods for dealing with noisy labels, distillation (Distilled-TRAM) and heteroscedastic output layers (Het-TRAM). Experiments are performed on both realistic and synthetic datasets including CIFAR-10H, re-labeled ImageNet, and Civil Comments Identities. Reviewers agreed on several positive aspects of the manuscript, including: 1. The proposed methods have simple architectures (not requiring specific modules, e.g., Gaussian dropout [Lambert et al., 2018], for the marginalization); 2. The proposed method can in principle be applied to any neural network model and has zero overhead at prediction time. Reviewers also highlighted several major concerns, including: 1. The analysis is performed on edge cases such as linear and non-linear sine models. There is no analysis for the classification case that this manuscript is targeted for. The simple cases are only true when the feature extraction network is kept unchanged during training; 2. Empirically, the experiments are conducted in a limited and counter-intuitive way; 3. A lack of empirical evidence suggesting that the representations learned with access to privileged information are more robust against label noise; 4. A lack of quantitative (or even qualitative) evidence about how, how much, and what kind of privileged information is transferred through weight sharing in realistic deep neural network models.
Several new experiments have been added to show, among other things, that representations learned with privileged information outperform representations learned without access to privileged information (using a linear classification model on ImageNet), and to better understand, quantitatively and qualitatively, how and how much privileged information is transferred in realistic deep networks. Post-rebuttal, reviewers stayed with borderline ratings and suggested further improvements: simulating more annotators by using different checkpoints and/or different hyperparameters, collecting a real-world large-scale dataset such that the privileged information is insignificantly expensive to obtain along with the main annotations, and disentangling the effect of the pretraining model on the denoising method.
train
[ "Uwn8E1BAQb", "vBFcW1hDxTO", "3RffeSUz4-7", "xtVPaLR8gkx", "Y-Ga4LPTzBN", "SEzGyrLMlm", "XUBqA_95ZdE", "drejkR9snHy", "RTvuzNIja1T", "P1OCn-I7fQf", "zon_iSTQ8NO", "hic_6gm037Y", "2VPeTTwdL-x", "c_0hM1dARig", "nvt7rOhHAr", "0YY5r38tRBd", "F96Cee6vqG", "9_w0rkngAjY" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_rev...
[ "This paper suggests a notion that using additional information called privilege information (PI) from annotators will help explain away the noise of the label they annotated and propose a way to implement that idea. The authors give the intuition from a simple linear model and a non-linear sine model that PI is us...
[ 3, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 3, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2022_f3qFAV_MH-C", "P1OCn-I7fQf", "iclr_2022_f3qFAV_MH-C", "nvt7rOhHAr", "c_0hM1dARig", "XUBqA_95ZdE", "drejkR9snHy", "RTvuzNIja1T", "hic_6gm037Y", "zon_iSTQ8NO", "2VPeTTwdL-x", "9_w0rkngAjY", "Uwn8E1BAQb", "F96Cee6vqG", "3RffeSUz4-7", "iclr_2022_f3qFAV_MH-C", "iclr_2022_f3qFAV...
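TRAM's core recipe, as described in the abstract and meta-review above (share weights with a head that also sees privileged information at training time, then approximately marginalize the privileged information out at test time), can be illustrated on a linear toy model. Everything below (the annotator-bias data generator, the least-squares fit) is an illustrative stand-in under assumed toy settings, not the paper's neural-network implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, A = 2000, 5, 4
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
annotator = rng.integers(0, A, size=n)          # privileged information (PI)
bias = np.array([0.0, 2.0, -2.0, 0.5])          # per-annotator label bias
y = X @ w_true + bias[annotator] + 0.1 * rng.normal(size=n)

# Training: shared weights on x plus a PI-dependent offset; the offset
# "explains away" the annotator-specific label noise instead of letting
# it corrupt the shared weights.
Z = np.hstack([X, np.eye(A)[annotator]])
theta, *_ = np.linalg.lstsq(Z, y, rcond=None)
w_hat, b_hat = theta[:d], theta[d:]

# Test time: PI is unavailable, so approximately marginalize over it by
# averaging the learned annotator offsets.
def predict(x):
    return x @ w_hat + b_hat.mean()

print(np.round(w_hat - w_true, 3))              # close to zero
```

The point of the toy: fitting `y` on `X` alone would absorb the annotator biases into noisier weight estimates, while the PI head soaks them up at training time and is cheaply averaged away at test time.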
iclr_2022_0Tnl8uBHfQw
Deep Classifiers with Label Noise Modeling and Distance Awareness
Uncertainty estimation in deep learning has recently emerged as a crucial area of interest to advance reliability and robustness in safety-critical applications. While there have been many proposed methods that either focus on distance-aware model uncertainties for out-of-distribution detection or on input-dependent label uncertainties for in-distribution calibration, both of these types of uncertainty are often necessary. In this work, we propose the HetSNGP method for jointly modeling the model and data uncertainty. We show that our proposed model affords a favorable combination between these two complementary types of uncertainty and thus outperforms the baseline methods on some challenging out-of-distribution datasets, including CIFAR-100C, Imagenet-C, and Imagenet-A. Moreover, we propose HetSNGP Ensemble, an ensembled version of our method which adds an additional type of uncertainty and also outperforms other ensemble baselines.
Reject
This paper studies the combination between model uncertainty and data uncertainty based on the spectral-normalized Gaussian process. Empirical results show the effectiveness of the proposed method. Overall, the paper is well-motivated and well-written. However, there are several concerns about the paper. (1) The novelty is marginal. The contribution of combining SNGP and heteroscedastic models into a single model may not be enough. (2) More analyses and insights are needed on why the mentioned two types of uncertainty are complementary. (3) More recent state-of-the-art methods on classification with noisy labels are suggested to be included to interest the readers. There are diverse scores. However, no one wants to champion the paper. We believe that the paper will be a strong one by addressing the concerns.
train
[ "CXxwf3t3M_Q", "lQfSYjBHvmZ", "yTteQe7q_Gi", "y45X95BhEL", "7O8WPb8f2qn", "Np-LU77qNri", "kp3lbLhMbib", "VGQhoz4x50X", "yiQT5Mqzze", "1w6QoBr3OD6", "XFVA-zs3Znr", "3Y9yj4hDkGP" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your responses.\n\nI think this work is well motivated. But after carefully reading the response, I still have concerns for both technical and empirical contributions which are as follows.\n\n+ The contribution of combining SNGP and heteroscedastic models into a single model may not be enough. As I af...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, 5, 8, 5 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "kp3lbLhMbib", "yiQT5Mqzze", "iclr_2022_0Tnl8uBHfQw", "7O8WPb8f2qn", "yTteQe7q_Gi", "kp3lbLhMbib", "3Y9yj4hDkGP", "XFVA-zs3Znr", "1w6QoBr3OD6", "iclr_2022_0Tnl8uBHfQw", "iclr_2022_0Tnl8uBHfQw", "iclr_2022_0Tnl8uBHfQw" ]
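The two uncertainty types the HetSNGP record discusses can be caricatured in a few lines: a distance-aware model (epistemic) term that vanishes near the training data and grows off-manifold, plus an input-dependent data (aleatoric) term. This is a deliberately crude sketch under assumed toy functions, not the actual SNGP posterior or heteroscedastic output layer:

```python
import numpy as np

def total_uncertainty(x, X_train, sigma_data, length_scale=1.0):
    """Toy decomposition of predictive uncertainty:
    - model (epistemic): RBF-style distance awareness, ~0 near training
      data and -> 1 far away (a stand-in for an SNGP-like variance),
    - data (aleatoric): input-dependent label-noise level sigma_data(x)."""
    d2 = np.min(np.sum((X_train - x) ** 2, axis=1))
    epistemic = 1.0 - np.exp(-d2 / length_scale**2)
    return epistemic + sigma_data(x)

X_train = np.array([[0.0, 0.0], [1.0, 0.0]])
noise = lambda x: 0.1 * (1.0 + abs(x[0]))        # heteroscedastic noise
u_in = total_uncertainty(np.array([0.0, 0.0]), X_train, noise)
u_far = total_uncertainty(np.array([5.0, 5.0]), X_train, noise)
print(u_in, u_far)   # in-distribution: only data noise; OOD: both terms
```

The caricature makes the complementarity concrete: a purely heteroscedastic model would report `0.6` for the far point, while a purely distance-aware model would report `0.1` extra for the noisy in-distribution point; combining them captures both.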
iclr_2022_tDirSp3pczB
Sharp Learning Bounds for Contrastive Unsupervised Representation Learning
Contrastive unsupervised representation learning (CURL) encourages data representations to make semantically similar pairs closer than randomly drawn negative samples, and has been successful in various domains such as vision, language, and graphs. Although recent theoretical studies have attempted to explain its success by upper-bounding a downstream classification loss with the contrastive loss, these bounds are still not tight enough to explain an experimental fact: larger negative sample sizes improve classification performance. This study establishes a downstream classification loss bound with a tight intercept in the negative sample size. By regarding the contrastive loss as a downstream loss estimator, our theory not only improves the existing learning bounds substantially but also explains why downstream classification empirically improves with larger negative samples: the estimation variance of the downstream loss decays as the negative sample size grows. We verify that our theory is consistent with experiments on synthetic, vision, and language datasets.
Reject
The paper presents an analysis of the benefit of unsupervised contrastive learning for downstream classification tasks using the cross-entropy loss. Building on prior work, the authors show that the contrastive loss can be bounded in terms of the cross-entropy term and an “intercept” term which depends logarithmically on the number of negative samples per positive sample (for contrastive learning) rather than polynomially as in the prior work. There are several differences between the setting here and that of the prior work by Arora et al. (2019). First, the work here focuses only on cross-entropy loss and leverages the similarity of the loss structure between the contrastive loss and the cross-entropy loss. Second, the assumptions here are different, e.g., boundedness of the representation. Finally, the assumption that latent classes are the same as the label classes (which is not the case in the prior work) is significantly restrictive. The writing is poor and the presentation is not clear. Despite the title and various references to learning bounds in the abstract and the main text, there are no learning bounds in the paper. The main result is to bound the contrastive loss in terms of the cross-entropy loss under the assumption that the latent classes and the label classes coincide. The authors state that getting generalization bounds is routine and, therefore, they chose not to give them; I do not see how generalization bounds follow in a straightforward manner here, and even if they do, it is important to write them for completeness. The main contribution here is that the bounds depend logarithmically on K, the number of negative samples per positive sample, compared to sqrt{K} in the previous work. The previous bound, however, holds for Lipschitz losses as well, e.g., for the hinge loss. So the question remains whether this improvement holds only for the cross-entropy loss. Regardless, K is typically small in practical applications.
Even the experiments in the paper (Figure 7) suggest that the performance degrades for larger K even on simple tasks. So, the improvement is really somewhat insignificant. The reviewers were generally positive and appreciated the paper. However, in the light of comments above (of which I am quite certain), unfortunately, I am unable to accept the paper at this point. I believe the comments above (and from the other reviewers) will help improve the overall quality of the paper. I encourage the authors to incorporate the feedback and work towards a stronger submission.
train
[ "I-fRUsySvlx", "HVveKSn4IQ", "lfpJ3vBe4OL", "IrtyRDKy0w", "NUIW-ZZiVhz", "R1wFwG8kBdw", "X3KRYvk5Cil", "sEvIBQbgTLQ", "RIwF8zbBSle", "ok4wawRjsxg", "arOtOwMGk8G2", "i5lQgMMuLVr9", "FKQDhtSehjam", "RZ4pULPo9fx", "l7AEbshq9Td", "Tq_eEourTmG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I have read the rebuttal and I am happy to keep my current score. I look forward to future works based on this result!", "This work establishes a downstream classification loss bound for contrastive learning which shows that larger negative samples improve the classification performance. Existing works cannot e...
[ -1, 6, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, 8 ]
[ -1, 4, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "i5lQgMMuLVr9", "iclr_2022_tDirSp3pczB", "sEvIBQbgTLQ", "iclr_2022_tDirSp3pczB", "RIwF8zbBSle", "iclr_2022_tDirSp3pczB", "IrtyRDKy0w", "HVveKSn4IQ", "RZ4pULPo9fx", "l7AEbshq9Td", "X3KRYvk5Cil", "Tq_eEourTmG", "iclr_2022_tDirSp3pczB", "iclr_2022_tDirSp3pczB", "iclr_2022_tDirSp3pczB", "i...
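The log-scale intercept in the negative-sample size K discussed in the record above can be seen directly in an InfoNCE-style contrastive loss: when the representation's similarities carry no information (all equal), the loss collapses exactly to log(K + 1). The loss below is a standard textbook formulation, not the paper's exact objective:

```python
import numpy as np

def contrastive_loss(z, z_pos, z_negs, tau=1.0):
    """InfoNCE-style loss for one anchor, one positive, and K negatives."""
    logits = np.concatenate([[z @ z_pos], z_negs @ z]) / tau
    m = logits.max()                                  # stable log-sum-exp
    return float(-(logits[0] - m - np.log(np.exp(logits - m).sum())))

# With all similarities equal (uninformative representation), the loss
# equals the intercept log(K + 1), growing only logarithmically in K.
K = 63
z = np.full(4, 0.5)
loss = contrastive_loss(z, z, np.tile(z, (K, 1)))
print(loss, np.log(K + 1))   # both ≈ 4.1589
```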
iclr_2022_9W2KnHqm_xN
Successive POI Recommendation via Brain-inspired Spatiotemporal Aware Representation
POI vector representation (embedding) is the core of successive POI recommendation. However, existing approaches only rely on basic discretization and interval analyses and fail to fully exploit complicated spatiotemporal attributes of POIs. Neuroscience research has shown that the mammalian brain entorhinal-hippocampal system provides efficient graph representations for general knowledge. Moreover, entorhinal grid cells present concise spatial representations, while hippocampal place cells represent perception conjunctions effectively. Thus, the entorhinal-hippocampal system provides a novel angle for spatiotemporal aware representation, which inspires us to propose the SpatioTemporal aware Embedding framework (STE) and apply to POIs (STEP). STEP considers two types of POI-specific representations: sequential representation and spatiotemporal conjunctive representation, learned using sparse unlabeled data based on the proposed graph-building policies. Notably, the spatiotemporal conjunctive representation represents POIs from spatial and temporal aspects jointly and precisely. Furthermore, we introduce a user privacy secure successive POI recommendation method using STEP. Experimental results on two datasets demonstrate that STEP captures POI-specific spatiotemporal information more accurately and achieves the state-of-the-art successive POI recommendation performance. Therefore, this work provides a novel solution to spatiotemporal aware representation and paves a new way for spatiotemporal modeling-related tasks.
Reject
Despite some positive points, the criticisms (and overall scores) put this paper below the bar. The reviewers raise issues of novelty, as well as problems with the experiments and argue that some claims are unsupported.
test
[ "TAIv1gts9F", "RPC0wvTAn9O", "syACrp4kJaS", "5goNVdW4imL", "X2E45NszWw", "NAjrQxuX4v", "Nol86X4bHXJ", "okgYcZgSqP", "54moBcrQjkp", "zY0OVuDJUW", "kQQv6LR48Km" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the feedback! \n\nThe core of this paper is to bridge the spatiotemporal representation mechanism in the entorhinal-hippocampal circuit with practical spatiotemporal representation problems. Albeit the entorhinal-hippocampal structure has long been thought to be highly relevant to the outstanding perfo...
[ -1, -1, -1, -1, -1, -1, -1, 5, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 5, 4, 3, 4 ]
[ "RPC0wvTAn9O", "okgYcZgSqP", "iclr_2022_9W2KnHqm_xN", "kQQv6LR48Km", "zY0OVuDJUW", "54moBcrQjkp", "okgYcZgSqP", "iclr_2022_9W2KnHqm_xN", "iclr_2022_9W2KnHqm_xN", "iclr_2022_9W2KnHqm_xN", "iclr_2022_9W2KnHqm_xN" ]
iclr_2022_IEKL-OihqX0
Gradient-Guided Importance Sampling for Learning Discrete Energy-Based Models
Learning energy-based models (EBMs) is known to be difficult especially on discrete data where gradient-based learning strategies cannot be applied directly. Although ratio matching is a sound method to learn discrete EBMs, it suffers from expensive computation and excessive memory requirement, thereby resulting in difficulties for learning EBMs on high-dimensional data. In this study, we propose ratio matching with gradient-guided importance sampling (RMwGGIS) to alleviate the above limitations. Particularly, we leverage the gradient of the energy function w.r.t. the discrete data space to approximately construct the provable optimal proposal distribution, which is subsequently used by importance sampling to efficiently estimate the original ratio matching objective. We perform experiments on density modeling over synthetic discrete data and graph generation to evaluate our proposed method. The experimental results demonstrate that our method can significantly alleviate the limitations of ratio matching and perform more effectively in practice.
Reject
The paper introduces a way of making Ratio Matching (RM) scale better to high-dimensional data when training energy-based models (EBMs). The main idea is to estimate the sum over the datapoint dimensions in the RM objective with importance sampling (IS), achieving computational savings by using fewer samples than dimensions. A key part of the method is a proposal that uses gradient information w.r.t. discrete variables to efficiently approximate the optimal (minimum variance) proposal, resulting in much better performance compared to uniform sampling. The authors also introduce a biased version of the estimator that samples from the same proposal but drops the importance weights when averaging over the samples, which, somewhat surprisingly, outperforms the unbiased version. The idea of using Monte Carlo estimation based on importance sampling to speed up Ratio Matching is novel and sound. The use of gradient information to approximate the optimal IS proposal is also novel in this context, though the idea of using gradients this way to reduce the number of EBM energy function evaluations comes from Grathwohl et al. (2021), where it was used to speed up Gibbs sampling. While the method is well described, the paper is insufficiently rigorous in several places, most importantly in claiming that Eq. 4 corresponds to Ratio Matching, which is not true. Eq. 4 is instead equivalent to the objective for Generalized Score Matching (GSM) given by Eq. 17 in (Lyu, 2009). Crucially, while both GSM and RM recover the true model if the model class is nonparametric/unconstrained, as is stated in (Lyu, 2009) (and thus agree with each other as well as with maximum likelihood estimation), they do not yield the same solution for constrained model classes such as neural networks. This means that the method in the paper implements GSM and not RM. Unlike RM, which has been used widely in the literature, GSM is essentially empirically unproven and thus is a less interesting choice.
The main difference between the GSM and RM objectives is the presence of the squashing function g(u) = 1/(1+u) around the probability ratios in RM to avoid division by zero, when the probability in the denominator is vanishingly small (as is explained above Eq. 12 in (Hyvarinen, 2007)). This means GSM is likely to be prone to stability issues due to using probability ratios directly. This is one possible explanation for the puzzling empirical results in the paper, where the proposed sampling-based methods outperform the exact method they are supposed to approximate, with the biased method clearly performing best. The intuition-based arguments made in the paper to explain these results are not convincing and need to be improved upon. While, as the authors pointed out in their response, it is possible to apply the strategy in the paper to RM by applying IS to Eq. 3 instead of Eq. 4, that would be essentially a different paper. One example of puzzling experimental results is Figure 3, which shows that the base method ("Ratio Matching") does not find the correct solution while the proposed approximate methods do. This suggests that there is something wrong either with the method (e.g. with the objective, as mentioned above) or with the experimental setup. In either case, the cause needs to be thoroughly investigated. Currently the empirical evaluation is primarily MMD based, relying on sampling from the model using MCMC. Ensuring that MCMC chains mix sufficiently well to sample from the true distribution by visiting all of its modes is difficult, and it is important to provide some evidence that this was done. As suggested by a reviewer, the results would be substantially strengthened by reporting the log-likelihoods for the models, estimated e.g. using AIS, even if that requires including scaled-down versions of some of the experiments. 
The title of the paper is misleading and should be changed because the proposed method is specific to EBMs for binary data, even if the intent is to extend it to other types of discrete data in the future. The clarification and additional results provided by the authors to the reviewers and the AC were appreciated, but unfortunately the outstanding issues with the paper are too major to allow acceptance at this point. The main idea of the paper has substantial promise however, and the authors are encouraged to develop it to its full potential by addressing the points from this meta-review as well as the additional ones from the reviewers. Bibliography correction: Hyvarinen is the solo author of "Estimation of Non-Normalized Statistical Models by Score Matching". Peter Dayan was the editor of that paper and not a co-author. Please correct your bibliography.
train
[ "6ScPSHJn4L", "bh2o6va5XjT", "-QywSoJfmgD", "LG4IilGMCxK", "VV7Cok7SDge", "hYsEUtxn2wv", "Jb4q-DM5Ef9", "rkJT4twwbd", "lwAniy0lito", "wFAspm6cKh-", "Rida2dFhshb", "zPvVC0rIZ2S", "vfmk7elNX_S", "mgLUjIwQ3nF", "RPN-9inDjcq", "FB5FCdM7yE", "2YWF_O8wc5f", "7XOZ76o1bKT", "giQwVc3CsD-"...
[ "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", ...
[ "This paper proposes ratio matching with gradient-guided importance sampling (RMwGGIS) as an attempt to address the expensive computation and excessive memory requirement of ratio matching in learning discrete energy-based models, and demonstrates the advantages of RMwGGIS over ratio matching with experiments on de...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, 6 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2022_IEKL-OihqX0", "VV7Cok7SDge", "LG4IilGMCxK", "fvDw3AiDEyC", "hYsEUtxn2wv", "Jb4q-DM5Ef9", "rkJT4twwbd", "lwAniy0lito", "wFAspm6cKh-", "6ScPSHJn4L", "iclr_2022_IEKL-OihqX0", "IBVe6o6tic6", "pqfU87p_zIV", "Rida2dFhshb", "FB5FCdM7yE", "2YWF_O8wc5f", "7XOZ76o1bKT", "giQwVc3Cs...
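The core estimator discussed in this record (replace the O(D) sum over dimensions in the ratio-matching-style objective with an importance-sampled subset, using a proposal concentrated where flipping a bit changes the energy most) can be sketched on a toy linear energy. In this sketch the energy differences are computed exactly and mixed with a uniform component for variance control; in RMwGGIS they would instead be approximated from the gradient of the energy, which is the point of the method. All specifics below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 20
w = rng.normal(size=D)
energy = lambda x: -float(x @ w)                 # toy EBM energy on {0,1}^D

def flip(x, d):
    y = x.copy()
    y[d] = 1 - y[d]
    return y

x = (rng.random(D) > 0.5).astype(float)

# One per-dimension term of a ratio-matching-style objective:
# g(p(x)/p(flip_d x))^2 with g(u) = 1/(1+u) and p ∝ exp(-E).
term = lambda d: 1.0 / (1.0 + np.exp(energy(flip(x, d)) - energy(x))) ** 2

exact = sum(term(d) for d in range(D))           # the full O(D) sum

# Proposal over dimensions ∝ |energy difference|, smoothed with a uniform
# component so no dimension gets vanishing probability.
diffs = np.abs([energy(flip(x, d)) - energy(x) for d in range(D)])
q = 0.9 * diffs / diffs.sum() + 0.1 / D
idx = rng.choice(D, size=4000, p=q)
estimate = np.mean([term(d) / q[d] for d in idx])  # unbiased for `exact`
print(exact, estimate)
```

With far fewer than D samples per step in high dimensions, the estimator trades a little variance for a large cut in energy evaluations, which is the saving the paper is after.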
iclr_2022_4jUmjIoTz2
Collaborate to Defend Against Adversarial Attacks
Adversarially robust learning methods require predictions to be invariant within a small neighborhood of the natural inputs, and thus often encounter insufficient model capacity. Learning multiple models in an ensemble can mitigate this insufficiency, further improving both generalization and robustness. However, an ensemble still wastes the limited capacity of multiple models. To optimally utilize the limited capacity, this paper proposes to learn a collaboration among multiple sub-models. Compared with the ensemble, the collaboration enables the possibility of correct predictions even if there exists only a single correct sub-model. Besides, learning a collaboration could enable every sub-model to fit its own vulnerability area and reserve the rest of the sub-models to fit other vulnerability areas. To implement the idea, we propose a collaboration framework---CDA$^2$, the abbreviation for Collaborate to Defend against Adversarial Attacks. CDA$^2$ could effectively minimize the vulnerability overlap of all sub-models and then choose a representative sub-model to make correct predictions. Empirical experiments verify that CDA$^2$ outperforms various ensemble methods against black-box and white-box adversarial attacks.
Reject
The paper proposes a novel ensemble method, CDA^2, in which base models collaborate to defend against adversarial attacks. To do so the base models have two heads: the label head for predicting the label and the posterior probability density (PPD) head that is trained by minimizing binary cross entropy between it and the true-label logit given by the label head. During inference the base model with the highest PPD value is chosen to make the prediction. During training base models learn from the adversarial examples produced by other base models. The evaluation of the manuscript by different reviewers was very diverse, resulting in final scores ranging between 3 and 8 after the discussion period. While the rebuttal clearly addressed the concerns of one reviewer and several additional experimental results were added for different adversarial attacks, it did not fully address the concerns of another reviewer, who rated his confidence higher. He was also not convinced by the update in the revised version of the manuscript, in which crucial changes in the pseudocode describing the proposed algorithm were made, which contradicted some statements in the first version. Therefore, the paper can unfortunately not be accepted in its current version. In a future version of the manuscript, the description of the algorithm and of the role of the PPD head should be improved, and experiments on another dataset besides CIFAR-10 could be added.
train
[ "kmgre10rQ4h", "T_lr0waW3o", "ykE3OrXp8w1", "uI6ZoCVRcYO", "QkS4VGkoGYh", "VxtNgIAiu91", "AcLBJdQ8w-F", "_rWqyfWXGMq", "46HwHOtVWT", "BmrQpLFzHgM", "oktnUicQEF", "hdLFzhOB6gN", "a1MfGozjOk6", "y3zxosTKjxD" ]
[ "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " * to the comment **First,**\n\n firstly, **we have to apologize for our misleading statement \"our confusing writing about Algorithm 1 may lead to the misunderstanding that...\".**\n\n We would like to acknowledge our mistake that **our original submission missed the information about the adversarial traini...
[ -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, 6, 3 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 3, 5 ]
[ "ykE3OrXp8w1", "y3zxosTKjxD", "y3zxosTKjxD", "a1MfGozjOk6", "iclr_2022_4jUmjIoTz2", "y3zxosTKjxD", "iclr_2022_4jUmjIoTz2", "y3zxosTKjxD", "y3zxosTKjxD", "QkS4VGkoGYh", "QkS4VGkoGYh", "QkS4VGkoGYh", "iclr_2022_4jUmjIoTz2", "iclr_2022_4jUmjIoTz2" ]
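The inference rule described in this record (answer with the label head of the sub-model whose PPD head reports the highest value) is a few lines once the sub-models exist. The stub sub-models below are illustrative lambdas, not the paper's trained networks:

```python
import numpy as np

def collaborate_predict(x, submodels):
    """Each sub-model maps x to (label_logits, ppd_score); answer with the
    label head of the sub-model whose PPD head is most confident."""
    logits, _ = max((m(x) for m in submodels), key=lambda out: out[1])
    return int(np.argmax(logits))

# Stub sub-models: one is confidently wrong in this input region but has a
# low PPD score, so the collaboration defers to the other sub-model.
m1 = lambda x: (np.array([2.0, 0.1]), 0.3)   # predicts class 0, low PPD
m2 = lambda x: (np.array([0.2, 1.5]), 0.9)   # predicts class 1, high PPD
print(collaborate_predict(None, [m1, m2]))
```

This selection rule is what lets a collaboration succeed when only a single sub-model is correct, in contrast to averaging, where the wrong sub-models can outvote it.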
iclr_2022_RW_GTtTfHJ6
Causal Reinforcement Learning using Observational and Interventional Data
Efficiently learning a causal model of the environment is a key challenge for model-based RL agents operating in POMDPs. We consider here a scenario where the learning agent has the ability to collect online experiences through direct interactions with the environment (interventional data), but also has access to a large collection of offline experiences, obtained by observing another agent interacting with the environment (observational data). A key ingredient, which makes this situation non-trivial, is that we allow the observed agent to act based on privileged information, hidden from the learning agent. We then ask the following questions: can the online and offline experiences be safely combined for learning a causal transition model? And can we expect the offline experiences to improve the agent's performance? To answer these questions, we first bridge the fields of reinforcement learning and causality, importing ideas from the well-established causal framework of do-calculus and expressing model-based reinforcement learning as a causal inference problem. Second, we propose a general yet simple methodology for safely leveraging offline data during learning. In a nutshell, our method relies on learning a latent-based causal transition model that explains both the interventional and observational regimes, and then inferring the standard POMDP transition model via deconfounding using the recovered latent variable. We prove our method is correct and efficient in the sense that it attains better generalization guarantees due to the offline data (in the asymptotic case), and we assess its effectiveness empirically on a series of synthetic toy problems.
Reject
In this paper, the authors provide a model-based approach for combining experimental and observational data in reinforcement learning, specifically in POMDPs. The paper was not received very favorably by reviewers, with the main concerns revolving around: (a) writing quality, (b) validation, (c) extent of contribution given existing work on causal RL. In preparing your revision, in addition to clarifying writing, and adding better validation, I would urge the authors to consult existing causal inference literature on point and partial identification in settings related to RL, such as off-line policy learning. This will help address issues of novelty by extending their approach to settings with more types of confounding. In addition to useful references suggested by reviewers, another useful draft may be: "Path-Dependent Structural Equation Models." Srinivasan, R., Lee, J., Bhattacharya, R., and Shpitser, I.. In Proceedings of the Thirty Seventh Conference on Uncertainty in Artificial Intelligence.
train
[ "9gA-J3B9_Z4", "OuSCKdTHghZ", "S2fuOAxcn91", "GqSyx5OD_rT", "63fNTEcGTnl", "GUhWfo4bOcO", "3bX274Xebto", "fhkbEZzYEkX", "Qu_QcDSjChn", "QBJhC15hbPB", "kwGXVJo-eoA", "DJt-hwbp5K8", "szQIp9o0L3", "qCOjJtJDaL", "5Pv9MUI23b2", "iommZEhV4W", "3nV3nRRAZs", "7m8brEeRpmC", "hOwjBrRE4uC",...
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you again for engaging in discussions, we value this exchange a lot.\n\nJust for clarification, there is no Theorem 2 in the paper. We assume you mean Theorem 1 ?\n\nYou were right, Theorem 1 indeed appears to be an application of Manski's bounds, although additional steps are to be taken. Your two-step pro...
[ -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5 ]
[ -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "OuSCKdTHghZ", "3bX274Xebto", "5Pv9MUI23b2", "iclr_2022_RW_GTtTfHJ6", "S2fuOAxcn91", "iclr_2022_RW_GTtTfHJ6", "fhkbEZzYEkX", "3nV3nRRAZs", "iclr_2022_RW_GTtTfHJ6", "7m8brEeRpmC", "7m8brEeRpmC", "hOwjBrRE4uC", "hOwjBrRE4uC", "hOwjBrRE4uC", "GUhWfo4bOcO", "Y411ehR8GME", "Y411ehR8GME", ...
iclr_2022_CZZ7KWOP0-M
ShiftAddNAS: Hardware-Inspired Search for More Accurate and Efficient Neural Networks
Neural networks (NNs) with intensive multiplications (e.g., convolutions and transformers) are powerful yet power hungry, impeding their more extensive deployment into resource-constrained edge devices. As such, multiplication-free networks, which follow a common practice in energy-efficient hardware implementation to parameterize NNs with more efficient operators (e.g., bitwise shifts and additions), have gained growing attention. However, multiplication-free networks in general under-perform their vanilla counterparts in terms of the achieved accuracy. To this end, this work advocates hybrid NNs that consist of both powerful yet costly multiplications and efficient yet less powerful operators for marrying the best of both worlds, and proposes ShiftAddNAS, which can automatically search for more accurate and more efficient NNs. Our ShiftAddNAS highlights two enablers. Specifically, it integrates (1) the first hybrid search space that incorporates both multiplication-based and multiplication-free operators for facilitating the development of both accurate and efficient hybrid NNs; and (2) a novel weight sharing strategy that enables effective weight sharing among different operators that follow heterogeneous distributions (e.g., Gaussian for convolutions vs. Laplacian for add operators) and simultaneously leads to a largely reduced supernet size and much better searched networks. Extensive experiments and ablation studies on various models, datasets, and tasks consistently validate the effectiveness of ShiftAddNAS, e.g., achieving up to a +7.7% higher accuracy or a +4.9 higher BLEU score as compared to state-of-the-art expert-designed and neural architecture searched NNs, while leading to up to 93% or 69% energy and latency savings, respectively. All code will be released upon acceptance.
Reject
This paper develops a hybrid search space consisting of both multiplication-based and multiplication-free operators. It also presents a weight-sharing mechanism for searching in the introduced search space.

Pros:
* A hybrid search space is developed.
* Strong empirical results are reported for both CV and NLP tasks.
* The paper is well written and is easy to follow.

Cons:
* Incremental technical novelty.
* Missing baselines and competing methods.
* Missing information on the search cost.
* Lack of insights into the discovered architectures.

The rebuttal has provided most of the missing information and comparisons, and it has provided additional insights into the searched architectures. However, the reviewers still rate this paper at borderline, primarily due to the limited technical novelty. Unfortunately, given these concerns, this submission does not meet the bar for acceptance at ICLR.
train
[ "Q-7E7OiXMW3", "L6I3QFWOvY", "0TaZWuZNJRw", "PbFWOjv4yYY", "JlidA5AnsxL", "XUH75hZ-2qL", "rpbeFNYAfT", "JoqMMoZ63H", "SUMME7VzEKU", "AldZyKbbq5", "vRWyGgDXDYo", "eAVwhwCaLM", "1yk5MMNYwIB", "_pVOtoX8FOP", "Tq93heXd0DJ", "445EHrpoMZR", "MLjKaJ1Dy4D", "RXWjJ4IQog" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We are glad that our rebuttal has addressed most of your concerns and appreciate your updated score. Below are our detailed clarifications for your confusion points:\n\n---\n**C1: Why did the authors use the multi-resolution supernet design instead of the commonly used single-branch supernet design?**\n\nWe adopt...
[ -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 5 ]
[ "JlidA5AnsxL", "0TaZWuZNJRw", "AldZyKbbq5", "iclr_2022_CZZ7KWOP0-M", "rpbeFNYAfT", "JoqMMoZ63H", "JoqMMoZ63H", "eAVwhwCaLM", "PbFWOjv4yYY", "vRWyGgDXDYo", "1yk5MMNYwIB", "PbFWOjv4yYY", "RXWjJ4IQog", "MLjKaJ1Dy4D", "445EHrpoMZR", "iclr_2022_CZZ7KWOP0-M", "iclr_2022_CZZ7KWOP0-M", "ic...
iclr_2022_anWCFENEc5H
Modeling Adversarial Noise for Adversarial Defense
Deep neural networks have been demonstrated to be vulnerable to adversarial noise, prompting the development of defenses against adversarial attacks. Motivated by the fact that adversarial noise contains well-generalizing features and that the relationship between adversarial data and natural data can help infer natural data and make reliable predictions, in this paper, we study how to model adversarial noise by learning the transition relationship between adversarial labels (i.e. the flipped labels used to generate adversarial data) and natural labels (i.e. the ground truth labels of the natural data). Specifically, we introduce an instance-dependent transition matrix to relate adversarial labels and natural labels, which can be seamlessly embedded with the target model (enabling us to model stronger adaptive adversarial noise). Empirical evaluations demonstrate that our method can effectively improve adversarial accuracy.
Reject
This paper aims to model adversarial noise by learning the transition relationship between adversarial labels and natural labels. In particular, an instance-dependent transition matrix is introduced to relate adversarial labels and natural labels. Reviewers agreed that the paper is well motivated and well written, and the proposed method is novel. Meanwhile, reviewers raised some concerns about experiments and paper presentation. During discussion, the authors provided a lot of additional results that partially addressed the reviewers' concerns. However, the reviewers still think the experimental part of this paper should be further strengthened before acceptance. Thus, I recommend rejecting this paper. I encourage the authors to take the review feedback into account and submit a future version to another venue.
train
[ "nMTHIwVKpWr", "sjfyorWijew", "GPZold6anmk", "ITmZZ9YKta", "tnkGnXSdZvq", "VsaqsZ7joUq", "kdU3oZ3uNcY", "NBKu2k6GpSZ", "QcLNatSuNPk", "_vIkib5wZ7a", "pAI1kT_OKWZ", "7Vw3zbZozZC", "hgEr2giIKKh", "LZxwpGiYx1K", "rG1O2uYDSmU", "HVEr7vmK1N" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank the authors for the responses! \n\nFrom my perspective, some core arguments made by the authors, e.g., \"the proposed transition matrix actually models the adversarial noise\", are not very convincing: they are more of something that the authors hope to be true but not something that can be proven or demons...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4 ]
[ "QcLNatSuNPk", "ITmZZ9YKta", "tnkGnXSdZvq", "_vIkib5wZ7a", "NBKu2k6GpSZ", "pAI1kT_OKWZ", "NBKu2k6GpSZ", "rG1O2uYDSmU", "7Vw3zbZozZC", "HVEr7vmK1N", "HVEr7vmK1N", "LZxwpGiYx1K", "LZxwpGiYx1K", "iclr_2022_anWCFENEc5H", "iclr_2022_anWCFENEc5H", "iclr_2022_anWCFENEc5H" ]
iclr_2022_uEBrNNEfceE
Safe Linear-Quadratic Dual Control with Almost Sure Performance Guarantee
This paper considers the linear-quadratic dual control problem where the system parameters need to be identified while the control objective is optimized at the same time. Contrary to existing works on data-driven linear-quadratic regulation, which typically provide error or regret bounds that hold with a certain probability, we propose an online algorithm that guarantees the asymptotic optimality of the controller in the almost sure sense. Our dual control strategy consists of two parts: a switched controller with time-decaying exploration noise and Markov parameter inference based on the cross-correlation between the exploration noise and system output. Central to the almost sure performance guarantee is a safe switched control strategy that falls back to a known conservative but stable controller when the actual state deviates significantly from the target state. We prove that this switching strategy prevents any potentially destabilizing controllers from being applied, while the performance gap between our switching strategy and the optimal linear state feedback is exponentially small. Under our dual control scheme, the parameter inference error scales as $O(T^{-1/4+\epsilon})$, while the suboptimality gap of control performance scales as $O(T^{-1/2+\epsilon})$, where $T$ is the number of time steps, and $\epsilon$ is an arbitrarily small positive number. Simulation results on an industrial process example are provided to illustrate the effectiveness of our proposed strategy.
Reject
The main contribution of the paper is in providing an additional layer to standard certainty-equivalent control for LQR dynamics, that essentially prevents the state from exploding exponentially, via forcing a "descent" to a small bounded state space in growing epoch sizes. Theoretically, this is shown to ensure a notion of boundedness which is termed "bounded cost safety". Overall, the reviewers raised several concerns in their initial reviews. These included the relevance of the proposed new approach to modeling "safety" here, whether it is actually novel given the assumptions about open loop stability of the original plant, the fact that the learning algorithm could take the state to arbitrarily large states during the learning process, the significance of the contribution of the paper wherein a "kill switch" is effectively being designed depending on the state norm, whether least squares is an arguably simpler system estimation method compared to the impulse response estimator here, whether existing approaches based on an input failure probability parameter can be used to yield the same result by iteratively taking it down to zero, and other concerns about the technical exposition. The author(s) provided detailed responses to the reviewers' concerns. Specifically, the safety/stability notion was clarified as being distinct from standard input-to-state stability, that the learning algorithm could, during its operation, drive the system state to arbitrarily large sizes (although asymptotically almost surely this is supposed to not happen), and how this paper's assumptions are different (and lighter) than other related work that achieves almost-sure guarantees. In the discussion that ensued after the author response period, it was clear that the author responses had helped to address many of the reviewers' concerns. However, the overall impression has still not been convincing enough to recommend acceptance. 
This is primarily on three fronts, which I hope that the author(s) can address in the future to strengthen the submission: (1) The attempt at expressing "safety of learning" is found to be not satisfactory, in view of the observation that the proposed scheme does not guarantee classical notions of stability while in the process of learning. It also makes the main message of the paper confusing -- the reviewers noted that the words "safety" and "safe learning" still recur in the revised manuscript, creating scope for misinterpretation. (2) The algorithmic contribution appears to be incremental -- its essence is to "apply the brakes when the car runs too fast". This is not to take away from the hard work put into analyzing the algorithm and deriving guarantees. (3) The experiments benchmarking the proposed strategy are not comprehensive -- more relevant baselines drawn from existing work, such as the ones that have emerged from the author response discussion, could be compared against. In fact, it would be compellingly in favor of this submission for the author(s) to show that other approaches fail to offer the same kind of "safe" performance that is expected. Upon a more careful reading by myself in the recent past, I would also like to bring up a fundamental technical criticism about Theorem 1 and Defn. 2 ("bounded-cost safety"), which I believe must be fixed before the paper's conclusions can be accepted. Defn. 2 states that a learning process is bounded cost safe if for all times $k$, $\pi_k$ is not destabilizing in the sense that its value function is finite. However, the sequence of policies $\pi_k$, $k=1, 2, \ldots$ is a sequence of *random* objects. So in what sense is Defn. 2 to be interpreted? If the author(s) mean(s) to say that the event {$\forall k=1, 2, ...: \pi_k$ is not destabilizing} occurs w.p. 1, then this is a very strong requirement and cannot be guaranteed (random noise can cause a 'bad' controller to be learnt and applied at some time t with positive probability). Basically my contention is that Defn. 2 is incomplete for a theory-oriented paper like this, and as a consequence I do not see why Theorem 1 should hold (or more precisely, in what sense it should hold). The proof of Theorem 1 is not clear either: the expectation in the third sentence of Sec A.2 ought to be taken for a *fixed* controller $\hat{K}_k$ *always applied* to a system from time zero until infinity. I do not see why a fixed controller's (standard infinite horizon average) cost should always be finite, leading me to suspect an irregularity in the proof argument. I wish this point could be discussed and resolved earlier in the author response phase, but it is unfortunately too late.
train
[ "00mBaAx5wML", "ssh5UEEzSY9", "zj5q55LIvP_", "7RTfFjGPqPg", "op9XZYzwQR", "IhYsaVcu0zv", "dw-rszS5o-Y", "YWUhmc0caDZ", "E_zv5wXoa4a", "Xsm0BouSbAy", "lvrF3USO_fj", "1MUYaRLmPAi", "kDw8K_eohj9", "oHiupMrgS9P" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thanks very much for the reviewer's appreciation of our revisions and valuable further comments.\n\n- About the term: we recognize that safe learning usually deals with constraint satisfaction, which is a different problem from ours. The motivation of defining \"bounded-cost safety\" of a learning process is that...
[ -1, 5, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 5, 8 ]
[ -1, 3, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "zj5q55LIvP_", "iclr_2022_uEBrNNEfceE", "YWUhmc0caDZ", "iclr_2022_uEBrNNEfceE", "IhYsaVcu0zv", "lvrF3USO_fj", "7RTfFjGPqPg", "ssh5UEEzSY9", "7RTfFjGPqPg", "iclr_2022_uEBrNNEfceE", "oHiupMrgS9P", "kDw8K_eohj9", "iclr_2022_uEBrNNEfceE", "iclr_2022_uEBrNNEfceE" ]
iclr_2022_97r5Y5DrJTo
The Effect of diversity in Meta-Learning
Few-shot learning aims to learn representations that can tackle novel tasks given a small number of examples. Recent studies show that task distribution plays a vital role in the performance of the model. Conventional wisdom is that task diversity should improve the performance of meta-learning. In this work, we find evidence to the contrary; we study different task distributions on a myriad of models and datasets to evaluate the effect of task diversity on meta-learning algorithms. For this experiment, we train on multiple datasets, and with three broad classes of meta-learning models - Metric-based (e.g., Protonet, Matching Networks), Optimization-based (e.g., MAML, Reptile, and MetaOptNet), and Bayesian meta-learning models (e.g., CNAPs). Our experiments demonstrate that the effect of task diversity on all these algorithms follows a similar trend, and task diversity does not seem to offer any benefits to the learning of the model. Furthermore, we also demonstrate that even a handful of tasks, repeated over multiple batches, would be sufficient to achieve performance similar to uniform sampling, calling into question the need for additional tasks to create better models.
Reject
This paper set out to show that increasing task diversity during the meta-training process does not boost performance. The reviewers mostly agreed (only reviewer wVFn dissented) that the empirical setup of the paper was convincing, but they also felt it over-emphasized empirics over a deeper understanding of the phenomena observed. In turn, this resulted in discussions around how the experiments and the explanations didn't fully prove that increasing task diversity does not help. Overall, the discussion and the additional analysis tools provided by the authors (such as the diversity metric) will greatly improve the paper.
train
[ "AdjBdUzChe", "nVTSMR3GUWb", "GfppxK_pYwy", "kBZgoliNEdn", "V4o1WgoVm05", "vNQ1K3IWxTd", "7HRr30fx0nR", "DsgbUYe26lv", "knu4utjqBr3" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Before giving a more formal definition of task diversity, we set a few more fundamental ideas required to understand our metric better.\n\n**Task Diversity** We define task diversity as the diversity among classes within a task. This diversity is defined as the volume of parallelepiped spanned by the embeddings o...
[ -1, -1, -1, -1, -1, -1, 3, 3, 5 ]
[ -1, -1, -1, -1, -1, -1, 4, 5, 3 ]
[ "nVTSMR3GUWb", "GfppxK_pYwy", "kBZgoliNEdn", "knu4utjqBr3", "DsgbUYe26lv", "7HRr30fx0nR", "iclr_2022_97r5Y5DrJTo", "iclr_2022_97r5Y5DrJTo", "iclr_2022_97r5Y5DrJTo" ]
iclr_2022_OQL_tkK1vqO
ZARTS: On Zero-order Optimization for Neural Architecture Search
Differentiable architecture search (DARTS) has been a popular one-shot paradigm for NAS due to its high efficiency. It introduces trainable architecture parameters to represent the importance of candidate operations and proposes first/second-order approximation to estimate their gradients, making it possible to solve NAS by a gradient descent algorithm. However, our in-depth empirical results show that the approximation will often distort the loss landscape, leading to a biased objective and, in turn, inaccurate gradient estimation for architecture parameters. This work turns to zero-order optimization and proposes a novel NAS scheme, called ZARTS, to search without enforcing the above approximation. Specifically, three representative zero-order optimization methods are introduced: RS, MGS, and GLD, among which MGS performs best by balancing accuracy and speed. Moreover, we explore the connections between RS/MGS and the gradient descent algorithm and show that our ZARTS can be seen as a robust gradient-free counterpart to DARTS. Extensive experiments on multiple datasets and search spaces show the remarkable performance of our method. In particular, results on 12 benchmarks verify the outstanding robustness of ZARTS, where the performance of DARTS collapses due to its known instability issue. Also, we search on the search space of DARTS to compare with peer methods, and our discovered architecture achieves 97.54% accuracy on CIFAR-10 and 75.7% top-1 accuracy on ImageNet, which represent state-of-the-art performance.
Reject
The paper proposes a series of zeroth-order optimization approaches to stabilize DARTS training. Although the reviewers think that the zeroth-order approach is novel to the NAS community, they also point out several weaknesses. In particular, the method introduces extra computation time, and the results do not really stand out compared with other state-of-the-art methods. Therefore, despite the interesting ideas presented in the paper, we decide to reject the paper and encourage the authors to address those weaknesses in their future revision.
train
[ "GYf9LFKRzlX", "1rdd9QwcwAp", "y0Zf-BgFCVw", "Cy6Asb_8D5L", "9HE2i2ru--V", "mX9BcSizfuK", "bdcO8at25Cb", "xc7HQgIFuQJ", "qRwTiG-fnyk", "Xt-5yXniZNb" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the responses to my concerns. Although the search costs are relatively high comparing to other works that focus on the speed-up of the search, I think that this work explores a new way, i.e. zero-order optimization, to stabilize the search process of DARTS. So, I keep my rating as 6.", " > **Q5:** Is...
[ -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "9HE2i2ru--V", "y0Zf-BgFCVw", "xc7HQgIFuQJ", "bdcO8at25Cb", "Xt-5yXniZNb", "qRwTiG-fnyk", "iclr_2022_OQL_tkK1vqO", "iclr_2022_OQL_tkK1vqO", "iclr_2022_OQL_tkK1vqO", "iclr_2022_OQL_tkK1vqO" ]
iclr_2022_D8pn0BlHaGe
Single-Cell Capsule Attention : an interpretable method of cell type classification for single-cell RNA-sequencing data
The single-cell RNA-sequencing technique can measure the expression level of every gene in each cell. Cell type classification (also known as cell type annotation) on single-cell RNA-seq data helps to explore cellular heterogeneity and diversity. Previous methods for cell type classification are based either on statistical hypotheses about gene expression or on deep neural networks. However, the hypotheses may not reflect the true expression level, and deep neural networks lack interpretability. Here we present an interpretable neural-network-based method, single-cell capsule attention (scCA), which assigns cells to different cell types based on their different feature patterns. In our model, we first generate capsules that extract different features of the cells. Then we obtain compound features that combine the information of a set of features through an LSTM model. In the end, we train attention weights and apply them to the compound features. scCA provides a strong interpretation of the cell type classification result: cells from the same cell type share a similar pattern of capsule relationships and a similar distribution of attention weights over compound features. Compared with previous methods for cell type classification on nine datasets, scCA shows high accuracy on all datasets with robustness and reliable interpretation.
Reject
All reviewers believe that the paper is not ready for publication and that clarity issues remain. All reviewers read the rebuttal responses, but they found that the paper wasn't revised during the rebuttal, and thus they retained their decisions.
train
[ "hQ3Db7vSLd5", "4EaSFFvmxJ0", "H1ZBengcpg4", "EP1CKzScMw", "-9i0xf1LWfV", "ZfV-s5csO2t", "lV9-f-z_rQU", "7ihRKHUM5gy", "gB2RKIpoOE", "DhbRigmnzwQ" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your comments and advice. As you point out, it will be more reasonable if we relate scCA’s capsules or compound features to some biologically meaningful quantities, such as specific marker genes. We will make improvements in future work. Also, we should specify more about scCA’s cell-type specific patt...
[ -1, -1, -1, -1, -1, -1, 3, 3, 1, 1 ]
[ -1, -1, -1, -1, -1, -1, 5, 4, 4, 3 ]
[ "4EaSFFvmxJ0", "-9i0xf1LWfV", "DhbRigmnzwQ", "gB2RKIpoOE", "7ihRKHUM5gy", "lV9-f-z_rQU", "iclr_2022_D8pn0BlHaGe", "iclr_2022_D8pn0BlHaGe", "iclr_2022_D8pn0BlHaGe", "iclr_2022_D8pn0BlHaGe" ]
iclr_2022_Ps_m_Uwcu-E
Layer-wise Adaptive Model Aggregation for Scalable Federated Learning
In Federated Learning, a common approach for aggregating local models across clients is periodic averaging of the full model parameters. It is, however, known that different layers of neural networks can have a different degree of model discrepancy across the clients. The conventional full aggregation scheme does not consider such a difference and synchronizes the whole model parameters at once, resulting in inefficient network bandwidth consumption. Aggregating the parameters that are similar across the clients does not make meaningful training progress while increasing the communication cost. We propose FedLAMA, a layer-wise model aggregation scheme for scalable Federated Learning. FedLAMA adaptively adjusts the aggregation interval in a layer-wise manner, jointly considering the model discrepancy and the communication cost. The layer-wise aggregation method enables fine-grained control of the aggregation interval, relaxing the aggregation frequency without a significant impact on model accuracy. Our empirical study shows that FedLAMA reduces the communication cost by up to $60\%$ for IID data and $70\%$ for non-IID data while achieving a comparable accuracy to FedAvg.
Reject
This paper proposes a layer-wise adaptive aggregation method for federated learning that seeks to reduce the communication cost. The frequency of aggregation is adjusted separately for each layer of the model that is being trained. The number of iterations $\tau$ after which each layer's parameters are averaged across clients is multiplied by a factor $\phi$ depending on the magnitude of changes to the parameters at each layer. The paper gives a convergence analysis of the proposed method and provides experimental results to demonstrate its effectiveness in reducing communication without compromising accuracy. Reviewer 7ks6 found some errors in the convergence analysis that were fixed by the authors in the discussion period. Reviewer YGZf increased their score to 5 after the discussion with the authors. The reviewers also gave the following suggestions to improve the paper: 1) Showing convergence curves to demonstrate that the communication reduction does not come at the cost of a slowdown in convergence. 2) The results presented in the paper show only a small communication reduction for many of the layers. Perhaps the strategy can be improved in order to boost the communication reduction. 3) Use a more realistic model to calculate the communication cost so as to account for network delays and other costs rather than just considering the number of parameters communicated. The scores are split on this paper. While one of the reviewers recommends acceptance, three others say that the paper is below the acceptance threshold. So I recommend a rejection while noting that the paper is close to the borderline.
train
[ "2uIz0sAQ3Dh", "skSYtgfNv6", "bCL3W12YpXn", "Rv64rtGY6GN", "TP-OQ9K4Pi", "yE7x350luvB", "GJSSbiBG3Z", "wEUNzYIYn6", "VvxJuVMZBDb", "A1m3SXr27Gu", "17smUPuVjbw" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes an adaptive interval schema for layer-wise model aggregation in federated settings. The aim of the proposed method is to reduce communication costs by adaptively decreasing the frequency of layer-wise aggregation with consideration of model discrepancy. Strengths:\n\n1. It is an interesting id...
[ 5, -1, -1, -1, -1, -1, -1, -1, 5, 5, 8 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, 4, 5, 3 ]
[ "iclr_2022_Ps_m_Uwcu-E", "TP-OQ9K4Pi", "A1m3SXr27Gu", "17smUPuVjbw", "2uIz0sAQ3Dh", "GJSSbiBG3Z", "wEUNzYIYn6", "VvxJuVMZBDb", "iclr_2022_Ps_m_Uwcu-E", "iclr_2022_Ps_m_Uwcu-E", "iclr_2022_Ps_m_Uwcu-E" ]
iclr_2022_tiKNfYpH8le
Pareto Navigation Gradient Descent: a First Order Algorithm for Optimization in Pareto Set
Many modern machine learning applications, such as multi-task learning, require finding optimal model parameters to trade off multiple objective functions that may conflict with each other. The notion of the Pareto set allows us to focus on the set of (often infinitely many) models that cannot be strictly improved. But it does not provide an actionable procedure for picking one or a few special models to return to practical users. In this paper, we consider \emph{optimization in Pareto set (OPT-in-Pareto)}, the problem of finding Pareto models that optimize an extra reference criterion function within the Pareto set. This function can either encode a specific preference from the users, or represent a generic diversity measure for obtaining a set of diversified Pareto models that are representative of the whole Pareto set. Unfortunately, despite being a highly useful framework, efficient algorithms for OPT-in-Pareto have been largely missing, especially for large-scale, non-convex, and non-linear objectives in deep learning. A naive approach is to apply Riemannian manifold gradient descent on the Pareto set, which yields a high computational cost due to the need for eigen-calculation of Hessian matrices. We propose a first-order algorithm that approximately solves OPT-in-Pareto using only gradient information, with both high practical efficiency and theoretically guaranteed convergence property. Empirically, we demonstrate that our method works efficiently for a variety of challenging multi-task-related problems.
Reject
The authors propose the OPT-in-Pareto algorithm that considers multi-objective optimization, and includes an extra "non-informative" reference metric for choosing between different Pareto-optimal solutions. The reviewers generally agreed that the work was compelling. However, one reviewer (6MZF) brought up the fact that the proposal is extremely similar to one proposed by a different arXiv paper, and convincingly argued that the authors of this paper were aware of the other before submission. This is a difficult situation. On the one hand, for the purposes of establishing priority, an arXiv paper "doesn't count". On the other hand, I believe that authors are obligated to appropriately credit all relevant work of which they are aware, in *any* form: this includes journals, conference proceedings, preprints, emails, personal conversations, stackoverflow posts, tweets, etc. In this case, it seems that the authors did not adhere to this second condition, and while they have updated their manuscript, two reviewers said that they were unsatisfied by the changes on this point. I want to emphasize that this isn't a question of priority: the first to publish "wins", and nobody has published this work, yet. However, other researchers working on the same problem, and proposing similar solutions, *must* be appropriately credited, even by the eventual winners (if they are aware of them).
train
[ "AMaxwAYtVhK", "5DVRtNQxkyG", "-Gc_Js7INkq", "MJbm_M3ONu", "FMee4MzjKdy", "t-uc5rSxeta", "V1ey8gic0G", "-lPhg6oaXM", "4Ove3maKjuA", "xmZ30U1ZXzK" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the updated comments. We still believe that our algorithm differs significantly from Kamani et al's. Modifying their algorithm to a general F might not be impossible but is not obvious (and is also not studied).\n\nFurther, it's also unknown what's the optimality of modifying Kamani et al's to solve th...
[ -1, 5, -1, -1, -1, -1, -1, 5, 5, 3 ]
[ -1, 2, -1, -1, -1, -1, -1, 3, 3, 5 ]
[ "FMee4MzjKdy", "iclr_2022_tiKNfYpH8le", "iclr_2022_tiKNfYpH8le", "xmZ30U1ZXzK", "5DVRtNQxkyG", "-lPhg6oaXM", "4Ove3maKjuA", "iclr_2022_tiKNfYpH8le", "iclr_2022_tiKNfYpH8le", "iclr_2022_tiKNfYpH8le" ]
iclr_2022_Hg7xLoENqHW
Robust Imitation via Mirror Descent Inverse Reinforcement Learning
Adversarial imitation learning techniques are based on modeling statistical divergences using agent and expert demonstration data. However, unbiased minimization of these divergences is not usually guaranteed due to the geometry of the underlying space. Furthermore, when the size of demonstrations is not sufficient, estimated reward functions from the discriminative signals become uncertain and fail to give informative feedback. Instead of formulating a global cost at once, we consider reward functions as an iterative sequence in a proximal method. In this paper, we show that rewards derived by mirror descent ensure minimization of a Bregman divergence in terms of a rigorous regret bound of $\mathcal{O}(1/T)$ for a particular condition of step sizes $\{\eta_t\}_{t=1}^T$. The resulting mirror descent adversarial inverse reinforcement learning (MD-AIRL) algorithm gradually advances a parameterized reward function in an associated reward space, and the sequence of such functions provides optimization targets for the policy space. We empirically validate our method in discrete and continuous benchmarks and show that MD-AIRL outperforms previous methods in various settings.
Reject
This work introduces/applies the mirror descent optimization technique to adversarial inverse reinforcement learning (AIRL). As a result, the proposed algorithm (MD-AIRL) incrementally learns a parameterized reward function in an associated reward space. The two issues of standard adversarial imitation learning algorithms are 1) current "divergence"-based updates may not lead to updates that better match the expert (due to geometry) and 2) "divergence"-based updates may suffer when only a small number of demonstrations are provided. Thus the goal of this work is (presumably) to "robustify" the learning of the reward function, especially by addressing these issues. The proposed algorithm is evaluated on a bandit problem, a multi-goal toy example and the standard mujoco benchmark. **Strengths** This work attempts to address the important problem of understanding and improving the updates of IRL algorithms. A theoretical analysis is provided. **Weaknesses** The major concern is clarity of the manuscript. Even after updating, clarity remains a concern. While a lot of experiments were performed, the evaluation is not entirely convincing. One reason for this is that it is hard to tie the results back to the original motivation/claims of this algorithm. As one reviewer notes, "it's unclear how the new algorithm affects reward functions". Furthermore, reviewers find the experimental results not entirely convincing. **Summary** After rebuttal and revision, the clarity and experimental analysis remain a concern. My recommendation is that the authors are encouraged to take the reviewers' feedback and improve the manuscript. In its current form it's not quite ready yet for publication.
train
[ "RdJ7zsnoDR-", "rK5b2yE-QTC", "raQxkEh9nby", "mvRA9GXVC5m", "L-0dBXePwei", "K5cf2yJ4FGU", "ZRWmdUUMHj", "8VsiL9efcQK" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper presents a novel mirror-descent adversarial inverse reinforcement learning (MD-AIRL) algorithm. MD-AIRL considers the reward function as an iterative sequence in a proximal method. MD-AIRL has been introduced with dense theoretical analysis and validated with diverse experiments covering both discrete a...
[ 5, -1, -1, -1, -1, -1, 5, 5 ]
[ 3, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2022_Hg7xLoENqHW", "8VsiL9efcQK", "iclr_2022_Hg7xLoENqHW", "RdJ7zsnoDR-", "ZRWmdUUMHj", "L-0dBXePwei", "iclr_2022_Hg7xLoENqHW", "iclr_2022_Hg7xLoENqHW" ]
iclr_2022_U-_89RnR8F
Meaningfully Explaining Model Mistakes Using Conceptual Counterfactuals
Understanding and explaining the mistakes made by trained models is critical to many machine learning objectives, such as improving robustness, addressing concept drift, and mitigating biases. However, this is often an ad hoc process that involves manually looking at the model's mistakes on many test samples and guessing at the underlying reasons for those incorrect predictions. In this paper, we propose a systematic approach, \textit{conceptual counterfactual explanations} (CCE), that explains why a classifier makes a mistake on a particular test sample(s) in terms of human-understandable concepts (e.g. this zebra is misclassified as a dog because of faint \emph{stripes}). We base CCE on two prior ideas: counterfactual explanations and concept activation vectors, and validate our approach on well-known pretrained models, showing that it explains the models' mistakes meaningfully. In addition, for new models trained on data with spurious correlations, CCE accurately identifies the spurious correlation as the cause of model mistakes from a single misclassified test sample. On two challenging medical applications, CCE generated useful insights, confirmed by clinicians, into biases and mistakes the model makes in real-world settings. The code for CCE is publicly available and can easily be applied to explain mistakes in new models.
Reject
This paper proposes a novel method called CCE for explaining mistakes by DNNs on image classification. It is built on top of two prior ideas: counterfactual explanations and concept activation vectors. CCE explains a mistake by assigning scores to a short list of concepts, where a large positive score means that adding that concept to the image will increase the probability of correctly classifying the image, as will removing or reducing a concept with a large negative score. The strengths of the paper include a novel combination of previous work, clear presentation, interesting experiments, and convincing results on controlled settings. The weaknesses include the lack of results on less controlled settings, the lack of more meaningful spurious correlations in the medical examples, and the lack of user studies. Although the reviewers have shown interest in this paper, they clearly do not support the paper strongly. In addition, the authors have missed the following paper that also combines counterfactual explanations and concept activation vectors: Akula, Arjun, Shuai Wang, and Song-Chun Zhu. “Cocox: Generating conceptual and counterfactual explanations via fault-lines.” Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 34. No. 03. 2020.
train
[ "H7oKhD6TI8", "Sw4IeZKOZNl", "rngUzlvz-K", "s8YeecNMsjZ", "jVDulfrvyv9", "nrcYd8gx_e", "7zFA35sZ_bS", "VzMLWpkkQIc", "co4Ym-KetE0", "Zt0p0mINEOW", "gOWn6rVO6Y2", "L8RsXflFSO", "D6qhdJy_zq5x", "HLlR3GwRZ_D", "4lp_E1uvqHW", "mxaqIzKgU5", "RLLBAIpMU-", "jeNapnaqhgY", "dKdOiXOcGdS", ...
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "...
[ "Goal: provide a human readable explanation to why a given classifier made a mistake on a given example.\n\nApproach: \n\nStep1: For the given input domain (here images), first obtain a set of concepts that are human interpretable. These concepts each describe a particular aspect of an image, and a human is expecte...
[ 6, 6, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2022_U-_89RnR8F", "iclr_2022_U-_89RnR8F", "jVDulfrvyv9", "nrcYd8gx_e", "iclr_2022_U-_89RnR8F", "dKdOiXOcGdS", "Sw4IeZKOZNl", "H7oKhD6TI8", "Zt0p0mINEOW", "gOWn6rVO6Y2", "L8RsXflFSO", "HLlR3GwRZ_D", "Sw4IeZKOZNl", "4lp_E1uvqHW", "WWSQ9NPO1Y34", "iclr_2022_U-_89RnR8F", "FG9ClsxDM...
iclr_2022_G-7GlfTneYg
VoiceFixer: Toward General Speech Restoration with Neural Vocoder
Speech restoration aims to remove distortions in speech signals. Prior methods mainly focus on single-task speech restoration (SSR), such as speech denoising or speech declipping. However, SSR systems only focus on one task and do not address the general speech restoration problem. In addition, previous SSR systems show limited performance in some speech restoration tasks such as speech super-resolution. To overcome those limitations, we propose a general speech restoration (GSR) task that attempts to remove multiple distortions simultaneously. Furthermore, we propose VoiceFixer, a generative framework to address the GSR task. VoiceFixer consists of an analysis stage and a synthesis stage to mimic the speech analysis and comprehension of the human auditory system. We employ a ResUNet to model the analysis stage and a neural vocoder to model the synthesis stage. We evaluate VoiceFixer with additive noise, room reverberation, low-resolution, and clipping distortions. Our baseline GSR model achieves a 0.499 higher mean opinion score (MOS) than the speech denoising SSR model. VoiceFixer further surpasses the GSR baseline model on the MOS score by 0.256. Moreover, we observe that VoiceFixer generalizes well to severely degraded real speech recordings, indicating its potential in restoring old movies and historical speeches.
Reject
PAPER: This paper addresses the problem of learning methods for general speech restoration which generalizes across at least 4 tasks (additive noise, room reverberation, low-resolution and clipping distortion). The proposed approach is based on a two-stage process, which includes both analysis and synthesis stages. DISCUSSION: The reviewers wrote very detailed reviews which ask some important questions and point to some potential issues. The authors responded to all reviews, but only addressed a subset of the issues and questions mentioned by the reviewers. Novelty and comparison with previous approaches was one of the issues mentioned by reviewers. SUMMARY: While reviewers are supportive of this line of research, reviewers were also concerned with the novelty of the proposed approach and details of the experiments. In its current form, the paper may not be ready for publication.
test
[ "c7lgS2OEAyV", "qZoPXVuOMDL", "8uLXm2qyLwp", "yKWua6k32vL", "Am_o2rQUrz", "DtxpN61vYsN", "nCEvgWTCX6k", "pt1A8nc4bFQ", "ebHXdJfea0d", "d4v-r4tpQiW", "FBwKXUzE9q" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for responding to the reviews. My key concerns regarding this paper remains and I also agree with the comments from other reviewers (several of which have not been addressed in the rebuttal). I am keeping the score as is. ", " Thanks for your answer. After considering that and the other reviews/answers I...
[ -1, -1, -1, -1, -1, -1, -1, 3, 6, 5, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, 5, 4, 4, 5 ]
[ "nCEvgWTCX6k", "yKWua6k32vL", "Am_o2rQUrz", "FBwKXUzE9q", "d4v-r4tpQiW", "ebHXdJfea0d", "pt1A8nc4bFQ", "iclr_2022_G-7GlfTneYg", "iclr_2022_G-7GlfTneYg", "iclr_2022_G-7GlfTneYg", "iclr_2022_G-7GlfTneYg" ]
iclr_2022_i4qKmHdq6y8
Learning to Abstain in the Presence of Uninformative Data
Learning and decision making in domains with naturally high noise-to-signal ratios – such as Finance or Public Health – can be challenging and yet extremely important. In this paper, we study a problem of learning on datasets in which a significant proportion of samples does not contain useful information. To analyze this setting, we introduce a noisy generative process with a clear distinction between uninformative/not learnable/purely random data and a structured/informative component. This dichotomy is present both during the training and in the inference phase. We propose a novel approach to learn under these conditions via a loss inspired by the selective learning theory. By minimizing the loss, our method is guaranteed to make a near-optimal decision by simultaneously distinguishing structured data from the non-learnable and making predictions, even in a highly imbalanced setting. We build upon the strength of our theoretical guarantees by describing an iterative algorithm, which jointly optimizes both a predictor and a selector, and evaluate its empirical performance under a variety of conditions.
Reject
This paper studies a learning scenario in which there exist 2 classes of examples: "predictable" and "noise". Learning theory is provided for this setting and a novel algorithm is devised that identifies predictable examples and makes predictions at the same time. A more practical algorithm is devised as well. Results are supported by experiments. Reviewers have raised a number of concerns (ranging from how realistic this setting is to missing references). Overall they found this work interesting and relevant to the ML community and appreciated the effort that the authors put into their thoughtful response. However, after a thorough deliberation, the conference program committee decided that the paper is not sufficiently strong in its current form to be accepted.
train
[ "7LxIsdJlWy2", "_tvtUapaqRB", "82BRwjRs00u", "ONagtyjuOD7", "9JoWkEfUzV8", "bbDhflSPGHJ", "ACVjOqesgwp", "gLhyD5nZtaX", "I1wP5_W0pxw", "uie49JS1dde", "kGk_9iTEShg", "7dxmTBs2zYe", "Nfka8z9uvs", "XntVpA6HuQ5", "_jl4nPeZx7B", "3e4A5I4K_Q", "p1Abzya9wzQ", "dswtt84LbeN" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the clarification. We agree that the problem itself becomes more challenging when one needs to recover the selector and classifier jointly as it requires more samples (which depends on the complexity of the selector). We look forward to exploring settings with more complex selector decision boundaries ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 2, 2 ]
[ "_tvtUapaqRB", "82BRwjRs00u", "ONagtyjuOD7", "ACVjOqesgwp", "_jl4nPeZx7B", "dswtt84LbeN", "p1Abzya9wzQ", "p1Abzya9wzQ", "3e4A5I4K_Q", "_jl4nPeZx7B", "_jl4nPeZx7B", "_jl4nPeZx7B", "_jl4nPeZx7B", "iclr_2022_i4qKmHdq6y8", "iclr_2022_i4qKmHdq6y8", "iclr_2022_i4qKmHdq6y8", "iclr_2022_i4qK...
iclr_2022_qWhajfmKEUt
Delving into Feature Space: Improving Adversarial Robustness by Feature Spectral Regularization
The study of adversarial examples in deep neural networks has attracted great attention. Numerous methods are proposed to eliminate the gap of features between natural examples and adversarial examples. Nevertheless, every feature may play a different role in adversarial robustness. It is worth exploring which feature is more beneficial for robustness. In this paper, we delve into this problem from the perspective of spectral analysis in feature space. We define a new metric to measure the change of features along eigenvectors under adversarial attacks. One key finding is that eigenvectors with smaller eigenvalues are more non-robust, i.e., adversary adds more components along such directions. We attribute this phenomenon to the dominance of the top eigenvalues. To alleviate this problem, we propose a method called \textit{Feature Spectral Regularization (FSR)} to penalize the largest eigenvalue, and as a result, the other smaller eigenvalues get increased relatively. Comprehensive experiments demonstrate that FSR is effective to alleviate the dominance of larger eigenvalues and improve adversarial robustness on different datasets. Our codes will be publicly available soon.
Reject
Based on the observation that the eigenvectors with smaller eigenvalues are more non-robust (i.e., the adversary adds more components along such directions), the authors propose a method called Feature Spectral Regularization (FSR) to penalize the largest eigenvalue, and as a result, the other smaller eigenvalues get increased relatively. In this paper, in addition to FSR, a theoretical analysis along with experimental results on different datasets and models is presented. Although the proposed FSR has some merits, the major concerns from the reviewers include (1) impractical use on large-scale datasets and (2) lack of significant improvement over SOTA. Compared with other submissions I'm handling, I have to reject this manuscript.
test
[ "G0pZo9EZiRa", "mQkuTK5Ic9W", "2Umup7Q47m9", "vvyS4ubEodh", "wsy6ZarqJq_", "b5X3rmF6asj", "ezU6rYaiphr", "Cvi_QQGnpln", "9v8R0dA4qMG", "vVbxhKey1hX", "u-vIQW794PS", "TlTORD3nx3q", "aHXTabuOvPQ", "_jnydEDmhB", "FNUbN9eFknl", "hD2IEAGg18k", "gtEm80Qg0NP", "FZH7RsniP8b", "XUFUQNIW-e...
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", ...
[ " Dear authors,\n\nI think we have reached the end of feasibility of discussing a proof over openreview. But let me just set my argument right, because you misunderstood what I wrote. \nFirst of all, with $U_{\\cdot n}$ I denote the $n$th column of $U$, so this should have the same dimension like $Y$ and hence I...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3 ]
[ -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "mQkuTK5Ic9W", "vvyS4ubEodh", "iclr_2022_qWhajfmKEUt", "wsy6ZarqJq_", "b5X3rmF6asj", "Cvi_QQGnpln", "ERi4GRnVj0", "9v8R0dA4qMG", "aHXTabuOvPQ", "XUFUQNIW-eT", "U-hnF0w6Y-l", "_jnydEDmhB", "FNUbN9eFknl", "gtEm80Qg0NP", "FZH7RsniP8b", "iclr_2022_qWhajfmKEUt", "ERi4GRnVj0", "ERi4GRnVj...
iclr_2022_1Z5P--ntu8
On the Global Convergence of Gradient Descent for multi-layer ResNets in the mean-field regime
Finding the optimal configuration of parameters in ResNet is a nonconvex minimization problem, but first-order methods nevertheless find the global optimum in the overparameterized regime. We study this phenomenon with mean-field analysis, by translating the training process of ResNet to a gradient-flow partial differential equation (PDE) and examining the convergence properties of this limiting process. The activation function is assumed to be $2$-homogeneous or partially $1$-homogeneous; the regularized ReLU satisfies the latter condition. We show that if the ResNet is sufficiently large, with depth and width depending algebraically on the accuracy and confidence levels, first-order optimization methods can find global minimizers that fit the training data.
Reject
This paper proposes an improved mean-field analysis for multi-layer residual networks. Compared with prior works, the proposed analysis removes a full support assumption needed in prior works. The authors have addressed some of the reviewers’ concerns by adding comparisons with the existing analysis of ResNet in the NTK regime, and a more detailed comparison with Ding et al. 2021. While this paper gathers some support from a reviewer, there is still concern that the novelty of this paper is not significant, especially given that the analysis is heavily built upon prior works. I think this paper can benefit from providing a proof sketch to highlight the key difference between the new analysis and existing analyses, or explicitly demonstrating the key proof technique/technical lemmas that enable the removal of the full support assumption. This paper might be a strong work after careful revision.
train
[ "Ujlwl5fgfwl", "yNwjwhqx94b", "mGO9U31Q-nT", "CZTKJOGhiLb", "tUrq-MwM_wl", "ZuSqrQMStC", "SoeJv2277pk", "3X0bq10gqiw", "_bqZLJ4M6E74", "Xj-z-VX86lt", "JGO-wDLwdfU", "hgv4j1cgorW", "f8JuqNIiaw0", "MdS6uApMzkj", "oSAoOV62AnG", "vlM4VjPgVu1" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proves a global convergence result for GD in resnets, whose residual blocks are mean field two-layer neural nets. This is taken with a double limit, where the depth limit is the neural ODE and the width limit is the two-layer MF limit. To prove global convergence of the infinite-depth/width limit, the pa...
[ 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 6 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "iclr_2022_1Z5P--ntu8", "mGO9U31Q-nT", "CZTKJOGhiLb", "tUrq-MwM_wl", "ZuSqrQMStC", "SoeJv2277pk", "3X0bq10gqiw", "_bqZLJ4M6E74", "Ujlwl5fgfwl", "vlM4VjPgVu1", "oSAoOV62AnG", "MdS6uApMzkj", "MdS6uApMzkj", "iclr_2022_1Z5P--ntu8", "iclr_2022_1Z5P--ntu8", "iclr_2022_1Z5P--ntu8" ]
iclr_2022_kcadk-DShNO
Why be adversarial? Let's cooperate!: Cooperative Dataset Alignment via JSD Upper Bound
Unsupervised dataset alignment estimates a transformation that maps two or more source domains to a shared aligned domain given only the domain datasets. This task has many applications including generative modeling, unsupervised domain adaptation, and socially aware learning. Most prior works use adversarial learning (i.e., min-max optimization), which can be challenging to optimize and evaluate. A few recent works explore non-adversarial flow-based (i.e., invertible) approaches, but they lack a unified perspective. Therefore, we propose to unify and generalize previous flow-based approaches under a single non-adversarial framework, which we prove is equivalent to minimizing an upper bound on the Jensen-Shannon Divergence (JSD). Importantly, our problem reduces to a min-min, i.e., cooperative, problem and can provide a natural evaluation metric for unsupervised dataset alignment. We present empirical results of our framework on both simulated and real-world datasets to demonstrate the benefits of our approach.
Reject
This paper offers flow-based alignment methods for alignment of distributions in a domain adaptation setting. While there are many positive aspects of the submission, the experimental results only weakly support the claims. The AC agrees with the critical comments mentioned by reviewer sZ2C, and in particular observes that the experimentation is not state of the art with regard to the current domain adaptation literature. Unfortunately, the submission is not acceptable in its present form.
val
[ "iWMtEkgEoN5", "h1pps-P5ul0", "C_6bv9UIe2y", "lIVVBjK60n", "A96edbQrLrg", "g2H51UzP0Pm", "yTB5hTKyHBL", "GyhHoNVfPzP", "YemYXWXjcFI", "jTOF0TpZ3h", "a7L-mIhI_z-", "Y3EnbAGUYW4", "Shf3SvQWrd" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Hi Reviewer fTCq\n\nThank you for your thoughtful original review. In our response, we have added more discussion about our differences from AlignFlow and LRMF based on your comments. We have also added experimental results on CelebA and structured data (see general response).\n\nGiven our response, have we answ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "YemYXWXjcFI", "g2H51UzP0Pm", "lIVVBjK60n", "A96edbQrLrg", "jTOF0TpZ3h", "GyhHoNVfPzP", "iclr_2022_kcadk-DShNO", "a7L-mIhI_z-", "Shf3SvQWrd", "Y3EnbAGUYW4", "iclr_2022_kcadk-DShNO", "iclr_2022_kcadk-DShNO", "iclr_2022_kcadk-DShNO" ]
iclr_2022_QDDVxweQJy0
Proving Theorems using Incremental Learning and Hindsight Experience Replay
Traditional automated theorem provers for first-order logic depend on speed-optimized search and many handcrafted heuristics that are designed to work best over a wide range of domains. Machine learning approaches in literature either depend on these traditional provers to bootstrap themselves or fall short on reaching comparable performance. In this paper, we propose a general incremental learning algorithm for training domain-specific provers for first-order logic without equality, based only on a basic given-clause algorithm, but using a learned clause-scoring function. Clauses are represented as graphs and presented to transformer networks with spectral features. To address the sparsity and the initial lack of training data as well as the lack of a natural curriculum, we adapt hindsight experience replay to theorem proving, so as to be able to learn even when no proof can be found. We show that provers trained this way can match and sometimes surpass state-of-the-art traditional provers on the TPTP dataset in terms of both quantity and quality of the proofs.
Reject
Four reviewers acknowledged the authors' response and did not change their largely negative scores. The one enthusiastic reviewer did not respond to the more negative reviewers and has not worked in the theorem proving area. The main problem with the paper seems to be that the reviewers were not convinced by the empirical results. They felt that results should have been presented on more widely used benchmark datasets.
test
[ "haLNPoB9YQF", "36eiB4m8A5Z", "7eYtQvDAJ9", "i9AEHtj3d-N", "cL7Df6SUsJE", "85_0FV1KByD", "K5bwWR5lQ5", "Z2jeyTZTck", "LbjOXfMJnK3", "TBFTgsGqrxK" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response.\n\nWe agree that premise selection is an important component of building a theorem prover and that our approach does not solve that problem.\n\nWe address the problem of using selected premises to arrive at the proof, which is an orthogonal but essential component too.\n\nNote that ma...
[ -1, -1, -1, -1, -1, 6, 3, 5, 5, 8 ]
[ -1, -1, -1, -1, -1, 4, 4, 3, 5, 4 ]
[ "36eiB4m8A5Z", "cL7Df6SUsJE", "i9AEHtj3d-N", "cL7Df6SUsJE", "iclr_2022_QDDVxweQJy0", "iclr_2022_QDDVxweQJy0", "iclr_2022_QDDVxweQJy0", "iclr_2022_QDDVxweQJy0", "iclr_2022_QDDVxweQJy0", "iclr_2022_QDDVxweQJy0" ]
iclr_2022_9TdCcMlmsLm
Text Generation with Efficient (Soft) $Q$-Learning
Maximum likelihood estimation (MLE) is the predominant algorithm for training text generation models. This paradigm relies on direct supervision examples, which is not applicable to many emerging applications, such as generating adversarial attacks or generating prompts to control language models. Reinforcement learning (RL) on the other hand offers a more flexible solution by allowing users to plug in arbitrary task metrics as reward. Yet previous RL algorithms for text generation, such as policy gradient (on-policy RL) and Q-learning (off-policy RL), are often notoriously inefficient or unstable to train due to the large sequence space and the sparse reward received only at the end of sequences. In this paper, we introduce a new RL formulation for text generation from the soft Q-learning (SQL) perspective. It enables us to draw from the latest RL advances, such as path consistency learning, to combine the best of on-/off-policy updates, and learn effectively from sparse reward. We apply the approach to a wide range of text generation tasks, including learning from noisy/negative examples, adversarial attacks, and prompt generation. Experiments show our approach consistently outperforms both task-specialized algorithms and the previous RL methods.
Reject
This work proposes an approach to improve non-MLE-based methods of text generation. It reformulates the problem with the soft Q-learning approach from RL instead of standard hard RL formulations from previous text generation work. By doing this, the work allows application of path consistency learning. This is an elegant formulation. However, this reformulation into soft Q-learning appears quite straightforward, and so the application of path consistency learning does not require much change to be used for text generation. This limits the novelty of the work. The experiments are also relatively small-scale and consist of some non-standard tasks such as prompt generation (which is typically evaluated indirectly, via the response to the prompts rather than the prompt itself). As the reviewers mention, evaluating on more large-scale standard tasks such as summarisation or dialog would be more convincing. Finally, the work lacks references to recent works in the field, such as LeakGAN.
train
[ "gP37lSKKyr", "aejYg4_Te5S", "-9ZprIJi9Dq", "zN786ZTv2lC", "SMdkkOMdbxr", "Em37eNsBBA", "Q8j4qC01TTS", "gS0M-NCc5Fw" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewers and ACs:\n\nWe’ve made responses to all questions/comments by the reviewers earlier. We’re happy to discuss more if you have more questions/comments on our work. Thanks!", " Thank you for the feedback!\n\n* **Novelty**\n\nPlease see our general response above for the summary of technical novelty ...
[ -1, -1, -1, -1, -1, 6, 5, 5 ]
[ -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "iclr_2022_9TdCcMlmsLm", "gS0M-NCc5Fw", "Q8j4qC01TTS", "Em37eNsBBA", "iclr_2022_9TdCcMlmsLm", "iclr_2022_9TdCcMlmsLm", "iclr_2022_9TdCcMlmsLm", "iclr_2022_9TdCcMlmsLm" ]
iclr_2022_Ng8wWGXXIXh
On Invariance Penalties for Risk Minimization
The Invariant Risk Minimization (IRM) principle was first proposed by Arjovsky et al. (2019) to address the domain generalization problem by leveraging data heterogeneity from differing experimental conditions. Specifically, IRM seeks to find a data representation under which an optimal classifier remains invariant across all domains. Despite the conceptual appeal of IRM, the effectiveness of the originally proposed invariance penalty has recently been brought into question through stylized experiments and counterexamples. In this work, we investigate the relationship between the data representation, invariance penalty, and risk. In doing so, we propose a novel invariance penalty, and utilize it to design an adaptive rule for tuning the coefficient of the penalty proposed by Arjovsky et al. (2019). Moreover, we provide practical insights on how to avoid the potential failure of IRM considered in the nascent counterexamples. Finally, we conduct numerical experiments on both synthetic and real-world data sets with the objective of building invariant predictors. In our non-synthetic experiments, we sought to build a predictor of human health status using a collection of data sets from various studies which investigate the relationship between human gut microbiome and a particular disease. We substantiate the effectiveness of our proposed approach on these data sets and thus further facilitate the adoption of the IRM principle in other real-world applications.
Reject
This manuscript was the object of a rich and lengthy discussion. The AC also felt compelled to read the paper in detail and discussed it further with the SAC. The authors did a thorough job at addressing some of the reviewers' points. The added results on cross-entropy loss and additional discussion, as well as the points made in "Further Discussion on the Numerical Experiments", are very much appreciated. However, significant concerns remain on establishing connections with prior work, including related ideas on invariance from the causality literature, so as to gain deeper understanding of the implications of the proposed objective. We also strongly encourage the authors to further work on strengthening their theoretical analysis to clearly demonstrate the value of the proposed approach. The proposed formulation is certainly thought-provoking and we urge the authors to pursue their work in view of the above comments.
train
[ "duo_zBXjnr6", "T4kFeiCVdBq", "LItld14refF", "364KUQynLcT", "_cjYJ7xVc2", "fr39BbUM51Q", "nZr2nqqqG5E", "3NBfwIQSXzh", "YvNw5ypjXrt", "GrLumwVQInU", "lvtgb_y0Ozn", "TG2XR4bW4mU" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Ok, so now that we're on the same page, let me summarize the concerns I still have with this submission, and the reason I am maintaining my original recommendation. I want to emphasize again that I think this is a direction with real potential, but I just feel the current submission doesn't sufficiently study, ju...
[ -1, 3, -1, -1, -1, -1, -1, -1, -1, 6, 5, 3 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, 3, 5 ]
[ "364KUQynLcT", "iclr_2022_Ng8wWGXXIXh", "364KUQynLcT", "_cjYJ7xVc2", "nZr2nqqqG5E", "TG2XR4bW4mU", "T4kFeiCVdBq", "lvtgb_y0Ozn", "GrLumwVQInU", "iclr_2022_Ng8wWGXXIXh", "iclr_2022_Ng8wWGXXIXh", "iclr_2022_Ng8wWGXXIXh" ]
iclr_2022_kSqyNY_QrD9
Learning to Solve Multi-Robot Task Allocation with a Covariant-Attention based Neural Architecture
This paper demonstrates how time-constrained multi-robot task allocation (MRTA) problems can be modeled as a Markov Decision Process (MDP) over graphs, such that approximate solutions can be modeled as a policy using Reinforcement Learning (RL) methods. Inspired by emerging approaches for learning to solve related combinatorial optimization (CO) problems such as multi-traveling salesman (mTSP) problems, a graph neural architecture is conceived in this paper to model the MRTA policy. The generalizability and scalability needs of the complex CO problem presented by MRTA are addressed by innovatively using the concept of Covariant Compositional Networks (CCN) to learn the local structures of graphs. The resulting learning architecture is called Covariant Attention-based Mechanism or CAM, which comprises: 1) an encoder: CCN-based embedding model to represent the task space as learnable feature vectors, 2) a decoder: an attention-based model to facilitate sequential decision outputs, and 3) context: to represent the state of the mission and the robots. To learn the feature vectors, a policy-gradient method is used. The CAM architecture is found to generally outperform a state-of-the-art encoder-decoder method that is purely based on Multi-head Attention (MHA) mechanism in terms of task completion and cost function, when applied to a class of MRTA problems with time deadlines, robot ferry range constraints, and multi-tour allowance. CAM also demonstrated significantly better scalability in terms of cost function over unseen scenarios with larger task/robot spaces than those used for training. 
Lastly, evidence regarding the unique potential of learning-based approaches in delivering highly time-efficient solutions is provided for a benchmark vehicle routing problem -- where solutions are achieved 100-1000 times faster compared to a non-learning baseline, and for a benchmark MRTA problem with time and capacity constraints -- where solutions for larger problems are achieved 10 times faster compared to non-learning baselines.
Reject
The paper considers the problem of solving time-constrained multi-robot task allocation (MRTA) problems. Formulating the problem as a Markov decision process (MDP), the paper proposes Covariant Attention-based Mechanism (CAM), a graph neural network-based policy that can be trained to solve MRTA problems via standard RL methods. The encoder adapts the covariant compositional network to improve generalizability, while the decoder extends a recent combinatorial optimization architecture to the multi-agent optimization domain. Experimental results demonstrate that CAM outperforms an encoder-decoder baseline in terms of task completion, generalizability, and scalability, while also providing greater computational efficiency than non-learning baselines. The paper considers an important topic---multi-agent task allocation is an interesting and challenging combinatorial optimization problem. The proposed CAM architecture adapts existing components in an interesting way and seems sensible for the MRTA domain. The reviewers initially raised concerns regarding the conclusions that can be drawn from the experimental evaluation, the significance of the algorithmic contributions, as well as the motivation for the proposed approach. The authors made a concerted effort to address these concerns through the addition of new experimental evaluations (e.g., comparisons to a myopic baseline and ablation studies), updates to the text, and detailed responses to each reviewer. Unfortunately, only one reviewer responded and updated their review (increasing their score). In light of this, the AC also reviewed the paper. The AC agrees with the strengths identified by the reviewers (including those noted above) and with the contributions provided by the additional evaluations. However, the paper remains unnecessarily dense, while at the same time not being self-contained (e.g., the new experimental results are relegated to the appendix rather than appearing in the main text). 
The paper would also benefit from a more concise motivation for learning-based solutions to MRTA and a clearer discussion of the paper's contributions.
train
[ "EiUW_FyjbVr", "Bbtitkfyqcn", "bVcOZW0gFZD", "lRk-rIs8Er8", "p8kznu9Cad", "VWLNIx9amX9", "8Rc8u_5ZEam", "wiH5YJ5uN63", "iHnkuI0ItE", "WozcxvLrZEC", "sc-T7_A3vxP", "AwlSrRMgBlh", "QW_JcWgN-XT", "imLuwIKiYCY", "bh7m6u4KoMV", "axzyHhlW6i", "3i8adPdWfLj", "u8XZ-umyWio", "rx6om2tBwj",...
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "autho...
[ "The paper proposes a graph learning approach for solving the multi-robot task allocation (MRTA) problem. It frames the problem as a Markov Decision Process (MDP) and trains a policy with a graph neural network architecture using REINFORCE. Results show that the proposed approach scales better compared to a non-lea...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 5, 6, 3 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 3, 4 ]
[ "iclr_2022_kSqyNY_QrD9", "EiUW_FyjbVr", "6IuC0uhWmFZ", "6IuC0uhWmFZ", "-3vXyGshTsS", "6IuC0uhWmFZ", "6IuC0uhWmFZ", "6IuC0uhWmFZ", "6IuC0uhWmFZ", "6IuC0uhWmFZ", "EiUW_FyjbVr", "AurQPW3Cghe", "AurQPW3Cghe", "AurQPW3Cghe", "AurQPW3Cghe", "AurQPW3Cghe", "AurQPW3Cghe", "AurQPW3Cghe", ...
iclr_2022_1nlRIagHDUB
Coresets for Kernel Clustering
We devise the first coreset for kernel $k$-Means, and use it to obtain new, more efficient, algorithms. Kernel $k$-Means has superior clustering capability compared to classical $k$-Means particularly when clusters are separable non-linearly, but it also introduces significant computational challenges. We address this computational issue by constructing a coreset, which is a reduced dataset that accurately preserves the clustering costs. Our main result is the first coreset for kernel $k$-Means, whose size is independent of the number of input points $n$, and moreover is constructed in time near-linear in $n$. This result immediately implies new algorithms for kernel $k$-Means, such as a $(1+\epsilon)$-approximation in time near-linear in $n$, and a streaming algorithm using space and update time $\mathrm{poly}(k \epsilon^{-1} \log n)$. We validate our coreset on various datasets with different kernels. Our coreset performs consistently well, achieving small errors while using very few points. We show that our coresets can speed up kernel $k$-Means++ (the kernelized version of the widely used $k$-Means++ algorithm), and we further use this faster kernel $k$-Means++ for spectral clustering. In both applications, we achieve up to 1000x speedup while the error is comparable to baselines that do not use coresets.
Reject
Overall it was decided to reject this paper mainly because it seemed to be a minor extension of known constructions. The reviewers agreed that it certainly is valuable that the paper presents the best known results for kernel k-means, but the paper was viewed by the reviewers as more of an observation and primarily an off-the-shelf application of techniques in the coreset literature. Because of this, the novelty was thought to be a bit below the bar. One suggestion for improving the presentation is that in the thesis of Melanie Schmidt, there seems to be such a construction for kernel k-means which is exponential in 1/eps, and so much worse than what is in this submission. While that's great and certainly something to add and discuss in the paper, the reviewers still felt the technical novelty here was not quite enough to merit acceptance.
train
[ "JyslxPS-o3b", "GjDLJAo7O5t", "AaxNmZu_RI", "lxPtSFNuNAm", "GIydLrwEm0A", "JmcU80pwzdw", "VoPs0nVOfTv", "quPSdfEpFYU" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for addressing my concerns. As for your question, \"Turning Big data into tiny data: Constant-size coresets for $k$-means, PCA and projective clustering\" can be used when the feature map is explicitly given which then a coreset can be obtained. However, the coreset size would be exponential in $k$, i.e., ...
[ -1, -1, -1, -1, -1, 3, 8, 5 ]
[ -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "lxPtSFNuNAm", "iclr_2022_1nlRIagHDUB", "quPSdfEpFYU", "VoPs0nVOfTv", "JmcU80pwzdw", "iclr_2022_1nlRIagHDUB", "iclr_2022_1nlRIagHDUB", "iclr_2022_1nlRIagHDUB" ]
iclr_2022_7Bc2U-dLJ6N
SGDEM: stochastic gradient descent with energy and momentum
In this paper, we propose SGDEM, Stochastic Gradient Descent with Energy and Momentum to solve a large class of general nonconvex stochastic optimization problems, based on the AEGD method that originated in the work [AEGD: Adaptive Gradient Descent with Energy. arXiv: 2010.05109]. SGDEM incorporates both energy and momentum at the same time so as to inherit their dual advantages. We show that SGDEM features an unconditional energy stability property, and derive energy-dependent convergence rates in the general nonconvex stochastic setting, as well as a regret bound in the online convex setting. A lower threshold for the energy variable is also provided. Our experimental results show that SGDEM converges faster than AEGD and generalizes better or at least as well as SGDM in training some deep neural networks.
Reject
The reviewers have the following concerns: 1. The theoretical results for the proposed method are weak. Theorem 4.2 cannot be considered a convergence result, because the bound depends on some random variables $r_{T,i}$. The reviewers agree that a proper analysis would require some knowledge of the lower bound of these variables. Although there is some empirical explanation for this, the lower-boundedness assumption on $r_{T,i}$ is not theoretically justified. The authors acknowledge that this is the main challenge for the present algorithm. In addition, the analysis requires bounded gradient and bounded function value, which are also strong assumptions in nonconvex settings. 2. The empirical performance is not strong. In most experiments, the proposed method is not better than the baseline AEGD. The novelty and contribution of SGDEM over AEGD is quite limited, since the idea of adding momentum is not new. The suggestions to improve this paper are as follows: 1. Since the lower-boundedness assumption on $r_{T,i}$ is not standard and hard to verify, the authors might consider analyzing a theoretical guarantee for it. On the other hand, they could run more experiments on various data sets to get a sense of whether this assumption holds. Next, please try to relax the strong assumptions as discussed. 2. It is better if the authors can show the performance of SGDEM in convex settings, and on other deep learning tasks (e.g. NLP) as suggested by the reviewers. The authors should consider improving the paper based on the reviewers' comments and suggestions and resubmitting to a future venue.
val
[ "c67hXXL-HdS", "vhSQSEslX0f", "-WPfJXYNNDc", "I1GshhchCd8", "5x_moHVsQa", "R4VzvES20h-", "XEUlXqtLqwP", "zaW2BgpBJQZ", "VyJtNXrd5dI", "JGRYBjyllv6" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "In this paper, the authors propose a new algorithm called Stochastic Gradient Descent with Energy and Momentum (SGDEM) for non-convex stochastic optimization problems. The idea of 'energy' variable is from the work [AEGD: Adaptive Gradient Descent with Energy]. The authors prove some similar property for SGDEM (u...
[ 5, -1, 5, 6, -1, -1, -1, -1, -1, 5 ]
[ 5, -1, 4, 5, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2022_7Bc2U-dLJ6N", "VyJtNXrd5dI", "iclr_2022_7Bc2U-dLJ6N", "iclr_2022_7Bc2U-dLJ6N", "R4VzvES20h-", "I1GshhchCd8", "-WPfJXYNNDc", "c67hXXL-HdS", "JGRYBjyllv6", "iclr_2022_7Bc2U-dLJ6N" ]
iclr_2022_Bel1Do_eZC
Inductive Lottery Ticket Learning for Graph Neural Networks
Deep graph neural networks (GNNs) have gained increasing popularity, while usually suffering from unaffordable computations for real-world large-scale applications. Hence, pruning GNNs is of great need but largely unexplored. A recent work, UGS, studies lottery ticket learning for GNNs, aiming to find a subset of model parameters and graph structure that can best maintain the GNN performance. However, it is tailored to the transductive setting, failing to generalize to unseen graphs, which are common in inductive tasks like graph classification. In this work, we propose a simple and effective learning paradigm, Inductive Co-Pruning of GNNs (ICPG), to endow graph lottery tickets with inductive pruning capacity. To prune the input graphs, we design a generative probabilistic model to generate importance scores for each edge based on the input; to prune the model parameters, it views the weights' magnitudes as their importance scores. Then we design an iterative co-pruning strategy to trim the graph edges and GNN weights based on their importance scores. Although it might be strikingly simple, ICPG surpasses the existing pruning method and can be universally applicable in both inductive and transductive learning settings. On ten graph-classification and two node-classification benchmarks, ICPG achieves the same performance level with $14.26\%\sim43.12\%$ sparsity for graphs and $48.80\%\sim91.41\%$ sparsity for the model.
Reject
This paper studies the pruning problem of graph neural networks, i.e. finding lottery tickets for GNNs. In particular, it generalizes UGS by Chen et al. (2021) from the transductive setting to the inductive setting where prediction on unseen graphs is possible. The main idea is: 1) learn a mask network to assign importance scores for edges using the embedding features of the nodes connected, which avoids the doubled parameter memory costs in UGS; 2) prune the edges according to the importance scores and the GCN weights according to their magnitudes. The main concerns from reviewers are about the novelty, evaluation, and scalability. Although generalization to unseen graphs via mask functions on embedding features is a new aspect, the evaluation compares against relatively weak baselines, and inference-time scalability is still an issue.
test
[ "X3oF054tIcx", "NsEl7WizpF8", "2hiKRkmoDKV", "TehrlDxt1br", "1UpIsj4DH-b", "9zsF3UgCfiR", "x2A_Pi5rY8K", "M5YfVPDmew", "s_Sq4MUXvjp", "WzIGyS6otjX", "0JfST23pd0P", "nH0ApvYtD6", "p-MYyI4LZlk", "2GcXMqahkY", "Jd80A8IL7ul", "9FQuCK0GpqH", "lSa2TRCWSDy", "NOCqEjrizTK", "_ZXxlG59hN5"...
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author",...
[ " Dear Reviewer 6qg1,\n\nThank you for your reviews. We want to know if our response address your concerns. Please kindly let us know if there is anything else we can address to convince you for upgrading the scores.\n\nBest wishes,\n\nAuthors", " Dear Reviewer 4F4b,\n\nThank you for your reviews. We want to know...
[ -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, 5, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, 4, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "Jd80A8IL7ul", "IrkBgzsJia", "TehrlDxt1br", "9FQuCK0GpqH", "iclr_2022_Bel1Do_eZC", "NOCqEjrizTK", "1UpIsj4DH-b", "NOCqEjrizTK", "iclr_2022_Bel1Do_eZC", "NOCqEjrizTK", "NOCqEjrizTK", "p-MYyI4LZlk", "iclr_2022_Bel1Do_eZC", "p-MYyI4LZlk", "iclr_2022_Bel1Do_eZC", "NOCqEjrizTK", "NOCqEjri...
iclr_2022_a0SRWViFYW
Stochastic Projective Splitting: Solving Saddle-Point Problems with Multiple Regularizers
We present a new, stochastic variant of the projective splitting (PS) family of algorithms for monotone inclusion problems. It can solve min-max and noncooperative game formulations arising in applications such as robust ML without the convergence issues associated with gradient descent-ascent, the current de facto standard approach in ML applications. Our proposal is the first version of PS able to use stochastic gradient oracles. It can solve min-max games while handling multiple constraints and nonsmooth regularizers via projection and proximal operators. Unlike other stochastic splitting methods that can solve such problems, our method does not rely on a product-space reformulation of the original problem. We prove almost-sure convergence of the iterates to the solution and a convergence rate for the expected residual. By working with monotone inclusions rather than variational inequalities, our analysis avoids the drawbacks of measuring convergence through the restricted gap function. We close with numerical experiments on a distributionally robust sparse logistic regression problem.
Reject
The submission considers a stochastic variant of the projective splitting algorithm, with a focus on monotone inclusion problems, and it proposes a novel separable algorithm with the ability to handle multiple constraints and non-smooth regularizers. All reviewers felt that there were merits to the submission and that the submission was borderline. Public and non-public discussion concluded that the paper would be of greater value to the community if the suggestions of the reviewers and related issues were addressed.
train
[ "c6WGk9vVXjp", "HKmqBQreGbt", "2Ziqf476eH", "pE_QsTewiJF", "O9K_TmQM6U1", "IWZwGEnNhN7", "cVC1h_kIb1d", "Xozx2MA_NAN", "pJsvUrkL_mJ", "tvY9H10FbR", "7DPyItMHshB", "5ZXofrqyXu-", "hvQC159krBT", "29vFb5mqG7Q", "PrvbDIrTtx", "nl0422ycF3", "4hU1id9dp6e", "4by4yaUBHKU", "ss4k5FAWYJV" ...
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for clarifying their position. \n\nWe had thought that your primary concern about the paper was a slow convergence rate, which we have addressed. But it appears you are also concerned with the novelty of the work more generally. We guess that your claim of limited novelty is related to the f...
[ -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "HKmqBQreGbt", "4by4yaUBHKU", "iclr_2022_a0SRWViFYW", "iclr_2022_a0SRWViFYW", "nl0422ycF3", "Xozx2MA_NAN", "Xozx2MA_NAN", "7DPyItMHshB", "4by4yaUBHKU", "5ZXofrqyXu-", "5ZXofrqyXu-", "pE_QsTewiJF", "pE_QsTewiJF", "pE_QsTewiJF", "pE_QsTewiJF", "ss4k5FAWYJV", "4by4yaUBHKU", "iclr_2022...
iclr_2022_-RAFyM-YPj
Counting Substructures with Higher-Order Graph Neural Networks: Possibility and Impossibility Results
While message passing Graph Neural Networks (GNNs) have become increasingly popular architectures for learning with graphs, recent works have revealed important shortcomings in their expressive power. In response, several higher-order GNNs have been proposed that substantially increase the expressive power, albeit at a large computational cost. Motivated by this gap, we explore alternative strategies and lower bounds. In particular, we analyze a new recursive pooling technique of local neighborhoods that allows different tradeoffs of computational cost and expressive power. First, we prove that this model can count subgraphs of size $k$, and thereby overcomes a known limitation of low-order GNNs. Second, we show how recursive pooling can exploit sparsity to reduce the computational complexity compared to the existing higher-order GNNs. More generally, we provide a (near) matching information-theoretic lower bound for counting subgraphs with graph representations that pool over representations of derived (sub-)graphs. We also discuss lower bounds on time complexity.
Reject
This submission has been evaluated by 5 reviewers, with 3 leaning towards borderline accept and 2 leaning towards borderline reject. Reviewers have been consistently concerned about several aspects of this work, i.e. that *the method is only demonstrated on toy datasets*, that there is an issue with the scalability to larger substructures, that the proposed approach did not excel *in the simple task of triangle counting* or even that *the authors did not perform any other experiments even on a toy dataset*, that comparisons to Deep-LRP re. efficiency were not provided, and that *more complex settings and sensitivity* were not investigated. Reviewers also noted that the general idea of recursion has already appeared in GNNs in one setting or another. In making this decision, the AC agrees that there is some potential in the proposed analysis, and reviewers also highlighted this as a positive side of the submission. Yet, it is really hard to overlook at the same time the rebuttal, where the authors had the chance to address all reviewer comments regarding the experiments, their various details, and their variations. Failing to address these comments to the satisfaction of the majority of reviewers makes it impossible for the AC to recommend acceptance, even though there is every chance that the paper will ultimately make it to a high quality venue after a thorough revision (reviewers have given a fair number of good suggestions that should assist the authors).
train
[ "0CxtC-uTR5k", "Pw7bc7rQjEy", "gInme7_vtIF", "67HXg3qyTfZ", "fX9UsS8Fra9", "Zpco_zWuJXI", "LRpZoNm6173", "jNurLKefjsJ", "nlK_Pc63RpF", "znmF69Ex9JG", "4-ThsYonLa", "xbyb1Gq8g3m", "4h8Psabl3HZ", "-3ZCasuG-OO", "8ER_WmwYqFM", "JWf_w7GFzpv", "Naeq7qLa1OH", "iF6DZdumH7Y", "FVVqjIniB-...
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "...
[ " I thank the authors for their response. \n\nSome of the issues I raised were addressed, I do feel however that the paper has to go through major changes in order to make it more accessible to the readers. \n\nI keep my score unchanged. ", " I thank the authors for their response.\n\nRegarding the computational ...
[ -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 5, 6 ]
[ -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 4 ]
[ "4h8Psabl3HZ", "nlK_Pc63RpF", "znmF69Ex9JG", "LRpZoNm6173", "iclr_2022_-RAFyM-YPj", "4-ThsYonLa", "jNurLKefjsJ", "iF6DZdumH7Y", "FVVqjIniB-S", "Naeq7qLa1OH", "xbyb1Gq8g3m", "fX9UsS8Fra9", "JWf_w7GFzpv", "iclr_2022_-RAFyM-YPj", "-3ZCasuG-OO", "iclr_2022_-RAFyM-YPj", "iclr_2022_-RAFyM-...
iclr_2022_91muTwt1_t5
Knowledge Guided Geometric Editing for Unsupervised Drug Design
Deep learning models have been widely used in automatic drug design. Current deep approaches always represent and generate candidate molecules as a 1D string or a 2D graph, and rely on large measurement data from lab experiments for training. However, many disease targets, in particular newly discovered ones, do not have such data available. In this paper, we propose GEKO, which incorporates physicochemical knowledge into deep models, leading to unsupervised drug design. Specifically, GEKO directly models drug molecules in the geometric (3D) space and performs geometric editing with knowledge guidance by self-training and simulated annealing, in a purely training-data-free fashion. Our experimental results demonstrate that GEKO outperforms baselines on all 12 targets, both with and without prior drug-target measurement data.
Reject
The authors describe a drug design method that generates molecules by simulating the addition or deletion of parts of the molecules, using graph networks to capture atom- and fragment-level information and construct new molecules. Simulated annealing is used to 'edit' the 3D structures, and docking simulations, drug-likeness, and synthesizability are used to provide information back into training. The authors compare with multiple baselines on a test set of 12 targets, including the current SOTA model, and report improved performance. Strengths: - The proposed model outperforms other baselines in the multi-objective molecule optimization benchmark. - The model doesn't rely on a data-driven biological activity predictor. Weaknesses: - The reviewers point out that the model seems to be incremental with respect to previous work. - The reviewers have concerns about the reproducibility of the work and find many details lacking. This is a borderline paper with a majority of reviewers voting for rejection. I recommend that the authors address the weaknesses above and resubmit to another venue.
train
[ "haxikg5-TmU", "ltsSlSUtkm-", "Wp02sbxL6Vm", "OFQqv88kdnu", "uONMvmFbBLL", "gyLY8QFP-m", "jlsulc4D37P", "YX__AjliFtn", "F2vOkLuT-jp", "7wucx3T-oi", "Iq4Z8OC5uUR", "qD-vuczku8", "RGtyy2oS0tH", "gZqRLiTy-tv", "VWoePfWuG1x", "brF0Jzc3MNc", "LEwytm_vTLd", "tHZmLREqsRx" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " * As mentioned in the previous response, solving protein structure and identifying ligand binding sites has become easier in recent years due to the rapid development of structural biology. In addition, computational methods (e.g. AlphaFold and Fpocket) can also be used to solve these problems. These advances mak...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, 5, 3 ]
[ -1, -1, 3, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "OFQqv88kdnu", "gyLY8QFP-m", "iclr_2022_91muTwt1_t5", "RGtyy2oS0tH", "RGtyy2oS0tH", "VWoePfWuG1x", "F2vOkLuT-jp", "Iq4Z8OC5uUR", "gZqRLiTy-tv", "iclr_2022_91muTwt1_t5", "qD-vuczku8", "7wucx3T-oi", "LEwytm_vTLd", "tHZmLREqsRx", "Wp02sbxL6Vm", "iclr_2022_91muTwt1_t5", "iclr_2022_91muTw...
iclr_2022_SGOma2sAF7Q
LCS: Learning Compressible Subspaces for Adaptive Network Compression at Inference Time
When deploying deep learning models to a device, it is traditionally assumed that available computational resources (compute, memory, and power) remain static. However, real-world computing systems do not always provide stable resource guarantees. Computational resources need to be conserved when load from other processes is high or battery power is low. Inspired by recent works on neural network subspaces, we propose a method for training a "compressible subspace" of neural networks that contains a fine-grained spectrum of models that range from highly efficient to highly accurate. Our models require no retraining, thus our subspace of models can be deployed entirely on-device to allow adaptive network compression at inference time. We present results for achieving arbitrarily fine-grained accuracy-efficiency trade-offs at inference time for structured and unstructured sparsity. We achieve accuracies on-par with standard models when testing our uncompressed models, and maintain high accuracy for sparsity rates above 90% when testing our compressed models. We also demonstrate that our algorithm extends to quantization at variable bit widths, achieving accuracy on par with individually trained networks.
Reject
This paper proposed a method for adaptive network compression at inference time. However, the reviewers raised various issues with the paper that need to be addressed.
train
[ "-6qtb3osswv", "11jVHKA4Lav", "4yYIxdHJPf", "hHJiDnOTg5z", "DmHzZxEBfsz", "5kWqRLtpA9C", "LoVzcn2yuRK", "uSwRlcpbCR1", "PpkOJ5MIYkhe", "zbJnkGlR7-WR", "oVtnfkD95vQ", "l5zdYnsr6hO", "09Hl29L8661", "N-G9v9L6wr_", "pddY3qvZKc2", "G1OaXu_Jhkr" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your reply, and thank you again for the thoughtful review.\n\nYour understanding of the scenario is correct. Suppose battery is running low, and we wish to compress the on-device model using structured sparsity. We select an alpha corresponding to the desired compression level, then calculate the co...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "11jVHKA4Lav", "4yYIxdHJPf", "09Hl29L8661", "G1OaXu_Jhkr", "pddY3qvZKc2", "pddY3qvZKc2", "N-G9v9L6wr_", "N-G9v9L6wr_", "09Hl29L8661", "iclr_2022_SGOma2sAF7Q", "l5zdYnsr6hO", "iclr_2022_SGOma2sAF7Q", "iclr_2022_SGOma2sAF7Q", "iclr_2022_SGOma2sAF7Q", "iclr_2022_SGOma2sAF7Q", "iclr_2022_S...
iclr_2022_Is5Hpwg2R-h
Targeted Environment Design from Offline Data
In reinforcement learning (RL) the use of simulators is ubiquitous, allowing cheaper and safer agent training than training directly in the real target environment. However, this approach relies on the simulator being a sufficiently accurate reflection of the target environment, which is difficult to achieve in practice, resulting in the need to bridge the sim2real gap. Accordingly, recent methods have proposed an alternative paradigm, utilizing offline datasets from the target environment to train an agent, avoiding online access to either the target or any simulated environment but leading to poor generalization outside the support of the offline data. We propose to combine the two paradigms: offline datasets and synthetic simulators, to reduce the sim2real gap by using limited offline data to train realistic simulators. We formalize our approach as offline targeted environment design (OTED), which automatically learns a distribution over simulator parameters to match a provided offline dataset, and then uses the learned simulator to train an RL agent in standard online fashion. We derive an objective for learning the simulator parameters which corresponds to minimizing a divergence between the target offline dataset and the state-action distribution induced by the simulator. We evaluate our method on standard offline RL benchmarks and show that it learns using as few as 5 demonstrations, and yields up to 17 times higher score compared to strong existing offline RL, behavior cloning (BC), and domain randomization baselines, thus successfully leveraging both offline datasets and simulators for better RL.
Reject
This paper proposes a new approach, which combines offline reinforcement learning with learning in simulation. There were differing views on the paper among the reviewers and we had quite a lot of discussion. As a consequence, serious concerns remain, e.g., whether the results are significant enough, and whether there are clear advantages of the proposed method over directly using offline RL methods. It is not justified whether the proposed framework can use offline data more efficiently or better reduce the gap between mismatched simulators and offline data. The reviewer who gave the highest score decided not to champion the paper. Considering all the discussions, we believe the paper is not ready for publication at ICLR yet.
test
[ "o9aENePRm4K", "W6V3a7-z74V", "YEb1fCMjlI_", "VV51SO4_7pc", "eB0Pm8aPMz0", "WYR_vb8hXQy", "wnyMbz4PdW7", "6UCZx8QLC7H", "aufPc7rOUJH", "Rx2hIzSjCHJ", "WPggNAC27", "UYy2G1Axpj0", "6mhcpTNIngW", "aoI4RvJweZW", "YlFfa--eqOw", "LUpX_YCAh0t", "PYI0MSErLCu" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your feedback.\n\nAny sim-to-real method will inevitably degrade in performance as the gap between simulation and target environment increases. Our extensive empirical results show that OTED is no exception to this. As the reviewer mentions, our results show exactly the point at which OTED is no lon...
[ -1, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 6 ]
[ -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "VV51SO4_7pc", "wnyMbz4PdW7", "eB0Pm8aPMz0", "WPggNAC27", "aufPc7rOUJH", "iclr_2022_Is5Hpwg2R-h", "6mhcpTNIngW", "PYI0MSErLCu", "Rx2hIzSjCHJ", "YlFfa--eqOw", "LUpX_YCAh0t", "iclr_2022_Is5Hpwg2R-h", "WYR_vb8hXQy", "UYy2G1Axpj0", "iclr_2022_Is5Hpwg2R-h", "iclr_2022_Is5Hpwg2R-h", "iclr_...
iclr_2022_hbGV3vzMPzG
On the Impact of Hard Adversarial Instances on Overfitting in Adversarial Training
Adversarial training is a popular method to robustify models against adversarial attacks. However, it exhibits much more severe overfitting than training on clean inputs. In this work, we investigate this phenomenon from the perspective of training instances, i.e., training input-target pairs. To this end, we provide a quantitative and model-agnostic metric measuring the difficulty of an instance in the training set and analyze the model's behavior on instances of different difficulty levels. This lets us show that the decay in generalization performance of adversarial training is a result of the model's attempt to fit hard adversarial instances. We theoretically verify our observations for both linear and general nonlinear models, proving that models trained on hard instances have worse generalization performance than ones trained on easy instances. In addition, this gap in generalization performance is larger in adversarial training. Finally, we investigate solutions to mitigating adversarial overfitting in several scenarios, including when relying on fast adversarial training and in the context of fine-tuning a pretrained model with additional data. Our results demonstrate that adaptively using training data can improve the model's robustness.
Reject
The paper proposes a metric to measure the difficulty of training examples. The main thesis is that hard training examples lead to bad test adversarial error. There are theoretical results on simple models establishing such claims. The paper also proposes a method to adaptively weight training examples to improve training, which gives improvement for adversarial error. The reviewers have raised a number of questions and the rebuttal period has been useful. In particular, I agree with the reviewers that 'model-agnostic' is misleading in this context, and the authors have agreed to remove this in the future. It is felt that more experiments, comparisons to adversarial training, etc. are needed, and I think the paper will need to go through a proper review process again before acceptance.
train
[ "RZBH7nO_Dp5", "9WSqPShM8rV", "o5-FPZT3VAP", "LTw8Ojv8_l3", "kx4OmdEUJav", "hZvuSCHyNgJ", "Ef7ZBwzEoe", "k1qs4I7z_xk", "XziKWWoMBE1" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank all the reviewers for their constructive comments. These comments greatly help us improve our paper, making it clearer and rigorous. Below are the summary of our revision. The changes made in our paper are highlighted blue, including the main text and the appendix.\n\n* **[Section 3 and Appendix D.1]** W...
[ -1, -1, -1, -1, -1, -1, 6, 5, 3 ]
[ -1, -1, -1, -1, -1, -1, 4, 5, 4 ]
[ "iclr_2022_hbGV3vzMPzG", "XziKWWoMBE1", "XziKWWoMBE1", "Ef7ZBwzEoe", "k1qs4I7z_xk", "k1qs4I7z_xk", "iclr_2022_hbGV3vzMPzG", "iclr_2022_hbGV3vzMPzG", "iclr_2022_hbGV3vzMPzG" ]
iclr_2022_y8zhHLm7FsP
Ensemble Kalman Filter (EnKF) for Reinforcement Learning (RL)
This paper is concerned with representing and learning the optimal control law for the linear quadratic Gaussian (LQG) optimal control problem. In recent years, there is a growing interest in re-visiting this classical problem, in part due to the successes of reinforcement learning (RL). The main question of this body of research (and also of our paper) is to approximate the optimal control law without explicitly solving the Riccati equation. For this purpose, a novel simulation-based algorithm, namely an ensemble Kalman filter (EnKF), is introduced in this paper. The algorithm is used to obtain formulae for optimal control, expressed entirely in terms of the EnKF particles. For the general partially observed LQG problem, the proposed EnKF is combined with a standard EnKF (for the estimation problem) to obtain the optimal control input based on the use of the separation principle. The theoretical results and algorithms are illustrated with numerical experiments.
Reject
This paper presents the use of the Ensemble Kalman Filter (EnKF) to solve the linear quadratic Gaussian (LQG) optimal control problem. After reviewing the paper and taking the reviewing process into consideration, here are my comments: - The related work is limited and needs improvement to better contextualize the problem and the solution. - The reinforcement learning paradigm is not really appreciated in the proposal. - The results are rather limited, so more experiments are needed to clearly validate the solution. From the above, the paper does not fulfill the standards of ICLR. I suggest improving the paper accordingly and submitting it to a control systems venue.
train
[ "GSCL0UfON_F", "NTDyAnjHnoD", "7XKAmPrUt4S", "HPEa-iqkwZ", "pU-a98RtTzw", "oEmh-nfMfy", "N-WQ070ANO" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for taking the time to review our paper and provide comments and feedback. We present our responses to their comments here. \n\n**Responses to Strengths in Main Review**\n\nThank you for pointing out the strengths of the work.\n\n**Responses to Weaknesses in Main Review**\n\n1. \n\n> “Contr...
[ -1, -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, -1, 5, 4, 5 ]
[ "N-WQ070ANO", "pU-a98RtTzw", "oEmh-nfMfy", "NTDyAnjHnoD", "iclr_2022_y8zhHLm7FsP", "iclr_2022_y8zhHLm7FsP", "iclr_2022_y8zhHLm7FsP" ]
iclr_2022_P1zfguZHowl
Robust Losses for Learning Value Functions
Most value function learning algorithms in reinforcement learning are based on the mean squared (projected) Bellman error. However, squared errors are known to be sensitive to outliers, both skewing the solution of the objective and resulting in high-magnitude and high-variance gradients. Typical strategies to control these high-magnitude updates in RL involve clipping gradients, clipping rewards, rescaling rewards, and clipping errors. Clipping errors is related to using robust losses, like the Huber loss, but as yet no work explicitly formalizes and derives value learning algorithms with robust losses. In this work, we build on recent insights reformulating squared Bellman errors as a saddlepoint optimization problem, and propose a saddlepoint reformulation for a Huber Bellman error and Absolute Bellman error. We show that the resulting solutions have significantly lower error for certain problems and are otherwise comparable, in terms of both absolute and squared value error. We show that the resulting gradient-based algorithms are more robust, for both prediction and control, with less stepsize sensitivity.
Reject
The paper proposes to use the Huber and absolute loss for value function estimation in reinforcement learning, and optimizes it by leveraging a recent primal-dual formulation by Dai et al. This is a controversial paper. On one hand, it is a well-motivated idea to apply robust losses in RL; the paper implemented the idea well by leveraging the saddle point formulation, and empirically demonstrates its advantages in practice. On the other hand, the technical novelty of this paper is limited. The idea of the Huber loss and the standard conjugate formulation are straightforward applications of existing techniques (despite being well motivated). The authors seem to think that there has been no application of the Huber loss in RL, but existing implementations of RL already use the Huber loss. For example, in the OpenAI Baselines (https://openai.com/blog/openai-baselines-dqn/), they said the following: "Double check your interpretations of papers: In the DQN Nature paper the authors write: “We also found it helpful to clip the error term from the update [...] to be between -1 and 1.”. There are two ways to interpret this statement — clip the objective, or clip the multiplicative term when computing gradient. The former seems more natural, but it causes the gradient to be zero on transitions with high error, which leads to suboptimal performance, as found in one DQN implementation. The latter is correct and has a simple mathematical interpretation — Huber Loss. You can spot bugs like these by checking that the gradients appear as you expect — this can be easily done within TensorFlow by using compute_gradients." The authors discussed the first approach above in the rebuttal, but I am not sure if the authors have considered the second method. If not, it would be worthwhile to discuss and compare with it. See also "Agarwal et al. An Optimistic Perspective on Offline Reinforcement Learning" and "Dabney et al. Distributional Reinforcement Learning with Quantile Regression."
On the other hand, I have not seen the application of the saddle point approach via the primal-dual method of Dai to the Huber loss specifically. It seems that the proposed algorithm is in the end equivalent to MSBE + primal-dual + (h with softmax output). If it is that simple, I think it would help the readers to explicitly point this out upfront (which is an interesting conceptual connection). Because the primal-dual approach needs to approximate h with a neural network, the difference between the two methods is vague in the primal-dual space. A side remark: when we say "an objective for which we can obtain *unbiased* sample gradients", I think that the gradient estimator of the augmented Lagrangian is unbiased; the gradient estimates of MHBE and MABE are still biased. Overall, it is a paper with a well-motivated and valuable contribution, but limited in terms of technical depth and novelty.
train
[ "fjW-5M8oLj", "hoIgxdYFL1W", "JOYn2DVVc0E", "VjaM0WdO6c", "Z_ONYlv7N68", "y8zRmqXOc5N", "UqeVdEzmZtg", "GHemWQjmLVO", "I5YOzIMkaT4", "7mkFxTNOSCx", "XXBKA8L-tS", "VnznblDqAnp", "mzPpkR8ti3", "FGn0EsZRNbm", "e6WdELwe5VS", "9BIEM2jutmr" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for the rebuttal, I believe your response adequately address several of the issues I raised so I have given higher scores where appropriate. ", "This paper starts with the premise that squared error minimization, despite its wide use, might not be the most effective option for learning value functions...
[ -1, 8, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, 4, -1, -1, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 2 ]
[ "VnznblDqAnp", "iclr_2022_P1zfguZHowl", "iclr_2022_P1zfguZHowl", "y8zRmqXOc5N", "iclr_2022_P1zfguZHowl", "XXBKA8L-tS", "GHemWQjmLVO", "I5YOzIMkaT4", "7mkFxTNOSCx", "FGn0EsZRNbm", "Z_ONYlv7N68", "hoIgxdYFL1W", "9BIEM2jutmr", "e6WdELwe5VS", "iclr_2022_P1zfguZHowl", "iclr_2022_P1zfguZHowl...
iclr_2022_a1m8Jba-N6l
$k$-Mixup Regularization for Deep Learning via Optimal Transport
Mixup is a popular regularization technique for training deep neural networks that can improve generalization and increase adversarial robustness. It perturbs input training data in the direction of other randomly-chosen instances in the training set. To better leverage the structure of the data, we extend mixup to $k$-mixup by perturbing $k$-batches of training points in the direction of other $k$-batches using displacement interpolation, i.e. interpolation under the Wasserstein metric. We demonstrate theoretically and in simulations that $k$-mixup preserves cluster and manifold structures, and we extend theory studying the efficacy of standard mixup to the $k$-mixup case. Our empirical results show that training with $k$-mixup further improves generalization and robustness across several network architectures and benchmark datasets of differing modalities.
Reject
This paper proposes an extension of mixup (a data augmentation method) to k-mixup using optimal transport. The idea is to randomly select, at each iteration, two subsets of k samples and compute the optimal transport solution. Each pair of samples matched by the optimal transport plan is then used to perform mixup and promote smoothness in the prediction function. The authors also provide some theoretical results about preservation of the clusters. Finally, numerical experiments show the interest of k-mixup on toy and real-life classification datasets and study the effect of k and the $\alpha$ parameter (of the $\beta$ distribution). All reviewers found the paper interesting and acknowledge that it leads to some performance improvements in practice, but they had several concerns that led to low scores. The justification of the method, and more specifically the link with the theoretical findings, was found lacking: indeed, the results make sense for a large $k$, which is not what is done in practice (and experiments also sometimes show a decrease in performance for large $k$). An interesting discussion of the relation between the proposed approach and minibatch OT is also missing. In addition, the reviewers found the numerical experiments interesting but regret that some mixup approaches have not been compared against, and also noted a small gap in performance for the proposed approach (with no variance reported). Moreover, the adversarial robustness measure used is now considered weak in the literature, and those results could have been made stronger with more modern adversaries. Their final concern was the fact that the method now has two parameters that need tuning and that can have a large impact on performance, for limited gain. The authors provided a detailed reply and revision of the paper that was much appreciated by the reviewers, but that did not change their opinion that this paper still needs more work before being accepted.
For these reasons, the AC recommends rejecting the paper but strongly suggests that the authors take into account the reviewers' comments before resubmitting to an ML venue.
train
[ "iET8AMUCsx3", "5ySEtES_Lr-", "SMxT3JMYgEB", "1Vn9aQOo7rL", "gpYQ3sOw_Pd", "H8dzVXyKTuT", "12deyrN2gTG", "3b48BmYcxXT", "ESQK2jE_hSH", "9vKSMtShxW", "WECshYZkcA2", "bRGULAxy2Rd", "en6ji0WGsjG", "PAAAVT0t8Lk", "zrp755jp-cS", "xz0IgNMepmh", "xZc4ZoJvXw7", "krlGl8qVVGd", "j9fncwOstn...
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_r...
[ " Thank you very much for reading our response and responding in kind. Some further questions:\n\n> It is true that the references I gave applied minibatch OT on data fitting experiments but the authors of [3,4] also theoretically studied the minibatch transport plan, that you use in practice to match the source an...
[ -1, -1, 3, -1, 6, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6 ]
[ -1, -1, 4, -1, 2, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "5ySEtES_Lr-", "xZc4ZoJvXw7", "iclr_2022_a1m8Jba-N6l", "krlGl8qVVGd", "iclr_2022_a1m8Jba-N6l", "9vKSMtShxW", "iclr_2022_a1m8Jba-N6l", "iclr_2022_a1m8Jba-N6l", "SMxT3JMYgEB", "12deyrN2gTG", "12deyrN2gTG", "12deyrN2gTG", "PAAAVT0t8Lk", "j9fncwOstn7", "12deyrN2gTG", "9gt8HBLSiQM", "SMxT...
iclr_2022_xaTensJtCP5
Semi-Empirical Objective Functions for Neural MCMC Proposal Optimization
Current objective functions used for training neural MCMC proposal distributions implicitly rely on architectural restrictions to yield sensible optimization results, which hampers the development of highly expressive neural MCMC proposal architectures. In this work, we introduce and demonstrate a semi-empirical procedure for determining approximate objective functions suitable for optimizing arbitrarily parameterized proposal distributions in MCMC methods. Our proposed Ab Initio objective functions consist of the weighted combination of functions following constraints on their global optima and transformation invariances that we argue should be upheld by general measures of MCMC efficiency for use in proposal optimization. Our experimental results demonstrate that Ab Initio objective functions maintain favorable performance and preferable optimization behavior compared to existing objective functions for neural MCMC optimization. We find that Ab Initio objective functions are sufficiently robust to enable the confident optimization of neural proposal distributions parameterized by deep generative networks extending beyond the regimes of traditional MCMC schemes.
Reject
This paper proposes guiding principles with which to design objective functions for proposal distributions for MCMC. They design one such objective based on GSM (Titsias and Dellaportas, 2019). The two concerns raised by reviewers that resonated the most with me were: - it was not clear that the actual proposed objective was the best way to implement these guiding principles; - a weak empirical evaluation that did not consider online tuning and high-dimensional, highly non-Gaussian targets. After rebuttal, revision, and discussion, reviewers felt that the authors did a reasonable job of addressing the issue of online tuning, but very highly non-Gaussian targets were not addressed. There was still a sense that the ultimate instantiation of the design principles was a somewhat ad hoc loss. Ultimately, I think that this work is just below the bar for acceptance, and it can be improved by clarifying the choices made in implementing the objective and by some more ambitious experiments.
val
[ "xEeCha77_H", "QdEEVTP4X-x", "on5Uqg4Lz3p", "HrtN7k1e3o1", "R9WoUD8jqB", "mFYQkB4-KnF", "yxgIS7M9aU", "DzL5ujK8NK8", "-9USUT4y2hD", "-w8Z0l4CiW", "SHK5RsE8KlN", "rWBxJHulYm", "KnGLSXXoHg6", "uwar8m6_JZr", "2ys-51NA8u", "IoXCQoFYOx3", "0E6R9slXGaP", "69IS9612bOL", "1m9JqID8YJN", ...
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", ...
[ "The authors design an objective function from first principles that can be used to optimize proposal distributions for MCMC. They empirically verify that their objective function with a fixed hyperparameter--tuned a priori on another task--can reproduce theoretical results over a variety of target distributions. #...
[ 8, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3 ]
[ 3, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2022_xaTensJtCP5", "89H6JKBQwew", "HrtN7k1e3o1", "-w8Z0l4CiW", "yxgIS7M9aU", "yxgIS7M9aU", "Si0vSp9EHis", "iclr_2022_xaTensJtCP5", "SHK5RsE8KlN", "rWBxJHulYm", "IoXCQoFYOx3", "KnGLSXXoHg6", "2ys-51NA8u", "69IS9612bOL", "1m9JqID8YJN", "cRqGS-ROpLM", "cRqGS-ROpLM", "xEeCha77_H"...
iclr_2022_kz6rsFehYjd
Towards General Robustness to Bad Training Data
In this paper, we focus on the problem of identifying bad training data when the underlying cause is unknown in advance. Our key insight is that regardless of how bad data are generated, they tend to contribute little to training a model with good prediction performance or more generally, to some utility function of the data analyst. We formulate the problem of good/bad data selection as utility optimization. We propose a theoretical framework for evaluating the worst-case performance of data selection heuristics. Remarkably, our results show that the popular heuristic based on the Shapley value may choose the worst data subset in certain practical scenarios, which sheds light on its large performance variation observed empirically in past work. We then develop an algorithmic framework, DataSifter, to detect a variety of data issues, even unknown ones---a step towards general robustness to bad training data. DataSifter is guided by the theoretically optimal solution to data selection and is made practical by the data utility learning technique. Our evaluation shows that DataSifter achieves and most often significantly improves the state-of-the-art performance over a wide range of tasks, including backdoor, poison, and noisy/mislabeled data detection, data summarization, and data debiasing.
Reject
The paper considers the question of identifying bad data so that models can be trained on the subset of data that is good. This question is formulated as a utility optimization problem. The paper shows that some popular heuristics are quite bad in the framework they propose. They also propose a new algorithmic framework called DataSifter. There is empirical evaluation provided for this. Questions have been raised in the reviews about the size of the models that have been used in the empirical evaluation. The authors have responded to this by suggesting the use of proxy model techniques. There are also questions about learnability of data utility for which some responses are provided in the rebuttal.
train
[ "T_dGiRbsYa3", "69NWTnf9OzH", "m6KF0U8v-UK", "QOHILjEYBhc", "V5Z0Zyq-8c5", "yf1c2kDtP0q", "M9ua3x3Ari", "0AcGIopBb5s", "wpqgKbKMfI2", "OLL9s31_ztR", "_744oHUFOxn", "uOS6TCXxxrb", "LZ3PSfIdWA0", "6k3VLoJ9KlF", "PtsNBhJLYP", "fLDjnSRWq6t", "D7OWSudD5GC", "tNF9pDrykyt", "jp0VbNtubXq...
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official...
[ " We sincerely thank the reviewer for raising the score! \n\nFor your comment about the potentially stronger baseline for TracIn by excluding mislabeled training examples, we believe you refer to “misclassified” training instances instead of “mislabeled” in the last post. We understood your concern about the impact...
[ -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "V5Z0Zyq-8c5", "uOS6TCXxxrb", "LZ3PSfIdWA0", "6k3VLoJ9KlF", "M9ua3x3Ari", "iclr_2022_kz6rsFehYjd", "0AcGIopBb5s", "_744oHUFOxn", "LZ3PSfIdWA0", "6k3VLoJ9KlF", "PtsNBhJLYP", "tNF9pDrykyt", "TxEIr7YUcRK", "pJLiO79bV-", "fLDjnSRWq6t", "D7OWSudD5GC", "yf1c2kDtP0q", "oSJpsC4xjKm", "ic...
iclr_2022_D637S6zBRLD
Learning Symmetric Representations for Equivariant World Models
Encoding known symmetries into world models can improve generalization. However, identifying how latent symmetries manifest in the input space can be difficult. As an example, rotations of objects are equivariant with respect to their orientation, but extracting this orientation from an image is difficult in absence of supervision. In this paper, we use equivariant transition models as an inductive bias to learn symmetric latent representations in a self-supervised manner. This allows us to train non-equivariant networks to encode input data, for which the underlying symmetry may be non-obvious, into a latent space where symmetries may be used to reason about outcomes of actions in a data-efficient manner. Our method is agnostic to the type of latent symmetry; we demonstrate its usefulness over $C_4 \times S_5$ using $G$-convolutions and GNNs, over $D_4 \ltimes (\mathbb{R}^2,+)$ using $E(2)$-steerable CNNs, and over $\mathrm{SO}(3)$ using tensor field networks. In all three cases, we demonstrate improvements relative to both fully-equivariant and non-equivariant baselines.
Reject
This paper proposes a new method for learning symmetric representations for equivariant world models. All reviewers recognized the interesting results in the paper. The reviewers raised some concerns, which were not well addressed even after the rebuttal. For example, Reviewers LG1G and Uu3z mentioned the limitation of using the group for a task and the generality of the approach. Reviewer 5ro3 mentioned the lack of novelty. Though they gave a 6, they were quite neutral about the paper's acceptance. Eventually, after a second round of discussions, we had to make this difficult decision: the current form of this paper is not ready for publication.
train
[ "gHlmOulw3Wv", "MSdknQ_CImf", "pUkVk0RBnJW", "FOKiy5or_gU", "HbO6hIfpemI", "p9jKRIU3d5", "CHuNXN-4w4Z", "_4XgFCjmP-5", "AqqujS2NhMw", "SfhVka6jbXI", "3VqhYJkMFR-", "-2GzqCRMUzl", "RFNyRTRxx0A", "8kjV2PKpk2O", "vq1_Pe938no", "LzXIvr9VCzO" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposes to learn latent representation and transition models that are equivariant to symmetry transformations of states and actions. The proposed “meta-architecture” consists of an encoder followed by an equivariant encoder and an equivariant transition model. The model is trained using (state, action, ...
[ 6, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3 ]
[ "iclr_2022_D637S6zBRLD", "pUkVk0RBnJW", "HbO6hIfpemI", "SfhVka6jbXI", "CHuNXN-4w4Z", "iclr_2022_D637S6zBRLD", "_4XgFCjmP-5", "AqqujS2NhMw", "3VqhYJkMFR-", "p9jKRIU3d5", "gHlmOulw3Wv", "LzXIvr9VCzO", "iclr_2022_D637S6zBRLD", "vq1_Pe938no", "iclr_2022_D637S6zBRLD", "iclr_2022_D637S6zBRLD...
iclr_2022_MDT30TEtaVY
Set Norm and Equivariant Skip Connections: Putting the Deep in Deep Sets
Permutation invariant neural networks are a promising tool for predictive modeling of set data. We show, however, that existing architectures struggle to perform well when they are deep. In this work, we address this issue for the two most widely used permutation invariant networks, Deep Sets and its transformer analogue Set Transformer. We take inspiration from previous efforts to scale neural network architectures by incorporating normalization layers and skip connections that work for sets. First, we motivate and develop set norm, a normalization tailored for sets. Then, we employ equivariant residual connections and introduce the ``clean path principle'' for their placement. With these changes, our many-layer Deep Sets++ and Set Transformer++ models reach comparable or better performance than their original counterparts on a diverse suite of tasks, from point cloud classification to regression on sets of images. We additionally introduce Flow-RBC, a new single-cell dataset and real-world application of permutation invariant prediction. On this task, our new models outperform existing methods as well as a clinical baseline. We open-source our data and code here: link-omitted-for-anonymity.
Reject
The submission focuses on a set norm normalization layer for neural network models, which stands in contrast to a batch norm. The majority of the reviewers felt that this submission is not suitable for publication at ICLR in its current form. These concerns remained after the post-rebuttal discussion. Quoting from the reviewer discussion, the following points remained as significant concerns: 1. lack of novelty. normalization layers have been used previously in other sets architectures. The systematic approach for normalization in the current paper is nice but I am not sure how valuable it is. There exists already extensive literature on normalization layers for graphs (a generalization of sets). 2. lack of motivation for deep networks. The main claim in the paper (see the introduction and figure 1) is that 50 layers should perform better and since it does not seem to be the case in figure 1, it requires studying normalization layers. I am not sure it is a well-established claim. What are the assumptions leading to the conclusion that deep architectures should perform better on the task considered in figure 1? I am also concerned that normalization layers have a major effect on improving "not so deep" networks with 3-10 layers and not only the extreme 50 layers case, making the comparison in the paper between only 3 and 50 layers not enough for telling the full story. Thus evaluation on more depths is required. On balance, the paper does not meet the threshold for acceptance in this round of peer review.
train
[ "JKcXaeHySf", "c09iWpEA_Y6", "TT34WBGH8v", "Np2y5cl8InL", "J0ERk4EZz6p", "k7r0mjV6pXq", "q850lD4eEI", "otIRQ2ze98m", "xX2ZcOPkPMR", "cG3bTwvlS5n" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " 8. *[Requirements for a design of a normalization layer should be refined]* Thank you for your suggestion, we refined the phrasing for the desiderata. We update the sets of varying sizes requirement to emphasize that a general-purpose normalization layer should handle sets of varying sizes without additional desi...
[ -1, -1, -1, -1, -1, -1, 6, 3, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "xX2ZcOPkPMR", "xX2ZcOPkPMR", "cG3bTwvlS5n", "otIRQ2ze98m", "q850lD4eEI", "iclr_2022_MDT30TEtaVY", "iclr_2022_MDT30TEtaVY", "iclr_2022_MDT30TEtaVY", "iclr_2022_MDT30TEtaVY", "iclr_2022_MDT30TEtaVY" ]
iclr_2022_xw04RdwI2kS
Inverse Contextual Bandits: Learning How Behavior Evolves over Time
Understanding a decision-maker's priorities by observing their behavior is critical for transparency and accountability in decision processes—such as in healthcare. Though conventional approaches to policy learning almost invariably assume stationarity in behavior, this is hardly true in practice: Medical practice is constantly evolving as clinical professionals fine-tune their knowledge over time. For instance, as the medical community's understanding of organ transplantations has progressed over the years, a pertinent question is: How have actual organ allocation policies been evolving? To give an answer, we desire a policy learning method that provides interpretable representations of decision-making, in particular capturing an agent's non-stationary knowledge of the world, as well as operating in an offline manner. First, we model the evolving behavior of decision-makers in terms of contextual bandits, and formalize the problem of Inverse Contextual Bandits ("ICB''). Second, we propose two concrete algorithms as solutions, learning parametric and non-parametric representations of an agent's behavior. Finally, using both real and simulated data for liver transplantations, we illustrate the applicability and explainability of our method, as well as benchmarking and validating the accuracy of our algorithms.
Reject
Summary: This paper studies an inverse (linear) contextual bandits (ICB) problem, where, given a $T$-round realization of a bandit policy’s actions and observed rewards, the goal is to design an algorithm to estimate the underlying environment parameter, along with the “belief trajectory” of the bandit policy. A particular emphasis is placed on the belief trajectory being “interpretable” and capturing changes in the policy’s “knowledge of the world” over time. The paper’s main contributions are (i) formalizing the inverse contextual bandits problem, (ii) designing two algorithms for this problem based on two different ways of modelling beliefs of the bandit policy, and (iii) providing empirical illustrations of how their algorithm can be used to investigate and explain changes in medical decision-making over time. Discussion: This paper received high-quality, long, and detailed reviews that highlighted some flaws, in particular in the well-posedness of the problem and the clarity of the writing. The authors' response was long and detailed as well, and its quality was recognized by the committee. However, the consensus is that this work would require a full revision pass allowing the authors to include most of the feedback received in the main text rather than in appendices, to discuss related problems in the literature in more depth, and perhaps to refocus the exposition on the problem considered. Recommendation: Reject.
val
[ "W2z6zfimoh9", "ckMs87fVdIW", "F0EBVIrVYRp", "-6j77gXlUZv", "LJ1_6jJKES", "xUQAA9Pjtsj", "LOk7Tv3MB9", "wLm5rSyID09", "QMdkUZkTMPU", "ZV4RaRTJjEK", "h09rdAK-_2", "wsLE6_6NpZQ", "pONgMkTncND", "C-jr_LRyFa", "kREZNLwm7qL", "hp2qc7fv2Bv", "xehtae7oKeF", "TN2bBj2dFiM", "IEMqdKFeHuQ",...
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "auth...
[ " In inverse problems, there is a difference between *the agent* having access to environment dynamics and *us, as the investigators*, having access to environment dynamics. In our manuscript, when we talk about whether environment dynamics are known or not, we strictly refer to the agent's knowledge. \n\nWhen the ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 2 ]
[ "ckMs87fVdIW", "JlgEAHGgb8m", "LOk7Tv3MB9", "wLm5rSyID09", "QMdkUZkTMPU", "ZV4RaRTJjEK", "jVncPKAGZir", "-P9plnD_q0R", "14e9er0qoPX", "arfo5TutVck", "14e9er0qoPX", "14e9er0qoPX", "iclr_2022_xw04RdwI2kS", "arfo5TutVck", "jVncPKAGZir", "-P9plnD_q0R", "-P9plnD_q0R", "-P9plnD_q0R", "...
iclr_2022_SVey0ddzC4
Connecting Graph Convolution and Graph PCA
The graph convolution operator of the GCN model was originally motivated by a localized first-order approximation of spectral graph convolutions. This work takes a different view, establishing a mathematical connection between graph convolution and graph-regularized PCA (GPCA). Based on this connection, the GCN architecture, shaped by stacking graph convolution layers, shares a close relationship with stacking GPCA. We empirically demonstrate that the unsupervised embeddings from GPCA paired with a 1- or 2-layer MLP achieve similar or even better performance than many sophisticated baselines on semi-supervised node classification tasks across five datasets including Open Graph Benchmark. This suggests that the prowess of graph convolution is driven by graph-based regularization. In addition, we extend GPCA to the (semi-)supervised setting and show that it is equivalent to GPCA on a graph extended with “ghost” edges between nodes of the same label. Finally, we capitalize on the discovered relationship to design an effective initialization strategy based on stacking GPCA, enabling GCN to converge faster and achieve robust performance at large numbers of layers.
Reject
The paper presents several related results. The initial main result consists in relating GPCA to GCN, showing that GPCA can be understood as a first-order approximation of a specific instance of GCN where the W matrix is directly defined on the data. This result is then exploited to define a supervised version of GPCA. As a follow-up, the authors propose a novel GPCA-based network (GPCANet) and a GPCANet initialisation for GNNs. The paper is well written and easy to read. Empirical results are reported to verify the above-mentioned connection between GPCA and GCN, as well as the performance of GPCANet and the proposed initialisation for GNNs. Overall, while the mentioned connection was never explicitly reported in the literature, its existence is not surprising and thus its significance seems to be limited. Also, the performance of GPCANet does not seem to be significant from a statistical point of view. The novel initialisation procedure for GNNs seems to be interesting and promising, although the datasets used may not make its full power evident. The authors' rebuttal and discussion did not change the reviewers' initial assessment.
test
[ "yvSm1u_qF7l", "ImKKeplfZ5E", "S3h15jAPETp", "ri2sD9IYwdr", "2-PblnR7aOX", "M32Rp7elTJb", "6vze9LT-QZu", "cyb6kYwkTGV", "98Pj_Bqp15", "eWVERgZhunS", "wAzqM8R8IaV", "-Ljx8zFdVHO", "m_kt20pilZK" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The reviewer consistently argues the connection between GPCA and graph convolution is trivial, because \"the proposed method is a GC operator with an orthogonal constraint\". We would like to ask the reviewer one question: if the connection between GPCA and GCN is just by adding the orthogonal constraint to graph...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 5, 4 ]
[ "ri2sD9IYwdr", "S3h15jAPETp", "2-PblnR7aOX", "6vze9LT-QZu", "m_kt20pilZK", "6vze9LT-QZu", "-Ljx8zFdVHO", "wAzqM8R8IaV", "eWVERgZhunS", "iclr_2022_SVey0ddzC4", "iclr_2022_SVey0ddzC4", "iclr_2022_SVey0ddzC4", "iclr_2022_SVey0ddzC4" ]
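The GPCA-to-graph-convolution connection discussed above can be illustrated with a small numerical sketch: the minimizer of the standard graph-regularized reconstruction objective ||X - Z||_F^2 + alpha * tr(Z^T L Z) is Z = (I + alpha*L)^{-1} X, and its first-order expansion (I - alpha*L) X is one step of neighborhood mixing, the shape of a graph-convolution layer. This is a generic illustration of that textbook fact; the objective, normalization, and symbols are assumptions, not the paper's exact formulation:

```python
import numpy as np

# Toy graph: a 4-node chain with 2 features per node.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A           # unnormalized graph Laplacian
X = np.random.default_rng(0).normal(size=(4, 2))

alpha = 0.1
# GPCA-style smoother: argmin_Z ||X - Z||_F^2 + alpha * tr(Z^T L Z)
Z_exact = np.linalg.solve(np.eye(4) + alpha * L, X)
# First-order expansion (I + alpha*L)^-1 ~= I - alpha*L: one step of
# graph-convolution-like neighborhood mixing.
Z_approx = (np.eye(4) - alpha * L) @ X

rel_err = np.linalg.norm(Z_exact - Z_approx) / np.linalg.norm(Z_exact)
```

Both versions damp the high-frequency (large-Laplacian-eigenvalue) components of X, which is the "graph-based regularization" the abstract credits for GCN's performance.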
iclr_2022_zrdUVVAvcP2
GrASP: Gradient-Based Affordance Selection for Planning
Planning with a learned model is arguably a key component of intelligence. There are several challenges in realizing such a component in large-scale reinforcement learning (RL) problems. One such challenge is dealing effectively with continuous action spaces when using tree-search planning (e.g., it is not feasible to consider every action even at just the root node of the tree). In this paper we present a method for \emph{selecting} affordances useful for planning---for learning which small number of actions/options from a continuous space of actions/options to consider in the tree-expansion process during planning. We consider affordances that are goal-and-state-conditional mappings to actions/options as well as unconditional affordances that simply select actions/options available in all states. Our selection method is gradient based: we compute gradients through the planning procedure to update the parameters of the function that represents affordances. Our empirical work shows that it is feasible to learn to select both primitive-action and option affordances, and that simultaneously learning to select affordances and planning with a learned value-equivalent model can outperform model-free RL.
Reject
The paper proposes a learning framework that allows for planning in continuous action spaces using tree search. Key to the approach is performing tree search over a discrete set of learned affordances that provide a compact abstraction of the action space that facilitates planning. The affordances are learned by passing gradients through a model-based planner that uses learned models of the dynamics, reward, and state-value functions. Experimental evaluations demonstrate the ability to perform tree search-based planning using the learned affordances in a variety of domains for which tree search would otherwise be difficult. The paper is topical, both with regards to its consideration of affordances as temporal abstractions that facilitate planning as well as the broader notion of integrating planning and learning. Several reviewers agree that the means by which affordances are learned by passing gradients through the planner is both interesting and novel. The reviewers also emphasize that the paper is well written and easy to follow, and that the approach is reproducible as a result. The reviewers raised a few concerns with the initial submission, notably the need for experimental comparisons to other recent baselines, which are important to clarifying the significance of the contributions, and the susceptibility to collapse in the affordance distribution. The authors clarified some of these questions and proposed adding comparisons to other baselines (e.g., DREAMER, for which there is already a comparison in the appendix), however it is not clear whether the submission was updated accordingly. The authors are encouraged to take this feedback into account and to include a more thorough experimental evaluation in any future version of the paper.
train
[ "FCEjZlR6Vs7", "Nd4K1G4XxCn", "76m8u7dGlSC", "UJH91RO9uJl" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a gradient-based affordance selection, i.e., action/option selection, in addition to the value equivalent modules for tree-expansion procedure(s) in planning.\nThe claim is that GrASP can learn primitive-action and option selection and plan in a continuous state and action space to outperform mo...
[ 5, 6, 5, 8 ]
[ 2, 3, 4, 3 ]
[ "iclr_2022_zrdUVVAvcP2", "iclr_2022_zrdUVVAvcP2", "iclr_2022_zrdUVVAvcP2", "iclr_2022_zrdUVVAvcP2" ]
iclr_2022_uy602F8cTrh
CausalDyna: Improving Generalization of Dyna-style Reinforcement Learning via Counterfactual-Based Data Augmentation
Deep reinforcement learning agents trained to learn manipulation tasks in real-world environments with a limited diversity of object properties tend to suffer from overfitting and fail to generalize to unseen testing environments. To improve the agents' ability to generalize to object properties rarely seen or unseen, we propose a data-efficient reinforcement learning algorithm, CausalDyna, that exploits structural causal models (SCMs) to model the state dynamics. The learned SCM enables us to reason counterfactually about what would have happened had the object had a different property value. This can help remedy limitations of real-world environments or avoid risky exploration by robots (e.g., heavy objects may damage the robot). We evaluate our algorithm in the CausalWorld robotic-manipulation environment. When augmented with counterfactual data, our CausalDyna outperforms the state-of-the-art model-based algorithm MBPO and the model-free algorithm SAC, improving sample efficiency by up to 17% and generalization by up to 30%. Code will be made publicly available.
Reject
The paper describes a new method to improve the generalization of model-based RL by means of interventional data augmentation. The key idea is to intervene on the value of a particular variable (e.g., an object property) in the learned dynamics model for episode simulations. Experimental results show that it improves (i) generalization ability in OoD scenarios with respect to the intervened variable, and (ii) sample efficiency in the presence of an unbalanced training distribution.
Strengths:
- connects data augmentation to counterfactual property generation
- clearly written
- novel in applying counterfactual data augmentation to Dyna, as opposed to standard data augmentation techniques in other areas of machine learning
- the paper demonstrates well the benefits of counterfactual data augmentation for model-based RL
Weaknesses:
- a lack of explanation of how the model is supervised to be equivariant to different data augmentations
- empirical results seem to suggest that the proposed data augmentation does not have much of an effect on performance
- the claimed connection between the SCM and the proposed dynamics model seems vague
- the technical contribution seems limited and involves very strong assumptions
- the structural causal model it introduces does not appear to be used by the method at all
- the presentation does not cleanly separate counterfactual reasoning from intervention
- the greatest weakness of the method, acknowledged by the authors, is that there is no way to train the model on altered data; thus, the performance of the policy on these altered data hinges on the extent to which the model, trained without such data, happens to make accurate predictions
All the reviewers voted for rejection. I recommend the authors use the reviewers' comments to improve the paper and resubmit to another venue.
val
[ "pGRBa7WTQvS", "zZBhk186pP5", "0QG4ebXG6JP", "yfT_NToYBWQ", "tqyCtrE8BS0", "3isoQH-NI-", "kw_q-TJdmE4", "eyjVlt28sM0", "nTvG8d2nHB", "kIM-uGcOOzX" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper considers the problem of generalization of a dynamics model to different versions of the same environment, indexed by parameter m. For example, this parameter $m$ might be the mass of an object to be picked up. The problem that the authors tackle is to generalize the agent's policy to environments with ...
[ 3, -1, -1, -1, -1, -1, -1, 5, 5, 3 ]
[ 4, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "iclr_2022_uy602F8cTrh", "3isoQH-NI-", "yfT_NToYBWQ", "kIM-uGcOOzX", "nTvG8d2nHB", "pGRBa7WTQvS", "eyjVlt28sM0", "iclr_2022_uy602F8cTrh", "iclr_2022_uy602F8cTrh", "iclr_2022_uy602F8cTrh" ]
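The counterfactual augmentation idea above can be sketched in a few lines: given a (learned) dynamics model that takes an object property as input, replaying a logged action sequence under an intervened property value do(mass := m') synthesizes transitions the robot never physically collected. The dynamics function here is a hand-written stand-in for the learned SCM, purely for illustration:

```python
def dynamics(state, action, mass):
    """Hand-written stand-in for a learned dynamics model: the object
    property (mass) modulates how far an action moves the state."""
    return state + action / mass

def rollout(actions, mass, s0=0.0):
    """Simulate a trajectory by iterating the dynamics model."""
    states = [s0]
    for a in actions:
        states.append(dynamics(states[-1], a, mass))
    return states

actions = [1.0, 1.0, -0.5]               # logged real-world action sequence
factual = rollout(actions, mass=1.0)     # what actually happened
# Counterfactual augmentation: replay the same actions under the
# intervention do(mass := 2.0) to synthesize transitions for training.
counterfactual = rollout(actions, mass=2.0)
```

As the reviews note, the usefulness of such synthetic transitions hinges entirely on how accurate the learned model is for property values it was never trained on.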
iclr_2022_aJ9BXxg352
Intriguing Properties of Input-dependent Randomized Smoothing
Randomized smoothing is currently considered the state-of-the-art method to obtain certifiably robust classifiers. Despite its remarkable performance, the method is associated with various serious problems such as ``certified accuracy waterfalls'', the certification vs.\ accuracy trade-off, or even fairness issues. Input-dependent smoothing approaches have been proposed to overcome these flaws. However, we demonstrate that these methods lack formal guarantees and so the resulting certificates are not justified. We show that input-dependent smoothing, in general, suffers from the curse of dimensionality, forcing the variance function to have low semi-elasticity. On the other hand, we provide a theoretical and practical framework that enables the usage of input-dependent smoothing even in the presence of the curse of dimensionality, under strict restrictions. We present one concrete design of the smoothing variance and test it on CIFAR10 and MNIST. Our design solves some of the problems of classical smoothing and is formally grounded, yet further improvement of the design is still necessary.
Reject
The paper considers input-dependent randomized smoothing to obtain certified robust classification. The main contribution is the derivation of necessary conditions on how the variance of the smoothing distributions (assumed to be spherically symmetric Gaussian distributions) has to change to achieve certified robustness. All reviewers like this result, as it provides guidance on designing input-dependent smoothing, which is an interesting result for the community and certainly helps future research. On the negative side, the smoothing method derived based on the theory provides little (if at all) improvement in practice, it cannot be scaled to higher dimensions, it does not address the problems it claims to address (the "waterfall" effect, as also admitted by the authors in the discussion), and the presentation should be significantly improved. The paper received mixed reviews. While I think that the presented theoretical results are useful and interesting, the problems mentioned above make me to side with the negative reviewers and suggest rejection of the paper at this point (although this was not an easy decision). While this is only lightly touched in the reviews, I strongly recommend the authors to make the presentation of the theoretical results more comprehensible. It is quite hard to follow the paper as notation is introduced continuously in an ad-hoc and confusing way (e.g., in the proof of Theorem 2, $a$ denotes $\delta$ and $\|\delta\|$), and things are often not adequately defined (e.g., the certified robust radius is not defined formally; in Lemma 1, $x$ is undefined and used for $x_0$ as well as a free parameter, $\chi_N^2$ is only implicitly defined, etc.)
train
[ "FHRrwRzyjfe", "CWBklkldAz", "veB_mHEo8Ix", "Y68c7mX9ezp", "Jt_ZkR0dahT", "ofxog53HX2h", "4QB7sRDtbd", "BNBb2k6vGB", "W4F05-12R4", "WtRWSWUeW7W", "isXuhqSkzoi", "fovPzEV30ll", "39aIOeYyZ16", "bHNSIbsb28", "NaNXYOYzVJa", "4qfPYE2afH3", "u-1s_Ui1lPD", "1O097gSxiG", "BptetzPDnNc", ...
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_...
[ " Thank you for quick clarification of your sentences. \n\nWe just realised you might have meant it in this way at the very same moment you have written it. \n\nIn this case there is probably nothing more to be said here, because we already explained extensively why we didn't consider the anisotropic smoothing and ...
[ -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 8 ]
[ -1, -1, -1, -1, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "veB_mHEo8Ix", "Y68c7mX9ezp", "CWBklkldAz", "BptetzPDnNc", "iclr_2022_aJ9BXxg352", "BNBb2k6vGB", "BNBb2k6vGB", "1O097gSxiG", "iclr_2022_aJ9BXxg352", "4qfPYE2afH3", "xK6Ik_XdNKg", "u-1s_Ui1lPD", "xK6Ik_XdNKg", "x_lh9LPychx", "OYUQxTHkTKW", "Jt_ZkR0dahT", "x_lh9LPychx", "OYUQxTHkTKW"...
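For context, the certificates at issue above build on the classical randomized-smoothing radius R = (sigma/2) * (Phi^-1(p_A) - Phi^-1(p_B)) for a fixed smoothing variance sigma; input-dependent schemes make sigma a function of the input, which is exactly where the guarantees become delicate. A sketch of the standard fixed-sigma radius from the literature (not the paper's input-dependent certificate):

```python
from statistics import NormalDist

def certified_radius(p_a: float, p_b: float, sigma: float) -> float:
    """Fixed-variance randomized-smoothing certificate:
    R = (sigma / 2) * (Phi^-1(p_a) - Phi^-1(p_b)),
    where p_a lower-bounds the top-class probability under Gaussian
    noise and p_b upper-bounds the runner-up class probability."""
    phi_inv = NormalDist().inv_cdf          # standard-normal quantile
    return 0.5 * sigma * (phi_inv(p_a) - phi_inv(p_b))

# Higher smoothed confidence or larger sigma yields a larger certified radius.
r_low_conf = certified_radius(0.7, 0.3, sigma=0.5)
r_high_conf = certified_radius(0.99, 0.01, sigma=0.5)
r_big_sigma = certified_radius(0.7, 0.3, sigma=1.0)
```

Note the trade-off the abstract alludes to: growing sigma enlarges the radius but degrades clean accuracy, which is what motivates making sigma input-dependent in the first place.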
iclr_2022_yCS5dckx_vj
Towards Demystifying Representation Learning with Non-contrastive Self-supervision
Non-contrastive methods of self-supervised learning (such as BYOL and SimSiam) learn representations by minimizing the distance between two views of the same image. These approaches have achieved remarkable performance in practice, but it is not well understood 1) why these methods do not collapse to the trivial solutions and 2) how the representation is learned. Tian et al. made an initial attempt at the first question and proposed DirectPred, which sets the predictor directly. In our work, we analyze a generalized version of DirectPred, called DirectSet($\alpha$). We show that in a simple linear network, DirectSet($\alpha$) provably learns a desirable projection matrix and also reduces the sample complexity on downstream tasks. Our analysis suggests that weight decay acts as an implicit threshold that discards features with high variance under augmentation and keeps features with low variance. Inspired by our theory, we simplify DirectPred by removing the expensive eigen-decomposition step. On CIFAR-10, CIFAR-100, STL-10 and ImageNet, DirectCopy, our simpler and more computationally efficient algorithm, rivals or even outperforms DirectPred.
Reject
This paper furthers recent work by Tian et al. (2021) to explain how representation learning with non-contrastive self-supervision works. The paper accomplishes this by analyzing a family of algorithms in which DirectPred from Tian et al. (2021) is a special case. Their theoretical analysis is performed with linear networks. Overall, the reviewers questioned the added value relative to Tian et al. (2021), noting that "The analysis of DirectSet and DirectCopy succeeds at proving that it can successfully learn a projection matrix onto an invariant feature space subspace, but essentially boils down to a similar approach as DirectPred (albeit more efficient)". The authors in their reply state that they show "how the representation is related to the data distribution and augmentation process"; however, the relative contribution and why it is important aren't made transparent.
train
[ "2MGf95BYsWw", "MrWdEAAHpYw2", "6sMlUyCqlrch", "x2MGM8WgYc1p", "uzgCu1-BYZ0E", "aHuY4OzWXmFj", "7h8yljHiw3x", "RnmS0bH_5Nv", "KTkfH0tYakX", "dTP4w3p8-V3" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response! I'll keep my rating after reading the response.", " A common concern between Reviewer 5xNt and Reviewer G6yG is that our algorithm and analysis are based on Tian et al 2021 and some results may seem similar. Here, we clarify the major differences between our paper and Tian et al 2021...
[ -1, -1, -1, -1, -1, -1, 6, 5, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 4, 3 ]
[ "aHuY4OzWXmFj", "iclr_2022_yCS5dckx_vj", "dTP4w3p8-V3", "KTkfH0tYakX", "RnmS0bH_5Nv", "7h8yljHiw3x", "iclr_2022_yCS5dckx_vj", "iclr_2022_yCS5dckx_vj", "iclr_2022_yCS5dckx_vj", "iclr_2022_yCS5dckx_vj" ]
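The simplification the abstract describes, replacing DirectPred's eigendecomposition with a direct use of the representation correlation matrix, can be sanity-checked numerically: if F is the symmetric PSD correlation matrix, the DirectPred-style choice built from F's eigenvectors with square-rooted eigenvalues shares eigenvectors with (and commutes with) a scaled copy of F itself. The normalization and the identity boost below are illustrative assumptions of one simplified reading, not the exact update from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.normal(size=(256, 8))            # batch of online-network outputs
F = Z.T @ Z / len(Z)                     # feature correlation matrix (PSD)

# DirectPred-style predictor: eigendecompose F, take sqrt of eigenvalues.
w, V = np.linalg.eigh(F)
Wp_pred = V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

# DirectCopy-style predictor (simplified reading of the abstract): skip the
# eigendecomposition and use a scaled copy of F, boosted toward the identity.
eps = 0.1
Wp_copy = F / np.linalg.norm(F) + eps * np.eye(8)
```

Since both predictors are polynomial/spectral functions of F, they act on the same eigenbasis, which is one intuition for why the cheaper copy can stand in for the eigendecomposed version.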
iclr_2022_0Q6BzWbvg0P
Less is More: Dimension Reduction Finds On-Manifold Adversarial Examples in Hard-Label Attacks
Designing deep networks robust to adversarial examples remains an open problem. Likewise, recent zeroth-order hard-label attacks on image classification models have shown comparable performance to their first-order, gradient-level alternatives. It was recently shown in the gradient-level setting that regular adversarial examples leave the data manifold, while their on-manifold counterparts are in fact generalization errors. In this paper, we argue that query efficiency in the zeroth-order setting is connected to an adversary's traversal through the data manifold. To explain this behavior, we propose an information-theoretic argument based on a noisy manifold distance oracle, which leaks manifold information through the adversary's gradient estimate. Through numerical experiments of manifold-gradient mutual information, we show this behavior acts as a function of the effective problem dimensionality. On high-dimensional real-world datasets and multiple zeroth-order attacks using dimension reduction, we observe the same behavior to produce samples closer to the data manifold. This can result in up to 4x decrease in the manifold distance measure, regardless of the model robustness. Our results suggest that taking the manifold-gradient mutual information into account can thus inform better robust model design in the future, and avoid leakage of the sensitive data manifold information.
Reject
In this work, the authors study query efficiency in the zeroth-order setting of adversarial examples. Reviewers pointed out several weaknesses in the work. They mentioned that the paper is not well organized and poorly written, the experiments are not comprehensive, and the practical significance of the proposed method is unclear. Although reviewers appreciated the authors' efforts and responses in the discussion period, they felt that the paper is not above the acceptance threshold this round and still needs a bit more work.
train
[ "UsAuF3KD0g_", "3fWvpL2cbGe", "EfsO3RFw23z", "_yZYkK_xmjJ", "uC63C9EyIBn", "rBdqKJlpm0M", "WabLmUQpX9-", "5jAZehnhftz", "xkyKDUitSSV", "gmDbxa0-XYN", "EvdKt8PMpIf", "lI_5MVTI62k", "5bn-bVymOG" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Dear Area Chair and Reviewers,\n\nWe thank Reviewer cq6A for updating the post-rebuttal comments. As the discussion deadline is closing soon, we would like to follow up to ensure we have successfully conveyed the merits and main contributions of our work. We took the silence of the post-rebuttal discussion as a p...
[ -1, 3, 5, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5 ]
[ -1, 4, 2, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3 ]
[ "_yZYkK_xmjJ", "iclr_2022_0Q6BzWbvg0P", "iclr_2022_0Q6BzWbvg0P", "iclr_2022_0Q6BzWbvg0P", "3fWvpL2cbGe", "iclr_2022_0Q6BzWbvg0P", "EfsO3RFw23z", "5bn-bVymOG", "3fWvpL2cbGe", "3fWvpL2cbGe", "lI_5MVTI62k", "iclr_2022_0Q6BzWbvg0P", "iclr_2022_0Q6BzWbvg0P" ]
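The link between query efficiency and effective dimensionality argued above can be made concrete with finite differences: estimating a gradient coordinate by coordinate costs 2d queries in the ambient space, but only 2k in a k-dimensional search subspace, and nothing is lost when the relevant directions lie inside that subspace. The objective f and the axis-aligned subspace below are hypothetical stand-ins for an attack loss and a learned dimension-reduction map:

```python
import numpy as np

def f(x):
    """Hypothetical black-box loss; only the first 5 of 100 coordinates
    matter, i.e. the problem has a low effective dimensionality."""
    return 0.5 * float(np.sum(x[:5] ** 2))

def fd_gradient(x, basis, eps=1e-4):
    """Central-difference gradient estimate restricted to span(basis).
    Costs 2 * basis.shape[1] queries to the black box."""
    coeffs = np.array([(f(x + eps * b) - f(x - eps * b)) / (2 * eps)
                       for b in basis.T])
    return basis @ coeffs

d = 100
x = np.ones(d)
g_full = fd_gradient(x, np.eye(d))         # 200 queries, full dimension
g_low = fd_gradient(x, np.eye(d)[:, :10])  # 20 queries, 10-dim subspace
true_g = np.concatenate([np.ones(5), np.zeros(d - 5)])
```

Because the true gradient lies in the reduced subspace, the 20-query estimate matches the 200-query one, a tenth of the query budget for the same information.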
iclr_2022_djZBr4Z7jcz
On the regularization landscape for the linear recommendation models
Recently, a wide range of recommendation algorithms inspired by deep learning techniques have emerged as the performance leaders on several standard recommendation benchmarks. While these algorithms were built on different DL techniques (e.g., dropout, autoencoders), they have similar performance and even similar cost functions. This paper studies whether the models' comparable performance is sheer coincidence, or whether they can be unified into a single framework. We find that all linear performance leaders effectively add only a nuclear-norm-based regularizer or a Frobenius-norm-based regularizer. The former possess a (surprisingly) rigid structure that limits the models' predictive power, but their solutions are low-rank and have closed form. The latter are more expressive and more efficient for recommendation, but their solutions are either full-rank or require executing hard-to-tune numeric procedures such as ADMM. Along this line of findings, we further propose two low-rank, closed-form solutions, derived by carefully generalizing Frobenius-norm-based regularizers. The new solutions get the best of both the nuclear-norm and Frobenius-norm worlds.
Reject
The paper presents a new perspective on recommendation systems, categorizing them as linear predictors where the main difference between the various methods is the regularizer. The authors then propose an objective function that aims at optimizing the Frobenius norm while maintaining a low-rank solution, and present algorithms that have closed-form solutions based on the SVD. The reviewers noted the novelty of the framework, but the overall assessment after the discussion was that the theoretical contribution was limited. The algorithm proposed by the authors does not provide any improvement on standard criteria (performance, computational complexity), which makes the algorithmic/experimental contribution limited as well.
train
[ "QhLOmPXEN-N", "1Ei5aNBnE41", "Oo1X631wx_D", "9sNW321Gjp", "jw2Fh-Krt5", "8n3eJCDPmRv", "MAbMKZum5ms", "Zehw1-6Mjl", "b6XA2Clz0L", "u7PIkXWfUlx", "GneCnY1GW8w", "E-QqXaeBT0B", "Np3GE8VgGns", "dxs-n-v5M5C", "p_BafG4uGNu" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper unifies under one theoretical framework the different state-of-the-art regularization approaches for linear collaborative-filtering recommendation models which leverage the user-item interaction data matrix. \n\nThe authors classify these algorithms among two families: 1) the nuclear-norm-based regulari...
[ 5, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5 ]
[ 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2022_djZBr4Z7jcz", "iclr_2022_djZBr4Z7jcz", "Np3GE8VgGns", "jw2Fh-Krt5", "8n3eJCDPmRv", "1Ei5aNBnE41", "QhLOmPXEN-N", "dxs-n-v5M5C", "p_BafG4uGNu", "b6XA2Clz0L", "Zehw1-6Mjl", "MAbMKZum5ms", "iclr_2022_djZBr4Z7jcz", "iclr_2022_djZBr4Z7jcz", "iclr_2022_djZBr4Z7jcz" ]
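Among the Frobenius-norm-based "performance leaders" the abstract refers to, a representative closed form is the EASE-style item-item model (Steck, 2019): ridge regression of the interaction matrix onto itself with a zero-diagonal constraint, solvable with a single matrix inverse. A minimal sketch of that known solution (not the paper's new low-rank estimators):

```python
import numpy as np

def ease_weights(X: np.ndarray, lam: float) -> np.ndarray:
    """Closed form for: min_B ||X - X B||_F^2 + lam ||B||_F^2, diag(B) = 0.
    Off-diagonal, B[j, k] = -P[j, k] / P[k, k] with P = (X^T X + lam I)^-1."""
    P = np.linalg.inv(X.T @ X + lam * np.eye(X.shape[1]))
    B = -P / np.diag(P)          # divides column k by P[k, k]
    np.fill_diagonal(B, 0.0)
    return B

# Toy user-item interactions: items 0 and 1 co-occur, item 2 is independent.
X = np.array([[1, 1, 0],
              [1, 1, 0],
              [0, 0, 1],
              [1, 1, 1],
              [0, 0, 1]], dtype=float)
B = ease_weights(X, lam=0.5)
scores = X @ B                   # recommendation scores for each user
```

The learned weights reflect co-occurrence: item 0's affinity to its frequent companion item 1 exceeds its affinity to the mostly unrelated item 2, and the full-rank nature of B is exactly the limitation the abstract's low-rank proposals target.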
iclr_2022_3pZTPQjeQDR
How BPE Affects Memorization in Transformers
Training data memorization in NLP can be both beneficial (e.g., closed-book QA) and undesirable (personal data extraction). In any case, successful model training requires a non-trivial amount of memorization to store word spellings, various linguistic idiosyncrasies and common knowledge. However, little is known about what affects the memorization behavior of NLP models, as the field tends to focus on the equally important question of generalization. In this work, we demonstrate that the size of the subword vocabulary learned by Byte-Pair Encoding (BPE) greatly affects both the ability and the tendency of standard Transformer models to memorize training data, even when we control for the number of learned parameters. We find that with a large subword vocabulary size, Transformer models fit random mappings more easily and are more vulnerable to membership inference attacks. Similarly, given a prompt, Transformer-based language models with large subword vocabularies reproduce the training data more often. We conjecture this effect is caused by the reduction in sequence length that happens as the BPE vocabulary grows. Our findings can allow a more informed choice of hyper-parameters, better tailored to a particular use-case.
Reject
This paper investigates the role of BPE and vocabulary sizes in memorization in Transformer models. Through a series of experiments on random label prediction, training data recovery, and membership inference attacks, the paper shows that larger vocabulary sizes lead to increased memorization. The reviewers all agree that the paper investigates an important question and does so thoroughly. The main concerns were about: (1) the validity of the conclusion that it is indeed sequence length which affects memorization; and (2) the lack of more tasks to validate the findings. For (1), the authors added another set of experiments which further rule out frequency effects as a factor, but I agree with Reviewer KAZC that more evidence is needed which directly shows that sequence length is responsible (e.g., are shorter PAQ questions memorized better?). For (2), the authors shared a Google Drive link with additional results on NMT after the deadline, which the reviewers appreciated. Overall, however, the paper needs more work in order to unify all these results in a single draft.
test
[ "RXRHsv-x3aV", "kvjX77J-v5O", "AQ6YnH3UhBX", "3m08v_qkBrl", "d092zqPWgUN", "v6m_y7vN4h2", "SL_mu-1NgE-", "8PjLtl7y9T5", "jdKfQDshKBM", "sUmRsifp1J", "llRvxA9Ytcr", "dy35ev9gB_V" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper studies the memorization properties of an NLP models conditioned on how large is the vocabulary size. They concentrate on widely used BPE algorithm in order to split original data into subword units. Further they construct a test-bed consisting on several tasks where each task is related with a specific...
[ 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2022_3pZTPQjeQDR", "iclr_2022_3pZTPQjeQDR", "d092zqPWgUN", "dy35ev9gB_V", "kvjX77J-v5O", "SL_mu-1NgE-", "jdKfQDshKBM", "RXRHsv-x3aV", "RXRHsv-x3aV", "kvjX77J-v5O", "dy35ev9gB_V", "iclr_2022_3pZTPQjeQDR" ]
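The paper's conjecture above, that larger BPE vocabularies act through shorter sequences, is easy to see mechanically: every additional merge in the vocabulary can only shorten the token sequences it applies to. A simplified greedy BPE on a toy corpus (character-level start, no end-of-word markers; these conventions are assumptions, not the paper's exact tokenizer):

```python
from collections import Counter

def bpe_tokenize(corpus, num_merges):
    """Greedy BPE sketch: repeatedly merge the most frequent adjacent pair.
    Each merge adds one subword to the vocabulary."""
    seqs = [list(word) for word in corpus]
    for _ in range(num_merges):
        pairs = Counter()
        for seq in seqs:
            for a, b in zip(seq, seq[1:]):
                pairs[(a, b)] += 1
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]
        new_seqs = []
        for seq in seqs:
            out, i = [], 0
            while i < len(seq):
                if i + 1 < len(seq) and (seq[i], seq[i + 1]) == (a, b):
                    out.append(a + b)      # apply the merge
                    i += 2
                else:
                    out.append(seq[i])
                    i += 1
            new_seqs.append(out)
        seqs = new_seqs
    return seqs

corpus = ["memorization", "memory", "memorable", "organization"]
len_small_vocab = sum(len(s) for s in bpe_tokenize(corpus, 2))
len_large_vocab = sum(len(s) for s in bpe_tokenize(corpus, 10))
```

More merges (a larger subword vocabulary) yields strictly shorter tokenized sequences on this corpus, which is the length-reduction mechanism the abstract conjectures drives memorization.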
iclr_2022_7Rnf1F7rQhR
Best Practices in Pool-based Active Learning for Image Classification
The recent popularity of active learning (AL) methods for image classification using deep learning has led to a large number of publications that have made significant progress in the field. Benchmarking the latest works in an exhaustive and unified way and evaluating the improvements made by novel methods is of key importance to advance research in AL. Reproducing state-of-the-art AL methods is often cumbersome, since the results and the ranking order of different strategies are highly dependent on several factors, such as training settings, the type of data used, network architectures, loss functions, and more. With our work we highlight the main factors that should be considered when proposing new AL strategies. In addition, we provide solid benchmarks to compare new methods with existing ones. We therefore conduct a comprehensive study on the influence of these key aspects, providing best practices in pool-based AL for image classification. We emphasize aspects such as the importance of using data augmentation, the need to separate the contributions of the classification network and the acquisition strategy to the overall performance, and the advantages that a proper initialization of the network can bring to AL. Moreover, we make a new codebase available that enables state-of-the-art performance for the investigated methods, which we hope will serve the AL community as a new starting point when proposing new AL strategies.
Reject
The authors study the training settings that may affect active deep learning performance, including cold/warm start, leveraging unlabeled data, and initial set selection, for each active learning strategy. The findings on several data sets help further the understanding of AL, with some pieces of insight to inspire future research. The reviewers were at best lukewarm about the work prior to the rebuttal. Some turned more positive, but none were willing to strongly champion the paper's acceptance, even after the authors provided a decent rebuttal. This leaves the paper as a borderline case, and the recommendation comes from carefully checking the latest revision and calibrating its score with other submissions. The reviewers are generally positive about the breadth of the study, the potential impact of the codebase, and the systematic study that can inspire future works. Some clarified issues include comments on future research directions and the labeling-efficiency plot (which is, however, not analyzed deeper in the main text), and results on additional settings like transfer learning (somewhat preliminary). In the end, two remaining concerns surround whether the technical contribution and the conclusions are sufficiently solid, including
* limited insights: Some reviewers comment that the insights are on the lighter side. The authors identify several issues that may affect the performance of the underlying tasks of active learning, and find that the best setting differs across different active learning strategies. But given that the paper offers at best "best practices of training models on actively-queried labels", it is not clear whether the authors achieve their claimed goal to "compare different strategies in a fair way"; in particular, the conclusion for this particular comparison seems to be missing (e.g., which is recommended in practice, BADGE or LL4AL or others?).
Also, given that only three data sets (5 after rebuttal) have been studied in this work (see item below), the "generalization ability" of the conclusions in this paper cannot be clearly established. While the authors provided some additional pieces in the rebuttal, those pieces need more study to be fully conclusive. Some reviewers are also concerned that the conclusions are rather scattered. From a practical perspective, there appears to be a chicken-and-egg problem on whether to fix the active strategy first (and then train the model with the best setting/practice), or fix the training setting first (and then select the best strategy). The authors may want to add more arguments on why they focus on the former rather than the latter. * limited experiments: several reviewers point out that the few data sets used could not fully justify the "best practice", and demand data sets like ImageNet. The authors offered some new results on TinyImageNet and CIFAR100, but those are not studied as deeply as the other data sets at this point. A more careful study on the two (and other) data sets is thus strongly recommended.
train
[ "C4rkpC7nT-T", "Rf-nGVD2NP8", "FTydbCbYF5d", "C33rb2v1_Va", "TzXJ6MyHp6", "xcJpUnQhQtV", "jbPusZHq6Fe", "XX_NW8tGane", "7TuL1FMIEkb" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\nThe paper provides benchmarking of some of the popular active learning methods on CIFAR10, SVHN and FashionMNIST datasets. Effects of factors such as choice of backbone, data augmentation, optimizers, learning rate, cold vs warm starting are studied and the conclusions are provided as best practices. Analysis is...
[ 6, 8, -1, -1, -1, -1, -1, 5, 6 ]
[ 5, 4, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2022_7Rnf1F7rQhR", "iclr_2022_7Rnf1F7rQhR", "C4rkpC7nT-T", "Rf-nGVD2NP8", "7TuL1FMIEkb", "XX_NW8tGane", "iclr_2022_7Rnf1F7rQhR", "iclr_2022_7Rnf1F7rQhR", "iclr_2022_7Rnf1F7rQhR" ]
iclr_2022_0uZu36la_y4
Protect the weak: Class focused online learning for adversarial training
Adversarial training promises a defense against adversarial perturbations in terms of average accuracy. In this work, we identify that the focus on the average accuracy metric can create vulnerabilities for the "weakest" class. For instance, on CIFAR10, where the average accuracy is 47%, the worst class accuracy can be as low as 14%. The performance sacrifice of the weakest class can be detrimental for real-world systems, if indeed the threat model can adversarially choose the class to attack. To this end, we propose to explicitly minimize the worst class error, which results in a min-max-max optimization formulation. We provide high probability convergence guarantees of the worst class loss for our method, dubbed class focused online learning (CFOL), which can be plugged into existing training setups with virtually no overhead in computation. We observe significant improvements in the worst class accuracy of 30% for CIFAR10. We also observe consistent behavior across CIFAR100 and STL10. Intriguingly, we find that minimizing the worst case can sometimes even improve the average.
Reject
The paper looks at the worst-class adversarial error for multi-class classification problems. The question is: given a certain level of adversarial error on average, is it possible that some classes have adversarial error significantly worse than average? And if so, is this a problem? I agree with the authors that there are applications where such an imbalance could be problematic; beyond the examples provided by the authors, I can also think of this being important from the point of view of fairness, depending on what exactly the class labels represent. The reviewers have raised the question of low accuracies reported in the empirical results compared to the state of the art on those datasets for adversarial learning. I share these concerns -- especially, it is worth understanding whether more accurate models also have such an imbalance, or whether this imbalance is a result of incomplete training or models that are not representationally powerful enough. While I agree with the authors that state-of-the-art results are not required for ICLR submissions, especially those making conceptual contributions, in this case I think further experiments may be needed in addition to addressing the other questions raised in the reviews. The authors acknowledge that they have made significant revisions in response to the reviews, but I think that would require a fresh review cycle.
train
[ "hop70nW0dWa", "03Ars1ThtN", "WmAoD7kupFb", "6_Yi0NWqCev", "fFpzckYtXj", "xx3mKoSHZB", "y-gRVg1fWj1", "e15UKUK6FpP", "yrBffwqOzeM", "yMg9hsKqb7", "_Fan6XwEixg", "HZJ4mF4Twtg", "TghEbmLCuZ", "bK5zOnlC6Gt", "hkqyH4Q9ljG", "wlTIEW7FEqy" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the authors' thorough response and for accordingly updating the paper!\n\nMy main remaining concern is that of the low clean test accuracy of the baseline/starting empirical setup, as I agree with Reviewer 85vN: indeed, while there is no need for state-of-art results, the fact that the starting baselin...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "yrBffwqOzeM", "yrBffwqOzeM", "y-gRVg1fWj1", "HZJ4mF4Twtg", "xx3mKoSHZB", "_Fan6XwEixg", "yMg9hsKqb7", "iclr_2022_0uZu36la_y4", "wlTIEW7FEqy", "hkqyH4Q9ljG", "bK5zOnlC6Gt", "TghEbmLCuZ", "iclr_2022_0uZu36la_y4", "iclr_2022_0uZu36la_y4", "iclr_2022_0uZu36la_y4", "iclr_2022_0uZu36la_y4" ...
iclr_2022_VMuenFh7IpP
What Doesn't Kill You Makes You Robust(er): How to Adversarially Train against Data Poisoning
Data poisoning is a threat model in which a malicious actor tampers with training data to manipulate outcomes at inference time. A variety of defenses against this threat model have been proposed, but each suffers from at least one of the following flaws: they are easily overcome by adaptive attacks, they severely reduce testing performance, or they cannot generalize to diverse data poisoning threat models. Adversarial training, and its variants, are currently considered the only empirically strong defense against (inference-time) adversarial attacks. In this work, we extend the adversarial training framework to defend against (training-time) data poisoning. Our method desensitizes networks to the effects of such attacks by creating poisons during training and injecting them into training batches. We show that this defense withstands adaptive attacks, generalizes to diverse threat models, and incurs a better performance trade-off than previous defenses.
Reject
Reviewer E8z9 maintained a number of serious concerns, including the efficacy of the defense in higher-poisoning settings, overclaimed contributions in terms of L0 defenses (which are mostly achieved by the CutMix baseline), as well as the novelty and generalizability of the approach. The author responses were unconvincing, and all other reviewers participated in the discussion, conceding that they too were unable to provide compelling arguments against E8z9's comments. Other reviewers claimed that these drawbacks may be "acceptable" for a first step, but were not willing to defend the paper very strongly. We note that E8z9 claims to have reviewed a previous version of the paper, in which these issues were already present. The authors claim that E8z9's key points had been addressed in this version, but the reviewer maintains that the issues still persist in the latest version. The authors are advised to take these comments into account for further versions of this paper.
train
[ "7YVWXv4nEa_", "CxYlCz-UUzJ", "uO7RrZFVvNN", "D0VYgDICHng", "gg710RPwxS6", "H1R1GXFzupR", "_kUT96Z8_Ok", "kckmbGIcRxj", "P2PW5oEKx0R", "w9kyH9a66I_", "TGx2BKVj64O", "a87qvPvLg8b", "sdMn-0mIM9J", "fFqFdi4sXN0" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a defense against backdoor attacks and targeted attacks. Specifically, the proposed defense injects randomly chosen poisoned instances and targeted instances into the training data, and then forces the model to have right behaviors on these instances. Experimental results demonstrate the effect...
[ 8, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3 ]
[ 3, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2022_VMuenFh7IpP", "iclr_2022_VMuenFh7IpP", "D0VYgDICHng", "TGx2BKVj64O", "P2PW5oEKx0R", "_kUT96Z8_Ok", "kckmbGIcRxj", "w9kyH9a66I_", "CxYlCz-UUzJ", "7YVWXv4nEa_", "fFqFdi4sXN0", "sdMn-0mIM9J", "iclr_2022_VMuenFh7IpP", "iclr_2022_VMuenFh7IpP" ]
iclr_2022_hdSn_X7Hfvz
Deep Probability Estimation
Reliable probability estimation is of crucial importance in many real-world applications where there is inherent uncertainty, such as weather forecasting, medical prognosis, or collision avoidance in autonomous vehicles. Probability-estimation models are trained on observed outcomes (e.g. whether it has rained or not, or whether a patient has died or not), because the ground-truth probabilities of the events of interest are typically unknown. The problem is therefore analogous to binary classification, with the important difference that the objective is to estimate probabilities rather than predicting the specific outcome. The goal of this work is to investigate probability estimation from high-dimensional data using deep neural networks. There exist several methods to improve the probabilities generated by these models but they mostly focus on classification problems where the probabilities are related to model uncertainty. In the case of problems with inherent uncertainty, it is challenging to evaluate performance without access to ground-truth probabilities. To address this, we build a synthetic dataset to study and compare different computable metrics. We evaluate existing methods on the synthetic data as well as on three real-world probability estimation tasks, all of which involve inherent uncertainty: precipitation forecasting from radar images, predicting cancer patient survival from histopathology images, and predicting car crashes from dashcam videos. Finally, we also propose a new method for probability estimation using neural networks, which modifies the training process to promote output probabilities that are consistent with empirical probabilities computed from the data. The method outperforms existing approaches on most metrics on the simulated as well as real-world data.
Reject
The paper addresses the problem of uncertainty quantification in deep neural nets. The authors introduce the CaPE calibration loss to deal with the inherent uncertainty in probabilistic prediction, e.g. medical prognosis, weather prediction or collision prediction. The paper initially received contrasting reviews: two weak acceptances, one weak rejection, and one strong rejection recommendation. The main limitations pointed out by reviewers related to the unclear definition of the problem setting, the limited contributions, and clarifications on experiments (comparison with deep ensembles). After the authors' feedback, the reviewers were not convinced by the clarification of the problem setting, and there was a consensus among reviewers to reject the paper. The AC's own reading confirmed the concerns raised by the reviewers, and also identified additional shortcomings of the current submission. The paper addresses the problem of proper quantification of data uncertainty (generally referred to as aleatoric uncertainty), and the CaPE calibration loss should be positioned with respect to the literature on the topic. The AC thus recommends rejection, but encourages the authors to re-submit their work after specifying its focus and motivation.
train
[ "tscoO_knIeG", "qKrM0d4Do9W", "coOHZJZkUS", "bSrftwInm_q", "B0kZ2U2fLVy", "s6BiOYr7_-", "fzlpEg0V4YZ", "ei-g5M2mNxR", "dzNhSqCfGB2", "619sH2a2zfU", "Ag1CJovWXPl", "qFqpktToIQ", "K10LCXrDRQT", "euvx2twmMWf", "GvOhlRCotbM", "Y7NugTmYSjS", "RzRT-QMFuAq", "aV6NC_zW6k", "rlNHjPUp0a", ...
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_r...
[ " I would like to thank the authors for the detailed response! It is highly appreciated.\n\nHaving read the rebuttal and the other reviews, I would like to remain with the current score, and am potentially leaning toward weak rejection in the light of other reviewers' comments. I am also a bit confused by the discu...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 1, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "qFqpktToIQ", "bSrftwInm_q", "bSrftwInm_q", "Ammv_LWxUDS", "s6BiOYr7_-", "fzlpEg0V4YZ", "dzNhSqCfGB2", "Ag1CJovWXPl", "RzRT-QMFuAq", "iclr_2022_hdSn_X7Hfvz", "euvx2twmMWf", "K10LCXrDRQT", "ZDTLMa4sYXb", "619sH2a2zfU", "Y7NugTmYSjS", "Ammv_LWxUDS", "aV6NC_zW6k", "rlNHjPUp0a", "BSd...
iclr_2022_xbu1tzbjvd
Analyzing Populations of Neural Networks via Dynamical Model Embedding
A core challenge in the interpretation of deep neural networks is identifying commonalities between the underlying algorithms implemented by distinct networks trained for the same task. Motivated by this problem, we introduce \textsc{Dynamo}, an algorithm that constructs low-dimensional manifolds where each point corresponds to a neural network model, and two points are nearby if the corresponding neural networks enact similar high-level computational processes. \textsc{Dynamo} takes as input a collection of pre-trained neural networks and outputs a \emph{meta-model} that emulates the dynamics of the hidden states as well as the outputs of any model in the collection. The specific model to be emulated is determined by a \emph{model embedding vector} that the meta-model takes as input; these model embedding vectors constitute a manifold corresponding to the given population of models. We apply \textsc{Dynamo} to both RNNs and CNNs, and find that the resulting model embedding manifolds enable novel applications: clustering of neural networks on the basis of their high-level computational processes in a manner that is less sensitive to reparameterization; model averaging of several neural networks trained on the same task to arrive at a new, operable neural network with similar task performance; and semi-supervised learning via optimization on the model embedding manifold. Using a fixed-point analysis of meta-models trained on populations of RNNs, we gain new insights into how similarities of the topology of RNN dynamics correspond to similarities of their high-level computational processes.
Reject
This paper introduces an algorithm for building a meta-model from an ensemble of models by learning model embeddings. All reviewers appreciated the originality and potential usefulness of the paper. However, they also all think that the work is not completely ready for publication. Both the presentation and the quality of the results can be improved. There is a lot of good feedback in the discussion that can be used to prepare a substantially updated version for the next conference.
train
[ "GcXlwpDWCQ-", "F0txt83eyi", "t76Mpc6Y3OJ", "lVpPC_46pCe", "kx9cAMaq2Y3", "z7ru4-a-LaS", "HspG4XdVLp", "W70X5v44C3s", "45hd5uKzUl9", "4BJNfmZO6mP", "rlxB9quP5oF", "W2F8_iEP8st", "YxCEnnVWgTk", "1Y8jPvKTvz" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors propose an algorithm that takes several neural networks and outputs an embedding for such networks and a meta-model. This meta-model can take any embedding of a network, and emulate its hidden states and outputs. The authors suggest that these embeddings can measure models similarity, and show how to i...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2022_xbu1tzbjvd", "45hd5uKzUl9", "lVpPC_46pCe", "kx9cAMaq2Y3", "1Y8jPvKTvz", "YxCEnnVWgTk", "YxCEnnVWgTk", "YxCEnnVWgTk", "GcXlwpDWCQ-", "W2F8_iEP8st", "W2F8_iEP8st", "iclr_2022_xbu1tzbjvd", "iclr_2022_xbu1tzbjvd", "iclr_2022_xbu1tzbjvd" ]
iclr_2022_LOz0xDpw4Y
Learning to Efficiently Sample from Diffusion Probabilistic Models
Denoising Diffusion Probabilistic Models (DDPMs) have emerged as a powerful family of generative models that yield high-fidelity samples and competitive log-likelihoods across a range of domains, including image and speech synthesis. Key advantages of DDPMs include ease of training, in contrast to generative adversarial networks, and speed of generation, in contrast to autoregressive models. However, DDPMs typically require hundreds-to-thousands of steps to generate a high fidelity sample, making them prohibitively expensive for high dimensional problems. Fortunately, DDPMs allow trading generation speed for sample quality through adjusting the number of refinement steps during inference. Prior work has been successful in improving generation speed by handcrafting the time schedule through trial and error. In our work, we view the selection of the inference time schedule as an optimization problem, and introduce an exact dynamic programming algorithm that finds the log-likelihood-optimal discrete time schedule for any pre-trained DDPM. Our method exploits the fact that the evidence lower bound (ELBO) can be decomposed into separate KL divergence terms, and given any computation budget, we discover the time schedule that maximizes the training ELBO exactly. Our method is efficient, has no hyper-parameters of its own, and can be applied to any pre-trained DDPM with no retraining. We discover inference time schedules requiring as few as 32 refinement steps, while sacrificing less than 0.1 bits per dimension compared to the default 4,000 steps used on an ImageNet 64x64 model.
Reject
This paper proposes a dynamic programming strategy for faster approximate generation in denoising diffusion probabilistic models. All reviewers appreciated the paper, but they are not overly excited. Two reviewers focused on the log-likelihood not being the objective for image quality; this AC does not really buy that argument. The method and the story around it are well-rounded and finished, so it is hard to think of any major modifications that would change the overall story a lot. One could therefore argue for acceptance as it stands. On the other hand, this is difficult to argue for given the below-acceptance-level scores. The final recommendation is therefore reject, with a strong encouragement to submit to the next conference after updating the paper with preemptive arguments on why the ELBO, and not FID, is the right objective to consider.
train
[ "MOFeCp7SZ8", "xHmHbkMWdFe", "9p4-cUa5M6", "qj2Uq8_5jV7", "R8jkWvED5dB", "q47RKSnk4U", "k58Bw3Ldjww", "EMdnIZU5Kf", "CsCf2oSYPoF", "h5WYrI4cSR9", "lEv4KcsYvo", "A9Mx4ieSKrZ", "eCYeaACnE_3", "8c5LTmQLPs5" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the reviewers for their response and apologise my late response. I have read the response and the discussion with other reviewers as well. There were clarifications made in this process. \n\nI am maintaining my score, but I think the work has merit and I will not stand in the way of acceptance if the AC b...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "xHmHbkMWdFe", "R8jkWvED5dB", "qj2Uq8_5jV7", "lEv4KcsYvo", "eCYeaACnE_3", "k58Bw3Ldjww", "iclr_2022_LOz0xDpw4Y", "CsCf2oSYPoF", "h5WYrI4cSR9", "8c5LTmQLPs5", "A9Mx4ieSKrZ", "iclr_2022_LOz0xDpw4Y", "iclr_2022_LOz0xDpw4Y", "iclr_2022_LOz0xDpw4Y" ]
iclr_2022_ZOjKx9dEmLB
NAS-Bench-360: Benchmarking Diverse Tasks for Neural Architecture Search
Most existing neural architecture search (NAS) benchmarks and algorithms prioritize performance on well-studied tasks, e.g., image classification on CIFAR and ImageNet. This makes the applicability of NAS approaches in more diverse areas inadequately understood. In this paper, we present NAS-Bench-360, a benchmark suite for evaluating state-of-the-art NAS methods for convolutional neural networks (CNNs). To construct it, we curate a collection of ten tasks spanning a diverse array of application domains, dataset sizes, problem dimensionalities, and learning objectives. By carefully selecting tasks that can interoperate with modern CNN-based search methods but that are also far afield from their original development domain, we can use NAS-Bench-360 to investigate the following central question: do existing state-of-the-art NAS methods perform well on diverse tasks? Our experiments show that a modern NAS procedure designed for image classification can indeed find good architectures for tasks with other dimensionalities and learning objectives; however, the same method struggles against more task-specific methods and performs catastrophically poorly on classification in non-vision domains. The case for NAS robustness becomes even more dire in a resource-constrained setting, where a recent NAS method provides little-to-no benefit over much simpler baselines. These results demonstrate the need for a benchmark such as NAS-Bench-360 to help develop NAS approaches that work well on a variety of tasks, a crucial component of a truly robust and automated pipeline. We conclude with a demonstration of the kind of future research our suite of tasks will enable. All data and code are made publicly available.
Reject
This paper at first used the name NAS-Bench-360 for the benchmark, which confused several reviewers (who expected a tabular benchmark behind this name). The authors renamed the benchmark, which removed this issue, emphasizing that the contribution does not lie in proposing a new tabular NAS benchmark, but in a new performance evaluation of NAS on different data sets. One reviewer recommended acceptance, but 3 reviewers stuck with their rejection scores, the reasons being that (1) there are by now several papers applying NAS outside of computer vision, with a seemingly more comprehensive analysis; (2) more analysis would be useful; and (3) it is unclear how general the conclusions are that can be drawn from performance on the included datasets. (Low technical novelty was also mentioned, but I believe that this type of paper can be very impactful even if it has no technical novelty.) Overall, although I agree with the accepting reviewer that this type of work can be very useful to the community, the rejecting reviewers have too many criticisms to accept the paper in its current form. I encourage the authors to address them and to resubmit. One note (which did not affect the decision, but which I'd like to notify the authors about) is that a reviewer found that the author identity was revealed in the anonymous code provided by the authors (https://anonymous.4open.science/r/NAS-Bench-360-26D1).
train
[ "46lJ-_3G4VQ", "5bstXIHffnp", "m-M6LsmGhtU", "WcVq9-89xSk", "s2wECQaicZC", "W6N128Odvjd", "2lagGvQNpcT", "0j9Qz9739rQ", "Od8c11inYqk", "nbGA-Mx9AWl", "exUnCx0WQya", "0ScC3FXyd1e", "gVib5cXAVW9", "TUuuvSypdIi", "KKFDjV9vSaf", "wFGrH5W-pzx" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Authors,\n\nThanks for your responses. The new name looks good to me.\nHowever, my major concerns are that there are a lot (not lack) of NAS studies outside of CV/NLP. For example, there are a lot of NAS works studying speech tasks, RL-related tasks, time-series tasks, etc. At least, there work evaluated the...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, 3, 3, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, -1, 5, 5, 4, 4 ]
[ "Od8c11inYqk", "W6N128Odvjd", "2lagGvQNpcT", "0j9Qz9739rQ", "iclr_2022_ZOjKx9dEmLB", "wFGrH5W-pzx", "KKFDjV9vSaf", "exUnCx0WQya", "TUuuvSypdIi", "gVib5cXAVW9", "iclr_2022_ZOjKx9dEmLB", "iclr_2022_ZOjKx9dEmLB", "iclr_2022_ZOjKx9dEmLB", "iclr_2022_ZOjKx9dEmLB", "iclr_2022_ZOjKx9dEmLB", "...
iclr_2022_DIsWHvtU7lF
Composing Partial Differential Equations with Physics-Aware Neural Networks
We introduce a compositional physics-aware neural network (FINN) for learning spatiotemporal advection-diffusion processes. FINN implements a new way of combining the learning abilities of artificial neural networks with physical and structural knowledge from numerical simulation by modeling the constituents of partial differential equations (PDEs) in a compositional manner. Results on both one- and two-dimensional PDEs (Burgers', diffusion-sorption, diffusion-reaction) demonstrate FINN's superior process modeling accuracy and excellent out-of-distribution generalization ability beyond initial and boundary conditions. With only one tenth of the number of parameters on average, FINN outperforms pure machine learning and other state-of-the-art physics-aware models in all cases---often even by multiple orders of magnitude. Moreover, FINN outperforms a calibrated physical model when approximating sparse real-world data in a diffusion-sorption scenario, confirming its generalization abilities and showing explanatory potential by revealing the unknown retardation factor of the observed process.
Reject
Thank you for your submission to ICLR. There is some disagreement about this paper, and several of the reviews are of relatively low confidence. While I appreciate the effort that the authors have put into addressing the concerns of the reviewers, after going through the paper and the responses myself, I'm ultimately coming down on the side of the less positive reviews. My reasoning, honestly, is that I think the authors are vastly overestimating the knowledge that the ICLR audience will have about numerical methods for PDE solutions. Reading through the paper, I honestly have very little idea about how the actual numerical techniques are carried out, and it's unclear to me precisely where this method falls in between a traditional numerical solver and an actual neural network. Reading through the reviews, even the more positive ones, I don't think I'm alone in this perception (and the authors will hopefully believe me that these reviewers _are_ indeed emblematic of the subgroup of ICLR that is most experienced with differential equations). I really feel like either a substantial rewrite of the paper is needed, to make clear the full extent of the numerical methods being applied; or alternatively, the work may really be better suited for a numerical methods venue.
train
[ "q-NwgT0Dxo8", "OLMgilPL3xH", "hVnbfhOAQUn", "JNDE6DntS-w", "Zr3fzpi2UXy", "jQTLgkpjGRc", "ab3WGs3GXoh", "Jv3wiWa7x64", "t9EwYgmzhCX", "XUNFvQ8Kyu9", "H_RpHNJn55v", "y5qYQwRl090", "CmJ7WF572Q", "X3UUAOSIRSb", "tWoHtF6XqOG", "ZrtunbMNsg6" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response. To re-emphasize, the FVM solver is used to generate the datasets for the synthetic examples. Therefore, the FVM solver itself will always be 100% accurate because it is considered as the \"ground truth\" (i.e. reference data), while the comparison between FINN and the FVM solver in a ...
[ -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 6, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, 4, 3, 2 ]
[ "OLMgilPL3xH", "y5qYQwRl090", "iclr_2022_DIsWHvtU7lF", "Zr3fzpi2UXy", "XUNFvQ8Kyu9", "Jv3wiWa7x64", "iclr_2022_DIsWHvtU7lF", "H_RpHNJn55v", "ZrtunbMNsg6", "tWoHtF6XqOG", "ab3WGs3GXoh", "X3UUAOSIRSb", "iclr_2022_DIsWHvtU7lF", "iclr_2022_DIsWHvtU7lF", "iclr_2022_DIsWHvtU7lF", "iclr_2022_...
iclr_2022_AsQz_GFFDQp
Agnostic Personalized Federated Learning with Kernel Factorization
Considering futuristic scenarios of federated learning at a worldwide scale, it is highly probable that local participants will have their own personalized labels, which might not be compatible with each other even for the same class, and which can also come from a variety of multiple domains. Nevertheless, they should benefit from others while selectively taking in helpful knowledge. Toward such extreme scenarios of federated learning, however, most existing approaches are limited in that they often assume: (1) labeling schemes are all synchronized amongst clients; (2) the local data is from the same single dataset (domain). In this sense, we introduce a highly realistic problem of federated learning, namely Agnostic Personalized Federated Learning (APFL), where any clients, regardless of what they have learned with their personalized labels, can collaboratively learn while benefiting each other. We then study two essential challenges of agnostic personalized federated learning: (1) Label Heterogeneity, where local clients learn from the same single domain but labeling schemes are not synchronized with each other, and (2) Domain Heterogeneity, where the clients learn from different datasets which can be semantically similar or dissimilar to each other. To tackle these problems, we propose our novel method, namely Similarity Matching and Kernel Factorization (SimFed). Our method measures semantic similarity/dissimilarity between locally learned knowledge and matches/aggregates the relevant pieces that are beneficial to each other. Furthermore, we factorize our model parameters into two basis vectors and sparse masks to effectively capture permutation-robust representations and reduce information loss when aggregating the heterogeneous knowledge. We exhaustively validate our method on both single- and multi-domain datasets, showing that our method outperforms the current state-of-the-art federated learning methods.
Reject
The paper studies two aspects of personalized federated learning: (1) clients having their own labeling scheme, and (2) domain heterogeneity across clients. The authors propose a way to collaborate across clients via similarity matching. The key novelty is to measure the similarity of client pairs based on how much their representation layers agree (measured with cosine similarity). A second novelty is a low-rank factorization of model weights. Empirical evaluations show wins on MNIST, CIFAR10, and CIFAR100. Reviewers had various grave concerns. On the method side, they were concerned that there is not enough theoretical insight and analysis of the proposed approach, especially the kernel factorization and its effect. On the empirical side, they were concerned that comparisons were not made with the most recent baselines. A large number of PFL approaches were published in 2021, e.g. FedBN. Among these, it is worth noting pFedHN (ICML 2021), which actually discussed the case of heterogeneous (permuted) labels (their Sec 3.3). In a discussion, reviewers appreciated the responses by the authors, the additional experiments, and the ablation studies. Unfortunately, however, they found that the paper is not ready for publication at ICLR.
train
[ "c0wY33DLAtT", "fOVBikFlcc7", "k2nxy9epKHP", "G82LRj9Y4j2", "dHcpvqc7yWa", "7NQOc3AM2EO", "jv_lPKPkiPm", "-HiUpdwQp0j", "mguWi1r5SkQ", "v8PeRZlz_pE", "7I9vEezRHay", "3JJGcP2QE_G", "gPBE7I3q-81", "o41rGASJX-", "p1yOMpWkmWj", "C4ow7HSCLwa", "8l7AihhmEI", "HuSsl8CQP31", "Vuw53hZ9nGu...
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper studies two challenges of personalized federated learning: (1) Label Heterogeneity where label schemes are not synchronized in local clients and (2) Domain Heterogeneity where the datasets owned by the clients can be semantically dissimilar. The authors propose a method called Similarity Matching and Ke...
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 3, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 4 ]
[ "iclr_2022_AsQz_GFFDQp", "mguWi1r5SkQ", "-HiUpdwQp0j", "IW6nJ26bQnv", "c0wY33DLAtT", "Vuw53hZ9nGu", "iclr_2022_AsQz_GFFDQp", "Vuw53hZ9nGu", "Vuw53hZ9nGu", "c0wY33DLAtT", "c0wY33DLAtT", "IW6nJ26bQnv", "iclr_2022_AsQz_GFFDQp", "IW6nJ26bQnv", "8l7AihhmEI", "HuSsl8CQP31", "iclr_2022_AsQz...
iclr_2022_vpiOnyOBTzQ
Disentangled generative models for robust dynamical system prediction
Deep neural networks have become increasingly of interest in dynamical system prediction, but out-of-distribution generalization and long-term stability still remain challenging. In this work, we treat the domain parameters of dynamical systems as factors of variation of the data generating process. By leveraging ideas from supervised disentanglement and causal factorization, we aim to separate the domain parameters from the dynamics in the latent space of generative models. In our experiments we model dynamics both in phase space and in video sequences and conduct rigorous OOD evaluations. Results indicate that disentangled models adapt better to domain parameter spaces that were not present in the training data while, at the same time, providing better long-term predictions in video sequences.
Reject
This manuscript tackles an interesting and significant line of research on long-term prediction and out-of-distribution generalization in time series models. I strongly believe this problem is an important one to solve. However, in its current form, its novelty is marginal, and the experiments fail to decisively show advantages. It also lacks systematic improvements and error analysis. Further work could make it ready for publication at a future conference.
val
[ "4OpLUZdWTlO", "fu-9gwumYlm", "ZpcdKcgfMXL", "rcHyU-2sp1_", "LwZuvwR4GZD", "ge7USkeZbn", "UJHI1-j6fsi", "mMiiO4vUrUd", "sxfkmeWAuh8", "g6rhOCvjsnN", "yHNSvmQ67-X", "w0EscM-uUf9", "NQbpFCqK4dz", "1mtsSXLaIKc", "UvhTx8rJlbG", "RKjRxJemwJr", "4aBbfNDHK-N", "0cbxrB0TSM", "N_5As25hiO7...
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", ...
[ " I would like to thank the authors for updating the manuscript and their response to the comments.\n\nAfter reviewing again the results in the current manuscript, I'm still worried that the claim of consistent improvement is well supported by the experimental results. The averaged RMSE error can be quite sensitive...
[ -1, 1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 3 ]
[ -1, 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "RKjRxJemwJr", "iclr_2022_vpiOnyOBTzQ", "fu-9gwumYlm", "iclr_2022_vpiOnyOBTzQ", "1mtsSXLaIKc", "NQbpFCqK4dz", "iclr_2022_vpiOnyOBTzQ", "0cbxrB0TSM", "fu-9gwumYlm", "fu-9gwumYlm", "fu-9gwumYlm", "fu-9gwumYlm", "ict4xu8zV97", "rcHyU-2sp1_", "rcHyU-2sp1_", "N_5As25hiO7", "0cbxrB0TSM", ...
iclr_2022_PlFtf_pnkZu
Examining Scaling and Transfer of Language Model Architectures for Machine Translation
Natural language understanding and generation models follow one of the two dominant architectural paradigms: language models (LMs) that process concatenated sequences in a single stack of layers, and encoder-decoder models (EncDec) that utilize separate layer stacks for input and output processing. In machine translation, EncDec has long been the favoured approach, but few studies have investigated the performance of LMs. In this work, we thoroughly examine the role of several architectural design choices on the performance of LMs on bilingual, (massively) multilingual and zero-shot translation tasks, under systematic variations of data conditions and model sizes. Our results show that: (i) Different LMs have different scaling properties, where architectural differences often have a significant impact on model performance at small scales, but the performance gap narrows as the number of parameters increases, (ii) Several design choices, including causal masking and language-modeling objectives for the source sequence, have detrimental effects on translation quality, and (iii) When paired with full-visible masking for source sequences, LMs could perform on par with EncDec on supervised bilingual and multilingual translation tasks, but improve greatly on zero-shot directions by facilitating the reduction of off-target translations.
Reject
This paper has conducted extensive experiments to examine the scaling and transfer laws of LMs for machine translation and has arrived at several interesting findings that could inspire future work. The main concern from reviewers is that the novelty of this paper is insufficient. In addition, the experiments are not well designed and the clarity of the paper could be further improved. We hope the reviews can help the authors improve their paper.
test
[ "W-9XjjiXjH", "bZEGn4DNVyb", "JKItwYnuO1Q", "vOt3fc5FPI", "DzFJ_Z3kAhA", "KqbMscxW7iw", "79I9WztycEm", "VaTSCoiKfHE", "LGOGZ87bSae", "GaZ7j4MltVD", "x-gj0d9f6o5", "Y_8uTqRWR_U", "ZuM4hIEX36", "q-RsjLgnDtB" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your insightful comments again! Please feel free to add any follow-up questions!", " Thanks for your insightful comments again! Please feel free to add any follow-up questions!", " Thanks for your insightful comments again! Please feel free to add any follow-up questions!", " Thanks for your insi...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 4 ]
[ "DzFJ_Z3kAhA", "VaTSCoiKfHE", "KqbMscxW7iw", "GaZ7j4MltVD", "q-RsjLgnDtB", "79I9WztycEm", "Y_8uTqRWR_U", "LGOGZ87bSae", "ZuM4hIEX36", "x-gj0d9f6o5", "iclr_2022_PlFtf_pnkZu", "iclr_2022_PlFtf_pnkZu", "iclr_2022_PlFtf_pnkZu", "iclr_2022_PlFtf_pnkZu" ]
iclr_2022_zPLQSnfd14w
Two Regimes of Generalization for Non-Linear Metric Learning
A common approach to metric learning is to seek an embedding of the input data that behaves well with respect to the labels. While generalization bounds for linear embeddings are known, the non-linear case is not well understood. In this work we fill this gap by providing uniform generalization guarantees for the case where the metric is induced by a neural network type embedding of the data. Specifically, we discover and analyze two regimes of behavior of the networks, which are roughly related to the sparsity of the last layer. The bounds corresponding to the first regime are based on the spectral and $(2,1)$-norms of the weight matrices, while the second regime bounds use the $(2,\infty)$-norm at the last layer, and are significantly stronger when the last layer is dense. In addition, we empirically evaluate the behavior of the bounds for networks trained with SGD on the MNIST and 20newsgroups datasets. In particular, we demonstrate that both regimes occur naturally on realistic data.
Reject
The paper provides two new generalization bounds for non-linear metric learning with deep neural networks, by extending results of Bartlett et al. 2017 to the metric learning setting. The main contribution of the paper is extending the techniques of Bartlett et al. from a classification setting to the metric learning setting (which has very different objectives) and considering two regimes. In the first regime the techniques are fairly similar, but the second regime is more novel. However, the current version of the paper does not highlight the similarities and differences between its results and techniques and those of Bartlett et al. 2017; it also does not give sufficient intuition on how the metric learning setting is fundamentally different from the classification setting and how the paper leverages the difference to get improved bounds. All the reviewers had confusions to different degrees, and the paper would be much stronger if it could explain the intuition and make more explicit comparisons.
train
[ "fd1VimL9Psd", "P3dutkVwous", "2POq_bwMAl", "bAqMulVpkzN", "7Kv8tuRA4oE", "dU2_gjj-kgr", "M3wSO3u6RmV", "yJx6t0ydCSi", "2omZtOd5LAp", "bmXMP79ttn_", "RMMuZkxCsrB", "7MwP7ZcuVDI" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for getting back to us. However, it is not clear what is meant by \n>For me personally the issues are too large \n\nWhat issues were not cleared by the above response? Are there specific problems with the paper that we can address?\n\nAs mentioned earlier, we will take the feedback into acco...
[ -1, -1, -1, 6, -1, -1, -1, -1, -1, 5, 3, 3 ]
[ -1, -1, -1, 3, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "P3dutkVwous", "dU2_gjj-kgr", "7Kv8tuRA4oE", "iclr_2022_zPLQSnfd14w", "2omZtOd5LAp", "7MwP7ZcuVDI", "RMMuZkxCsrB", "bmXMP79ttn_", "bAqMulVpkzN", "iclr_2022_zPLQSnfd14w", "iclr_2022_zPLQSnfd14w", "iclr_2022_zPLQSnfd14w" ]
iclr_2022_AgDwZa1AiJt
When in Doubt, Summon the Titans: A Framework for Efficient Inference with Large Models
Scaling neural networks to "large" sizes, with billions of parameters, has been shown to yield impressive results on many challenging problems. However, the inference cost incurred by such large models often prevents their application in most real-world settings. In this paper, we propose a two-stage framework based on distillation that realizes the modelling benefits of the large models, while largely preserving the computational benefits of inference with more lightweight models. In a nutshell, we use the large teacher models to guide the lightweight student models to only make correct predictions on a subset of "easy" examples; for the "hard" examples, we fall back to the teacher. Such an approach allows us to efficiently employ large models in practical scenarios where easy examples are much more frequent than rare hard examples. Our proposed use of distillation to only handle easy instances allows for a more aggressive trade-off in the student size, thereby reducing the amortized cost of inference and achieving better accuracy than standard distillation. Empirically, we demonstrate the benefits of our approach on both image classification and natural language processing benchmarks.
Reject
This paper proposes a distillation framework where a lightweight student model is trained to handle easy (frequent) instances, while the large teacher model is still used to handle the more difficult (rare) inputs. The models are trained to perform well in this two-stage inference setting. Experiments are conducted on computer vision and NLP tasks. While the idea is potentially interesting, the experimental results are fairly weak and not very convincing.
test
[ "xJWhdJLiHX-", "dceYPB9L8HX", "9ffoumQlnKl", "rdK_7P05NYi", "jU0mjYJobIP", "tQirhICxzwT", "W3xm2X9JIjo", "mWJVwCSLTgK", "CQbmhcKSeVh", "VFmzfCRmC2e" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the response to address my concerns. ", "The paper proposes a two-stage distillation framework to improve inference efficiency and reduce the dependency on large teacher models. The goal of this framework is to only use the large/teacher model for difficult and rare examples and to use the student, s...
[ -1, 6, -1, -1, -1, -1, -1, 6, 5, 3 ]
[ -1, 3, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "W3xm2X9JIjo", "iclr_2022_AgDwZa1AiJt", "tQirhICxzwT", "CQbmhcKSeVh", "VFmzfCRmC2e", "dceYPB9L8HX", "mWJVwCSLTgK", "iclr_2022_AgDwZa1AiJt", "iclr_2022_AgDwZa1AiJt", "iclr_2022_AgDwZa1AiJt" ]
iclr_2022_agBJ7SYcUVb
DFSSATTEN: Dynamic Fine-grained Structured Sparse Attention Mechanism
Transformers are becoming mainstream solutions for various tasks like NLP and computer vision. Despite their success, the quadratic complexity of their attention mechanism hinders them from being applied to latency-sensitive tasks. Tremendous efforts have been made to alleviate this problem, and many of them successfully reduce the asymptotic complexity to linear. Nevertheless, few of them achieve practical speedup over the original full attention, especially under moderate sequence lengths. In this paper, we present DFSSATTEN, an attention mechanism that dynamically prunes the full attention weight matrix to the 50% fine-grained structured sparse pattern used by the sparse tensor cores on the NVIDIA A100 GPU. We provide both theoretical and empirical evidence demonstrating that DFSSATTEN is a good approximation of the full attention mechanism and can achieve speedups in wall-clock time under arbitrary sequence lengths. We evaluate our method on tasks from various domains under different sequence lengths from 256 to 4096. DFSSATTEN achieves 1.27∼1.89× speedups over the full-attention mechanism with no accuracy loss.
Reject
This paper presents a package for a "Dynamic Fine-grained Structured Sparse Attention Mechanism" (DFSSATTEN), which aims to improve the computational efficiency of attention mechanisms by leveraging the specific sparse pattern supported by the sparse tensor cores of the NVIDIA A100. DFSSATTEN shows theoretical and empirical advantages in terms of performance and speedup compared to various baselines, with a 1.27~1.89x speedup over the vanilla attention network across different sequence lengths. Reviewers praised the simplicity of the method and the clean code implementation. Speeding up attention mechanisms is an important problem, and leveraging sparse tensor cores for attention speedup is a sensible idea. The practical speedups are significant (1.27~1.89x over vanilla attention across different sequence lengths). However, they also pointed out some weaknesses: the fact that the proposed method is very specific to the particular sparse pattern offered by the NVIDIA A100, and not easily generalizable to other future hardware; the fact that the method focuses on inference acceleration and not training from scratch (not completely clear in the paper), which limits its scope; and the fact that the method still has O(N^2) complexity (it still requires the computation of QK^T, which has quadratic memory and computation cost), and therefore it does not really address the quadratic bottleneck of transformers, unlike other existing work on efficient transformers for long sequences. I tend to agree with the reviewers and, even though the package can potentially be useful to other researchers, the scope seems limited and the paper seems a bit thin to deserve publication at ICLR.
Other comments and suggestions: - When talking about linear transformers, you should cite [1], which predates Performers. - It is not clear to me why 1:2 and 2:4 are called "fine-grained *structured* sparsity". - Citations for the systems in Tab. 4 are missing. - When comparing to other methods, it would be useful to include their Pareto curves, since those methods have tradeoffs in terms of sparsity / approximation error (or downstream accuracy). [1] Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention. Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, François Fleuret (https://arxiv.org/abs/2006.16236)
test
[ "T-GlkkfEyz", "X0jV8wUktY7", "L-iyyH92fD", "EpNNG4jLO0H", "jh_w2l_TMyr", "qDC-MGLsf8", "Kl9G9aLZ5bS", "ECFY4ABC4Rz", "Rb4s7fX4r4", "ByCI_ZLT-ZL", "X11OAe5CoNj", "2zYmVwLk72h", "N7JBkvHnJw3", "3caN0w0CRi", "zrlbgRbxeFj", "GitUfdm2Q71", "TUbUJwkU91Y", "RkCC0isT0NY" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We greatly appreciate your acknowledgment!", " Thank you for your detailed and thoughtful response; the responses addressed all my questions and provided additional context in the revised paper. I raised my score with acceptance.", "This paper focus on the dynamic N:M fine-grained structured sparse attention...
[ -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 8 ]
[ -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "X0jV8wUktY7", "jh_w2l_TMyr", "iclr_2022_agBJ7SYcUVb", "L-iyyH92fD", "EpNNG4jLO0H", "Kl9G9aLZ5bS", "ECFY4ABC4Rz", "RkCC0isT0NY", "ByCI_ZLT-ZL", "X11OAe5CoNj", "2zYmVwLk72h", "TUbUJwkU91Y", "3caN0w0CRi", "zrlbgRbxeFj", "GitUfdm2Q71", "iclr_2022_agBJ7SYcUVb", "iclr_2022_agBJ7SYcUVb", ...