paper_id: string (length 19–21)
paper_title: string (length 8–170)
paper_abstract: string (length 8–5.01k)
paper_acceptance: string (18 classes)
meta_review: string (length 29–10k)
label: string (3 classes)
review_ids: list
review_writers: list
review_contents: list
review_ratings: list
review_confidences: list
review_reply_tos: list
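Taken together, the schema above describes a flat record with six scalar string fields and six parallel per-review lists. The sketch below shows one such record in plain Python with a small alignment check; the field names come from the schema listing, while the `validate_record` helper and the example values are illustrative assumptions, not part of the dataset release:

```python
# Sketch of one record under the schema above. Field names follow the
# schema listing; the helper and example values are illustrative only.
def validate_record(rec: dict) -> bool:
    """Check that the six per-review lists are parallel (same length)."""
    list_fields = ["review_ids", "review_writers", "review_contents",
                   "review_ratings", "review_confidences", "review_reply_tos"]
    lengths = {len(rec[f]) for f in list_fields}
    return len(lengths) == 1

record = {
    "paper_id": "iclr_2022_eypsJ0rvAqo",
    "paper_title": "1-bit LAMB: ...",
    "paper_abstract": "...",
    "paper_acceptance": "Reject",
    "meta_review": "...",
    "label": "train",
    "review_ids": ["FyswakELiFA", "BQ078vCFoT5"],
    "review_writers": ["author", "official_reviewer"],
    "review_contents": ["Thank you ...", "The authors propose ..."],
    "review_ratings": [-1, 6],
    "review_confidences": [-1, 3],
    "review_reply_tos": ["01m0v29qO6", "iclr_2022_eypsJ0rvAqo"],
}
```

A record that breaks the parallel-list invariant (say, a missing rating) would fail the same check, which makes it a cheap sanity filter before any downstream aggregation.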
iclr_2022_eypsJ0rvAqo
1-bit LAMB: Communication Efficient Large-Scale Large-Batch Training with LAMB's Convergence Speed
To train large models (like BERT and GPT-3) on hundreds of GPUs, communication has become a major bottleneck, especially on commodity systems with limited-bandwidth TCP networks. On one hand, large-batch optimization such as the LAMB algorithm was proposed to reduce the frequency of communication. On the other hand, communication compression algorithms such as 1-bit Adam help to reduce the volume of each communication. However, we find that simply using one of these techniques is not sufficient to solve the communication challenge, especially under low network bandwidth. Motivated by this, we aim to combine the power of large-batch optimization and communication compression, but we find that existing compression strategies cannot be directly applied to LAMB due to its unique adaptive layerwise learning rates. To this end, we design a new communication-efficient algorithm, 1-bit LAMB, which introduces a novel way to support adaptive layerwise learning rates under compression. In addition, we introduce a new system implementation for compressed communication using the NCCL backend of PyTorch distributed, which improves both usability and performance. For the BERT-Large pre-training task with batch sizes from 8K to 64K, our evaluations on up to 256 GPUs demonstrate that 1-bit LAMB with the NCCL-based backend is able to achieve up to 4.6x communication volume reduction, up to 2.8x end-to-end time-wise speedup, and the same sample-wise convergence speed (and the same fine-tuning task accuracy) compared to uncompressed LAMB.
Reject
The authors propose a communication-efficient distributed LAMB optimizer using 1-bit compression. This work is similar in spirit to other prior work, e.g., 1-bit Adam. Although the algorithm works reasonably well, it is a bit unclear how much compression is achieved. Overall, the algorithmic novelty is limited, given the prior work, and the benefits of the algorithm don't shine through, as the experiments are quite limited in their datasets and models. The theoretical results are also of unclear usefulness due to the assumptions made.
train
[ "FyswakELiFA", "BQ078vCFoT5", "WYmGqZpjYe", "zsaoiuQcysD", "01m0v29qO6", "DdN1jOXaqml" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your comments and below are our replies to some of them. In addition, in the rebuttal paper revision we added some new experiments/analysis related to the comments from reviewers: 1) Appendix A.7 where we provide additional BERT/SQuAD convergence analysis with different number of workers and differe...
[ -1, -1, -1, 6, 5, 5 ]
[ -1, -1, -1, 3, 4, 3 ]
[ "01m0v29qO6", "DdN1jOXaqml", "zsaoiuQcysD", "iclr_2022_eypsJ0rvAqo", "iclr_2022_eypsJ0rvAqo", "iclr_2022_eypsJ0rvAqo" ]
iclr_2022_WxBFVNbDUT6
Benchmarking Sample Selection Strategies for Batch Reinforcement Learning
Training sample selection techniques, such as prioritized experience replay (PER), have been recognized as being of significant importance for online reinforcement learning algorithms. Efficient sample selection can help further improve the learning efficiency and the final learning performance. However, the impact of sample selection for batch reinforcement learning algorithms, where we aim to learn a near-optimal policy exclusively from the offline logged dataset, has not been well studied. In this work, we investigate the application of non-uniform sampling techniques in batch reinforcement learning. In particular, we compare six variants of PER based on various heuristic priority metrics that focus on different aspects of the offline learning setting. These metrics include temporal-difference error, n-step return, self-imitation learning objective, pseudo-count, uncertainty, and likelihood. Through extensive experiments on the standard batch RL datasets, we find that non-uniform sampling is also effective in batch RL settings. Furthermore, there is no single metric that works in all situations. Our findings also show that it is insufficient to avoid the bootstrapping error in batch reinforcement learning by only changing the sampling scheme.
Reject
The paper empirically benchmarks multiple sample selection strategies for offline RL based on the prioritized experience replay framework, including TD errors, N-step return, Generalized SIL, Pseudo-count, Uncertainty, and Likelihood. These are all benchmarked for the base algorithm TD3BC. The experiments study the performance and bootstrapping errors. Among other things, it is shown that non-uniform sampling strategies are also interesting in a batch RL setting. The authors show that non-uniform sampling can be helpful in offline RL compared to uniform sampling, but it fails to avoid bootstrapping error. They also find that there is no single outperforming metric for prioritized sampling in offline RL settings. The reviewers are in agreement that the question studied is a sensible and interesting one: are PER strategies which are effective in online RL also useful for batch RL? The overall study conducted by the paper is clear and well presented. While the study/benchmark and the presented results are clear, the reviewers point out the following shortcomings: 1. The study is not comprehensive enough for this work to become a definitive exploration of this space of ideas; only one algorithm has been tested with these ideas. 2. The results of the study are unfortunately inconclusive; while there are benefits, these are achieved via different strategies, and, as mentioned by the paper, no clear conclusions can be drawn. Since the paper is targeted purely as a benchmark, the originality aspect of the paper is naturally low. For benchmark papers, the impact then squarely falls on the comprehensiveness of the study and the emergence of some clear conclusions to further research in that area. The reviewers unanimously believe the paper falls short in both respects, hence the decision. Hopefully the authors can consider the feedback provided and incorporate it to improve the paper.
train
[ "rnciN3kDKHR", "oJam7W__XJu", "gqGrn3zY_ti", "fhAfbRokgxV", "LVt6W3gBrT9", "0vAKu9svzBj", "xuP3X5Prr2", "r8PnYrfCIX" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The paper empirically investigates several sample selection strategies in offline RL based on TD3BC and the PER framework, including TD errors, N-step return, Generalized SIL, Pseudo-count, Uncertainty, and Likelihood. The paper finds that some sampling strategies improve the performance on D4RL dataset but they f...
[ 5, 3, 3, 3, -1, -1, -1, -1 ]
[ 4, 3, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2022_WxBFVNbDUT6", "iclr_2022_WxBFVNbDUT6", "iclr_2022_WxBFVNbDUT6", "iclr_2022_WxBFVNbDUT6", "gqGrn3zY_ti", "rnciN3kDKHR", "fhAfbRokgxV", "oJam7W__XJu" ]
iclr_2022_93SVBUB1r5C
Learning with convolution and pooling operations in kernel methods
Recent empirical work has shown that hierarchical convolutional kernels inspired by convolutional neural networks (CNNs) significantly improve the performance of kernel methods in image classification tasks. A widely accepted explanation for the success of these architectures is that they encode hypothesis classes that are suitable for natural images. However, understanding the precise interplay between approximation and generalization in convolutional architectures remains a challenge. In this paper, we consider the stylized setting of covariates (image pixels) uniformly distributed on the hypercube, and fully characterize the RKHS of kernels composed of single layers of convolution, pooling, and downsampling operations. We then study the gain in sample efficiency of kernel methods using these kernels over standard inner-product kernels. In particular, we show that 1) the convolution layer breaks the curse of dimensionality by restricting the RKHS to `local' functions; 2) local pooling biases learning towards low-frequency functions, which are stable under small translations; 3) downsampling may modify the high-frequency eigenspaces but leaves the low-frequency part approximately unchanged. Notably, our results quantify how choosing an architecture adapted to the target function leads to a large improvement in the sample complexity.
Reject
The paper analyzes convolutional kernels and their sample complexity as compared to different architectures, and in particular the effect of pooling. The analysis proceeds by characterizing the RKHS in this setting (for a distribution on the cube) and using results by Mei and others to obtain separation between different architectures. The reviewers appreciated the fact that this is an example worked out in detail, resulting in a clear message about sample complexity gaps between architectures. However, there were also concerns that some of the conclusions do appear in previous works, so that there is no surprising insight here. In future versions, the authors are encouraged to more clearly explain the novel aspects of the paper (as well as where the main technical novelties and tools are).
train
[ "Bo1OxmlPqp", "-yEmhHJLJmK", "yCMGyTpj7q3", "EX2xof2PQ9E", "F6wLfngMy4i", "Nva1yVcmha", "f5fdBGQKCuG", "xgtUb2Sv3SA", "poCIVAax4Rw", "ZeBtRf75kjw", "OrIcs8AORZ0", "5Yu0DkPwzet", "u7u7DD2S139" ]
[ "official_reviewer", "author", "author", "official_reviewer", "public", "official_reviewer", "author", "public", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This work proposes to study CKNs (defined via random features) which are defined via convolutions on “patches” (e.g., operators that writes (x,y)->\\sum_k h(<x_k,y_k>) where $x_k,y_k$ are some patches of $x,y$), and to exploit the fact that those types of kernels are actually related to convolutions on this space,...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2022_93SVBUB1r5C", "EX2xof2PQ9E", "EX2xof2PQ9E", "poCIVAax4Rw", "f5fdBGQKCuG", "ZeBtRf75kjw", "xgtUb2Sv3SA", "iclr_2022_93SVBUB1r5C", "Bo1OxmlPqp", "u7u7DD2S139", "5Yu0DkPwzet", "iclr_2022_93SVBUB1r5C", "iclr_2022_93SVBUB1r5C" ]
iclr_2022_lgOylcEZQgr
Online Unsupervised Learning of Visual Representations and Categories
Real world learning scenarios involve a nonstationary distribution of classes with sequential dependencies among the samples, in contrast to the standard machine learning formulation of drawing samples independently from a fixed, typically uniform distribution. Furthermore, real world interactions demand learning on-the-fly from few or no class labels. In this work, we propose an unsupervised model that simultaneously performs online visual representation learning and few-shot learning of new categories without relying on any class labels. Our model is a prototype-based memory network with a control component that determines when to form a new class prototype. We formulate it as an online Gaussian mixture model, where components are created online with only a single new example, and assignments do not have to be balanced, which permits an approximation to natural imbalanced distributions from uncurated raw data. Learning includes a contrastive loss that encourages different views of the same image to be assigned to the same prototype. The result is a mechanism that forms categorical representations of objects in nonstationary environments. Experiments show that our method can learn from an online stream of visual input data and is significantly better at category recognition compared to state-of-the-art self-supervised learning methods.
Reject
This paper tackles a small-batch online unsupervised learning problem, specifically proposing an online unsupervised prototypical network architecture that leverages an online mixture-based clustering algorithm and a corresponding EM algorithm. Special features are added to deal specifically with the non-stationary distributions that are induced. Results are shown on more realistic streams of data, namely from the RoamingRooms dataset, and compared to existing self-supervised learning algorithms, including ones based on clustering principles, e.g. SwAV. Overall, the reviewers were positive about the problem setting and method, but had some concerns about hyper-parameters (hYzM, cvrN, LjvY) and the motivation for the specific setting where the method excels compared to other methods not designed for such a setting (hYzM, cvrN), i.e. the small-batch setting, where it is not clear where the line should be drawn in terms of batch size and memory requirements with respect to performance differences between the proposed approach and existing self-supervised methods. Importantly, all reviewers had significant confusion about all aspects of the work, ranging from low-level details of the proposed method to the empirical setting and evaluation (including for competing methods). After a long discussion, the authors provided a large amount of detail about their work, which the reviewers and AC highly appreciate. However, in the end, incorporating all of the feedback requires a major revision of the entire paper. Even the reviewers that were more on the positive side (cvrN and LjvY) mentioned it would be extremely beneficial for this paper to be significantly revised and go through another review. Since so many aspects were confusing, it is not clear to the AC that the underlying method, technical contributions, and other aspects of the work had a sufficient chance to be evaluated fairly, given that much of the review period was spent clearing up such confusion.
In summary, while the paper is definitely promising and tackles an important area for the community, it requires a major revision and should go through the review process when it is more clearly presented. As a result, I recommend rejection at this point, since it is not ready for publication in its current form.
val
[ "6Cd3CyJiTBt", "O_uEEzNIGd3", "L2xYODoDGvs", "LTz5GIDqcip", "TfhCs4YbcR", "PsDaCh4r9hi", "DJA_-ZqAZG", "2f5PdzhbMvM", "lkvAkOMcpmd", "-FH1599pIE", "9v1Z1HKHwif", "uPB4XG1zg5u", "slV3_BhZmM", "Z3ENhbWOQRD", "X6ezW0Ug61N", "vjOlbZDaOu", "pPPcQ9RI752", "E_tVfyHpzaC", "350tDj4IWI", ...
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors propose a method for unsupervised learning of instance-based clustering which is more robust to data imbalance and non-iid distributions than existing approaches, such as SwAW. In particular, they propose an Expectation Maximization algorithm that operates in temporal episodes (supposed to correspond t...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, 6 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "iclr_2022_lgOylcEZQgr", "iclr_2022_lgOylcEZQgr", "9v1Z1HKHwif", "TfhCs4YbcR", "PsDaCh4r9hi", "6Cd3CyJiTBt", "NBiEi25P7nH", "lkvAkOMcpmd", "-FH1599pIE", "9v1Z1HKHwif", "qhflwo46Euf", "slV3_BhZmM", "Z3ENhbWOQRD", "iclr_2022_lgOylcEZQgr", "vjOlbZDaOu", "pPPcQ9RI752", "E_tVfyHpzaC", "...
iclr_2022_d7-GwtDWNNJ
Learning Graph Structure from Convolutional Mixtures
Machine learning frameworks such as graph neural networks typically rely on a given, fixed graph to exploit relational inductive biases and thus effectively learn from network data. However, assuming the knowledge of said graphs may be untenable in practice, which motivates the problem of inferring graph structure from data. In this paper, we postulate a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem. In lieu of eigendecomposition-based spectral methods or iterative optimization solutions, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN). GDNs can learn a distribution of graphs in a supervised fashion, and perform link-prediction or edge-weight regression tasks by adapting the loss function. Since layers directly operate on, combine, and refine graph objects (instead of node features), GDNs are inherently inductive and can generalize to larger-sized graphs after training. Algorithm unrolling offers an explicit handle on computational complexity; we trade off training time in return for quick approximations to the inverse problem solution, obtained via a forward pass through the learnt model. We corroborate GDN's superior graph recovery performance using synthetic data in supervised settings, as well as its ability to generalize to graphs orders of magnitude larger than those seen in training. Using the Human Connectome Project-Young Adult neuroimaging dataset, we demonstrate the robustness and representation power of our model by inferring structural brain networks from functional connectivity estimated using fMRI signals.
Reject
The paper addresses the problem of recovering a graph structure from empirical observations. The proposed approach consists of formulating the problem as an inverse problem and then unrolling a proximal gradient descent algorithm to generate a solution. Whereas the paper definitely has some merit, it received borderline reviews, with three borderline rejects and one borderline accept. The reviewers appreciated the clarifications and discussions provided by the rebuttal, and one reviewer went up from reject to borderline reject. More precisely, this reviewer agrees that the paper has become stronger, but he/she believes that the paper requires additional experimental work (see the section "After rebuttal" in his/her review). Another reviewer who was active during the rebuttal/discussion stage was not convinced by the rebuttal, after raising issues about identifiability. The area chair agrees that solving the identifiability issue is not a key requirement for this paper; however, this raises legitimate questions about the guarantees/properties of the returned solutions. Overall, this is a borderline paper, which introduces an interesting idea but requires additional experimental work and discussion of the properties of the solutions. Unfortunately, the area chair agrees with the majority of the reviewers and follows their recommendation. The two previous points should be addressed if the paper is resubmitted elsewhere.
train
[ "s50a6yw_Qko", "3vb3pb62AAd", "yM2N2zkxY3", "dNa7TODHohF", "8JSaFPclpA", "oIOzUSalvc-", "L4mGniDKURa", "ydmAnfBj5nC", "6RjQTVpSjVk", "_xrqS67TNEG", "CJRIpSSFFb", "zZp8mZvvW1H", "njBpCHVTE3a", "_bgkC68A62y", "BevsTbd2ww0", "pcIAjesV6L", "htxUOPum08J", "4Aa2u56u1x", "WfNNNJ0-n_X" ]
[ "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks again for your evaluation of our paper. We hope you have had a chance to examine our response and the revisions made to the paper, which also include the results on another real dataset in Section 5.2 as promised in our earlier response. This is a follow up to inquire if your concerns have been appropriate...
[ -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 2 ]
[ "_bgkC68A62y", "yM2N2zkxY3", "_xrqS67TNEG", "8JSaFPclpA", "pcIAjesV6L", "L4mGniDKURa", "6RjQTVpSjVk", "iclr_2022_d7-GwtDWNNJ", "CJRIpSSFFb", "4Aa2u56u1x", "njBpCHVTE3a", "ydmAnfBj5nC", "ydmAnfBj5nC", "htxUOPum08J", "htxUOPum08J", "WfNNNJ0-n_X", "iclr_2022_d7-GwtDWNNJ", "iclr_2022_d...
iclr_2022_yOBqNg-CqB0
Re-evaluating Word Mover's Distance
The word mover's distance (WMD) is a fundamental technique for measuring the similarity of two documents. At its crux, WMD takes advantage of the underlying geometry of the word space by employing an optimal transport formulation. The original study on WMD reported that WMD outperforms classical baselines such as bag-of-words (BOW) and TF-IDF by significant margins in various datasets. In this paper, we point out that the evaluation in the original study could be misleading. We re-evaluate the performances of WMD and the classical baselines and find that the classical baselines are competitive with WMD if we employ an appropriate preprocessing, i.e., L1 normalization. In addition, we introduce an analogy between WMD and L1-normalized BOW and find that not only the performance but also the distance values of WMD resemble those of BOW in high-dimensional spaces.
Reject
The authors conduct extensive experiments to show that there were errors in the original claims of the WMD paper: as opposed to what was claimed in the original paper, WMD does not outperform simpler baselines like BOW and TF-IDF. The authors claim that this is significant because WMD is widely used in the literature, and hence pointing out errors in the original paper may help the community. Out of the 4 reviewers, 1 reviewer wrote a very short review and, despite reminders, did not elaborate on the reasons for a "Strong Accept". The other reviewer with a "Strong Accept" rating also did not champion the paper in the final discussions. The main objections of the two reviewers who were not in favor of accepting the paper were that (i) it focuses on criticising a single paper and (ii) some of the criticism is not fair. In response, the authors claim that, given the huge amount of derivative work which uses or builds upon the original WMD metric, it is crucial to point out these errors. Having read the reviews and the responses, it is not clear to me whether such a paper, which focuses only on criticism of a single paper (no matter how popular it is), has enough merit to be accepted. Alternatively, if such criticism were part of a broader work (perhaps a work on new document similarity metrics), then it would have more merit. Further, it should be noted that of the 4 misleading conclusions of the original paper identified by the authors, at least 2 are debatable (one being an error in the dataset and the other being a normalisation technique which was not mentioned in the paper but used in the code). The authors have also rephrased one of the original 4 misleading points, and from the discussion it seems that they agree it is not misleading.
It would have been easier for me to accept the paper if it had a new metric and ablation studies which showed that (i) Hey, normalisation is important and should be done for all baseline algorithms that are being compared (ii) Hey, there are errors in the dataset which affect the results
train
[ "DbQ8etI7A5W", "SSfB28_y9du", "xLjZaFv8yK", "o9TtH4rOojQ", "LIK3Ke6LBub", "5dIDGy0Wig", "MPi3HoK4VJe", "gVWsHKcVpUh", "Vu_t_vzFkfy", "MC5E-3Ie7em", "ofB19yjOuRp", "hAhH3iJOZ-f", "UJIjlMmCzKX" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I appreciate the response and revision. I indeed hadn't read the code of Kusner et al. It is good to that you pointed out that this bit was missing, but I disagree with the aggressive framing against the original paper. I would like to point out that the figure 2 in the paper under review cites figure 3 in Kusner...
[ -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, 8, 3, 3 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "SSfB28_y9du", "xLjZaFv8yK", "Vu_t_vzFkfy", "iclr_2022_yOBqNg-CqB0", "gVWsHKcVpUh", "iclr_2022_yOBqNg-CqB0", "ofB19yjOuRp", "o9TtH4rOojQ", "hAhH3iJOZ-f", "UJIjlMmCzKX", "iclr_2022_yOBqNg-CqB0", "iclr_2022_yOBqNg-CqB0", "iclr_2022_yOBqNg-CqB0" ]
iclr_2022_QFNIpIrkANz
Learning Invariant Reward Functions through Trajectory Interventions
Inverse reinforcement learning methods aim to retrieve the reward function of a Markov decision process based on a dataset of expert demonstrations. The commonplace scarcity of such demonstrations potentially leads to the absorption of spurious correlations in the data by the learning model, which as a result, exhibits behavioural overfitting to the expert dataset when trained on the obtained reward function. We study the generalization properties of the maximum entropy method for solving the inverse reinforcement learning problem for both exact and approximate formulations and demonstrate that by applying an instantiation of the invariant risk minimization principle, we can recover reward functions which induce better performing policies across domains in the transfer setting.
Reject
This work addresses the issue of learning reward functions that overfit less/are invariant to irrelevant features of expert demonstrations. The proposed algorithm builds on top of adversarial imitation learning (AIRL) and adds a regularization principle based on invariant risk minimization. The proposed algorithm is evaluated both in grid worlds and in continuous control tasks, covering both zero-shot policy transfer and transfer of the reward function to learn out-of-distribution tasks from scratch.

**Strengths** This work is well motivated and addresses an important problem. The proposed method is well motivated and provides theoretical foundations.

**Weaknesses** The manuscript had many missing details and no appendix. Only one baseline is provided, while many relevant IRL algorithms exist. The evaluation is very limited in actually evaluating the invariance properties of the learned reward function. There is poor alignment between how the proposed algorithm is motivated (learning invariant reward functions) and what most of the experimental evaluation focuses on (zero-shot transfer of the policy); more details on this below.

**Rebuttal** The authors have updated the manuscript to include an appendix, addressed most structural issues, and provided many of the missing details. No additional baselines were provided, and the experimental evaluation remains limited and poorly aligned with the initial motivation.

**Summary** This manuscript addresses an important problem and proposes a promising algorithm. My major remaining concern is the experimental evaluation, which seems not well aligned with the main contribution of this paper. As the authors state in their rebuttal, the main supporting evidence for their claim is provided in Section 5.3, with only one set of experiments on using the reward function to learn policies on out-of-distribution tasks and very little analysis (less than a quarter of a page).
Meanwhile, the majority of the evaluation (Section 5.2) is focused on zero-shot transfer of the learned policy (which is trained during the IRL training phase). These zero-shot transfer experiments are not motivated in the context of "learning invariant reward functions", so it is unclear what these results show. If these results are still relevant in showing that the proposed algorithm learns "invariant rewards", then this needs to be explained. Furthermore, more baselines would have been required (e.g. algorithms that are focused on learning a good policy by learning a "pseudo"-reward, such as GAIL). Because of this, my recommendation is that this manuscript is not quite ready yet for publication.
train
[ "RdcHIGC_mcC", "1Yk348pSTUU", "49Qz1jmUlPi", "YNZqWfBSvoh", "nTUzuTF4GQo", "Sa0riUWia3b", "veNtERDeR5v", "AvywU1sDj2H", "osE1e77UH0e", "jaRvdt4UE7M", "H0Ika8Van2" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the valuable feedback and the revision of the score.\n\nWe appreciate the suggestion to vary the degree of OOD-ness to highlight the benefits \nof our method. In our experimental setup, we aimed to demonstrate that similarly to the IRM\nresults ([2] Sec. 4.2) that a pair of training settings is suff...
[ -1, 5, -1, -1, -1, -1, -1, -1, 3, 3, 8 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "49Qz1jmUlPi", "iclr_2022_QFNIpIrkANz", "veNtERDeR5v", "nTUzuTF4GQo", "H0Ika8Van2", "jaRvdt4UE7M", "1Yk348pSTUU", "osE1e77UH0e", "iclr_2022_QFNIpIrkANz", "iclr_2022_QFNIpIrkANz", "iclr_2022_QFNIpIrkANz" ]
iclr_2022_eCPCn25gat
Pretraining for Language Conditioned Imitation with Transformers
We study reinforcement learning (RL) agents which can utilize language inputs. To investigate this, we propose a new multimodal benchmark -- Text-Conditioned Frostbite -- in which an agent must complete tasks specified by text instructions in the Atari Frostbite environment. We curate and release a dataset of 5M text-labelled transitions for training and to encourage further research in this direction. On this benchmark, we evaluate Text Decision Transformer (TDT), a transformer directly operating on text, state, and action tokens, and find it improves upon other baseline architectures. Furthermore, we evaluate the effect of pretraining, finding unsupervised pretraining can yield improved results in low-data settings.
Reject
The arguments the paper makes require a stronger foundation and justification, and the reviewers and AC did not find the author response sufficient. For example, in response to ZbHJ, the authors argue that their benchmark doesn't use automatically generated trajectories and therefore the language is not synthetic in some sense. It is not clear how this relates to synthetic language, but generated trajectories do create artificial regularities in the task, which is an issue the authors must address accurately. This argument also seems to focus on ALFRED and R2R, ignoring many other benchmarks, like the data used in DRIF (mentioned later), RxR, Touchdown, etc. There is also misuse of a technical term (e.g., Decision Transformer). Generally, the reviewers consider the work to have potential, but it requires significant refinement, which the author response did not provide.
train
[ "j3EGXQl4toi", "oAVlScziHP", "DC_rFfRzjx", "iifRg1FnY6U", "75qLaPbDvHO", "wmwmBG0Twcs", "EDtJL7rWzc", "FqoYcpEFNl", "RP0TTjrSUCN", "9h8GKvbQPOr", "ys6CSRtjya7" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for the response. I continue to feel that the work has limited technical novelty and needs to better justify the need for the proposed benchmark. I would like to keep my score. ", " Thank you for taking the time to respond to my questions. I have read the other reviewer's review and the auth...
[ -1, -1, -1, -1, -1, -1, -1, 6, 5, 3, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "wmwmBG0Twcs", "EDtJL7rWzc", "iclr_2022_eCPCn25gat", "FqoYcpEFNl", "ys6CSRtjya7", "9h8GKvbQPOr", "RP0TTjrSUCN", "iclr_2022_eCPCn25gat", "iclr_2022_eCPCn25gat", "iclr_2022_eCPCn25gat", "iclr_2022_eCPCn25gat" ]
iclr_2022_Rx_nbGdtRQD
Coherent and Consistent Relational Transfer Learning with Autoencoders
Human defined concepts are inherently transferable, but it is not clear under what conditions they can be modelled effectively by non-symbolic artificial learners. This paper argues that for a transferable concept to be learned, the system of relations that define it must be coherent across domains. This is to say that the learned concept-specific relations ought to be consistent with respect to a theory that constrains their semantics and that such consistency must extend beyond the representations encountered in the source domain. To demonstrate this, we first present formal definitions for consistency and coherence, and a proposed Dynamic Comparator relation-decoder model designed around these principles. We then perform a proposed Partial Relation Transfer learning task on a novel data set, using a neural-symbolic autoencoder architecture that combines sub-symbolic representations with modular relation-decoders. By comparing against several existing relation-decoder models, our experiments show that relation-decoders which maintain consistency over unobserved regions of representational space retain coherence across domains, whilst achieving better transfer learning performance.
Reject
After reading the reviews and the rebuttal I unfortunately feel the paper is not ready to be accepted. The reasoning for this decision is as follows:
* The empirical evaluation is somewhat weak in its current form; even adding experiments going from BlockStacks to MNIST would have improved the results, or potentially other synthetically generated data, or varying which relation is used during the transfer phase. Something to give a bit more weight to the empirical section and help it connect better with the theoretical one.
* But maybe more importantly (and to some extent this is true for the formalism introduced as well), I think there needs to be a bit more context. After reading the reviews, I went and read the paper, and, for example, in the results provided it is not clearly explained what the relationship is between the proposed method and some of the baselines. I noticed that the related work section ended up in the appendix, which is fine to the extent the main text can connect to the literature a bit. While I agree that the introduction of the method is good and clear, and this is a hard and important problem that lacks a proper framework, and the proposal in the paper is quite interesting, it is also important to understand its relation to other frameworks, to explain clearly what it tries to fix in other proposals, and to interpret the results, maybe justifying or providing some intuition for why the proposed model performs better. I think this is crucial, particularly for a topic that is still in a growing phase, which makes it harder to judge. I know the appendix mentions domain adaptation, which is also something that comes to mind when looking at this architecture; however, this point is not discussed or mentioned as much in the main paper. In its current form, while the paper reads well, one is left trying to understand whether these results are significant.
I think the work is definitely very interesting and I hope the authors will resubmit it with modifications. I just feel that in its current format it will not have the impact it should, because of a persistently weak experimental section and a lack of clear grounding in the literature, leaving readers unsure of the significance of the work.
train
[ "8w8eQuf2Frj", "TAXbSLYnfla", "PsYFV5UtAZS", "3F_E-HGD9uZ", "ApOZ-JNa6V", "nqKHARIaMur", "Kl8aN3oK-_c", "eDjxq_M-Z1B", "KCQnpcVFFEO", "TIKFEW6KYHQ", "pUwoGW3WDVb", "4KD4MWDLkNd", "kpjp6k1wnoW", "gC6Wm4qxw7b" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " **[S2] The proposed method is straightforward + short of novelty**\n\nOn the first point (regarding the model being straightforward), we are in agreement. \nOur purpose was indeed to show that a relatively standard method, critically combining a data encoder with a set of modular classifiers (relation-decoders) c...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 3, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 4 ]
[ "gC6Wm4qxw7b", "gC6Wm4qxw7b", "gC6Wm4qxw7b", "kpjp6k1wnoW", "pUwoGW3WDVb", "4KD4MWDLkNd", "pUwoGW3WDVb", "iclr_2022_Rx_nbGdtRQD", "iclr_2022_Rx_nbGdtRQD", "kpjp6k1wnoW", "iclr_2022_Rx_nbGdtRQD", "iclr_2022_Rx_nbGdtRQD", "iclr_2022_Rx_nbGdtRQD", "iclr_2022_Rx_nbGdtRQD" ]
iclr_2022_bsr02xd-utn
Pairwise Adversarial Training for Unsupervised Class-imbalanced Domain Adaptation
Unsupervised domain adaptation (UDA) has become an appealing approach for knowledge transfer from a labeled source domain to an unlabeled target domain. However, when the classes in the source and target domains are imbalanced, most existing UDA methods experience a significant performance drop, as the decision boundary usually favors the majority classes. Some recent class-imbalanced domain adaptation (CDA) methods aim to tackle the challenge of biased label distribution by exploiting pseudo-labeled target data during the training process. However, these methods may be challenged by the problem of unreliable pseudo labels and error accumulation during training. In this paper, we propose a pairwise adversarial training approach to augment training data for unsupervised class-imbalanced domain adaptation. Unlike conventional adversarial training, in which the adversarial samples are obtained from the $\ell_p$ ball of the original data, we obtain the semantic adversarial samples from the interpolated line between the aligned pairwise samples from the source domain and target domain. Experimental results and an ablation study show that our method can achieve considerable improvements on the CDA benchmarks compared with state-of-the-art methods focusing on the same problem.
Reject
This paper aims to address the class-imbalance problem in unsupervised domain adaptation. The challenge lies in how to handle the difficulties introduced by imbalanced classes. To this end, this work proposes a new data augmentation strategy that takes the interpolation of two samples from the same class but from different domains as the augmented samples. The experiments demonstrate promising performance on the class-imbalanced domain adaptation datasets. However, there are several concerns raised by the reviewers. 1) The interpolation between a source and target sample of the same class can potentially be as unreliable as pseudo-label methods. 2) Some statements are based on intuition but are not well supported by either theoretical analysis or experimental evaluation. 3) The proposed method is inferior to baseline methods on some datasets; it would be helpful to have further analysis of the advantages and limitations of the proposed method. Overall, the paper provides some new and interesting ideas. However, given the above concerns, the novelty and significance of the paper are diminished. More discussion of the principles behind the proposed method and more experimental studies are needed. Addressing these concerns requires a significant amount of work. Although we think the paper is not ready for ICLR in this round, we believe it would be a strong one if the concerns can be well addressed.
train
[ "zlsQgmrGdXj", "AR-MOtuGqP", "RK-5abF1Pn", "GyHgfUTBBbz", "MGtVAVQQmJ_", "iZfIYpj_VN", "tHyB7X3YzaA", "HHqYAZQgSJn", "pXz67_hTgQ0", "hkDejGmYRol", "i4IEV1G-_SO", "BWA43wuh5h", "RtA_NlDGoLB" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you again for reviewing our paper. We have answered all the questions and provided additional experimental results. If there is anything unclear, we will address it further.", " Thank you again for reviewing our paper. We have answered all the questions and provided additional experimental results. If the...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5, 4 ]
[ "HHqYAZQgSJn", "MGtVAVQQmJ_", "tHyB7X3YzaA", "iZfIYpj_VN", "hkDejGmYRol", "BWA43wuh5h", "RtA_NlDGoLB", "i4IEV1G-_SO", "hkDejGmYRol", "iclr_2022_bsr02xd-utn", "iclr_2022_bsr02xd-utn", "iclr_2022_bsr02xd-utn", "iclr_2022_bsr02xd-utn" ]
iclr_2022_SvFQBlffMB
Pseudo Knowledge Distillation: Towards Learning Optimal Instance-specific Label Smoothing Regularization
Knowledge Distillation (KD) is an algorithm that transfers the knowledge of a trained, typically larger, neural network into another model under training. Although a complete understanding of KD is elusive, a growing body of work has shown that the success of both KD and label smoothing comes from a similar regularization effect of soft targets. In this work, we propose an instance-specific label smoothing technique, Pseudo-KD, which is efficiently learnt from the data. We devise a two-stage optimization problem that leads to a deterministic and interpretable solution for the optimal label smoothing. We show that Pseudo-KD can be equivalent to an efficient variant of self-distillation techniques, without the need to store the parameters or the output of a trained model. Finally, we conduct experiments on multiple image classification (CIFAR-10 and CIFAR-100) and natural language understanding datasets (the GLUE benchmark) across various neural network architectures and demonstrate that our method is competitive against strong baselines.
Reject
This work proposes an instance-specific label smoothing method, which is formulated as a two-stage optimization problem for finding the optimal label smoothing. The authors show that the proposed approach can be equivalent to an efficient variant of self-distillation techniques (i.e., no need to store the parameters or the output of a trained model). Experiments on image classification (CIFAR-10 and CIFAR-100) and natural language understanding datasets (the GLUE benchmark) demonstrate that the method is competitive against strong baselines. The reviewers find the proposed approach reasonable and the presentation clear. However, they all rated the paper as borderline, due to some concerns with the submission in its current form. These include limited novelty (by CUiW) [the link between label smoothing and knowledge distillation is largely based on previous research findings (e.g., Yuan et al., 2020)], unconvincing results on the effectiveness of the proposed method (by bwuZ) [the improvement of Pseudo-KD in practice is not significant in terms of test accuracy gains], and a lack of comparison with some recent related methods (by JBRd as well as other reviewers). The authors responded to these (and other) concerns, but this did not convince the reviewers regarding the issues listed above. I recommend the authors resubmit after addressing these issues.
train
[ "b0jhGBSiUKY", "X_5-0TZo5R-", "A1Rk3BvgO-", "3GKyq-mdWe", "xh6No0qzq3p", "NvK91u8jxBs", "MnKk42GZY3v", "eyJ-CczRv1F", "z79lsUBZQHp", "gWopkLxY5J", "PrmIWB_aa6W", "tSovZ-c8X2w", "9eCnjLOnm2", "XAjD0MEHWMJ" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank all of the reviewers' comprehensive remarks and helpful feedback. To address the concerns from reviewers, we have made the following main changes in our draft:\n- We revise the related work to include references mentioned by reviewers. Some of these works have previously been discussed and compared in pr...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 3 ]
[ "iclr_2022_SvFQBlffMB", "MnKk42GZY3v", "X_5-0TZo5R-", "XAjD0MEHWMJ", "XAjD0MEHWMJ", "PrmIWB_aa6W", "tSovZ-c8X2w", "3GKyq-mdWe", "9eCnjLOnm2", "PrmIWB_aa6W", "iclr_2022_SvFQBlffMB", "iclr_2022_SvFQBlffMB", "iclr_2022_SvFQBlffMB", "iclr_2022_SvFQBlffMB" ]
iclr_2022_fUhxuop_Q1r
Disentangling Generalization in Reinforcement Learning
Generalization in Reinforcement Learning (RL) is usually measured according to concepts from supervised learning. Unlike a supervised learning model however, an RL agent must generalize across states, actions and observations from limited reward-based feedback. We propose to measure an RL agent's capacity to generalize by evaluating it in a contextual decision process that combines a tabular environment with observations from a supervised learning dataset. The resulting environment, while simple, necessitates function approximation for state abstraction and provides ground-truth labels for optimal policies and value functions. The ground truth labels provided by our environment enable us to characterize generalization in RL across different axes: state-space, observation-space and action-space. Putting this method to work, we combine the MNIST dataset with various gridworld environments to rigorously evaluate generalization of DQN and QR-DQN in state, observation and action spaces for both online and offline learning. Contrary to previous reports about common regularization methods, we find that dropout does not improve observation generalization. We find, however, that dropout improves action generalization. Our results also corroborate recent findings that QR-DQN is able to generalize to new observations better than DQN in the offline setting. This success does not extend to state generalization, where DQN is able to generalize better than QR-DQN. These findings demonstrate the need for careful consideration of generalization in RL, and we hope that this line of research will continue to shed light on generalization claims in the literature.
Reject
This work proposes to study the generalization capabilities of RL algorithms using contextual decision processes (CDPs). CDPs allow studying generalization similar to how we are used to studying generalization in supervised learning, and can separate the generalization capabilities of a learned agent wrt observation, state and action space. This proposed measure for generalization is used in an extensive study on grid world domains to evaluate existing algorithms that aim to improve generalization.
**Strengths**
* This manuscript is well written and the work is well motivated
* A novel perspective and way of measuring generalization of learned agents
* An empirical study that compares existing algorithms on how well they generalize in observation, state, and action spaces
**Weaknesses**
* Some clarity issues existed (missing links to existing literature, experimental details)
* The empirical study is (out of necessity) limited to small-scale grid worlds
* No deeper analysis of the results — why do algorithms perform the way they do from this novel perspective of generalization? This makes it hard to understand how one could choose an algorithm for larger-scale settings which don't allow for this type of analysis
**Rebuttal**
The authors updated the paper to improve the parts that were unclear, and had an extensive discussion with reviewers on the intuition behind the results, converging on take-aways. Unfortunately, this intuition and these take-aways have not been added.
**Summary**
While I understand the authors' wish to not speculate on intuition, I agree with the reviewers that without (experimentally supported) take-aways the provided analysis is incomplete. Understanding why each algorithm achieves the performance it does wrt this novel way of measuring generalization is the only way the proposed measurement method and evaluation can be used to draw conclusions about more general problem settings.
Thus, although this is a very promising direction on an important problem, the manuscript is not ready yet for publication.
val
[ "O5Gk_sbD-hp", "3sA5yy59BP", "HfqaI7ijJ_z", "_tf-LBWW9Sc", "mUYRQT9EIZy", "AI0wAavvWLs", "t_xiec3S2tS", "C7EG8Fj9ZqS", "C7lPeJ0Vo9F", "Cn03fK-VpW1", "2BX_r-sgl97", "PYhKnlzeqqx", "L4YRG4w3zg9", "zoPNBr5aWb1", "hGt781-AGEc", "Tx294pAuwX", "5EE_s3YvZt3", "TXYR68DjnJC", "97E0y9bI2en...
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", ...
[ "This paper discusses generalization in deep RL. The key contribution of the paper, from my understanding, is that the authors argue that different from generalization in SL, in RL state, observation and action should be considered separately. A measurement (Eq.4) is proposed to evaluate generalization capacity of ...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2022_fUhxuop_Q1r", "t_xiec3S2tS", "_tf-LBWW9Sc", "5EE_s3YvZt3", "C7lPeJ0Vo9F", "C7EG8Fj9ZqS", "97E0y9bI2en", "C7LRgztOJXC", "hGt781-AGEc", "iclr_2022_fUhxuop_Q1r", "PYhKnlzeqqx", "L4YRG4w3zg9", "zoPNBr5aWb1", "TXYR68DjnJC", "I0sH_QXrsZq", "vhJ0RzV3s1y", "vhJ0RzV3s1y", "O5Gk_s...
iclr_2022_W6BpshgRi0q
Ask2Mask: Guided Data Selection for Masked Speech Modeling
Masked speech modeling (MSM) methods such as wav2vec2 or w2v-BERT learn representations over speech frames which are randomly masked within an utterance. While these methods improve the performance of Automatic Speech Recognition (ASR) systems, they have one major limitation: they treat all unsupervised speech samples with equal weight, which hinders learning, as not all samples contain relevant information from which to learn meaningful representations. In this work, we address this limitation. We propose ask2mask (ATM), a novel approach to focus on specific samples during MSM pre-training. ATM employs an external ASR model or \textit{scorer} to weight unsupervised input samples in two different ways: 1) a fine-grained data selection is performed by masking over the highly confident input frames as chosen by the scorer, which allows the model to learn meaningful representations; 2) ATM is further extended to focus at the utterance level by weighting the final MSM loss with the utterance-level confidence score. We conduct fine-tuning experiments on two well-benchmarked corpora: LibriSpeech (matching the pre-training data) and AMI (not matching the pre-training data). The results substantiate the efficacy of ATM in significantly improving recognition performance under mismatched conditions (up to 11.6\% relative) while still yielding modest improvements under matched conditions.
Reject
This paper presents an approach that uses ASR-based scores to guide the masking of high-confidence blocks for speech representation learning. As most of the reviewers mentioned, it is an incremental improvement over baseline systems with limited novelty. Regarding the use of confidence scores, which is a key factor of the method, the paper lacks sufficient discussion of their quality and sensitivity.
train
[ "Hu5-fzWQXb2", "n9waWZ5dO4c", "NkuhwU-4zF7", "gO3JSBY3cyX", "OwV8GcurFV", "Qk2MPVJN1zY", "5WJe8ii3oV", "vMcHdOsyF7", "4wa5TY3iHtr" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In this paper, authors proposed to use an external scorer to weight the frames to be masked for the MSM loss. Idea is interesting as not all speech frames are of equal importance. I personally think that authors have a good point here, how to do better mask selection in self-supervised learning. Obviously, not al...
[ 6, -1, -1, -1, -1, -1, 3, 5, 5 ]
[ 5, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "iclr_2022_W6BpshgRi0q", "4wa5TY3iHtr", "iclr_2022_W6BpshgRi0q", "vMcHdOsyF7", "Hu5-fzWQXb2", "5WJe8ii3oV", "iclr_2022_W6BpshgRi0q", "iclr_2022_W6BpshgRi0q", "iclr_2022_W6BpshgRi0q" ]
iclr_2022_PaQhL90tLmX
Robust Deep Neural Networks for Heterogeneous Tabular Data
Although deep neural networks (DNNs) constitute the state-of-the-art in many tasks based on image, audio, or text data, their performance on heterogeneous, tabular data is typically inferior to that of decision tree ensembles. To bridge the gap between DNNs' difficulty in handling tabular data and the flexibility of deep learning under input heterogeneity, we propose DeepTLF, a framework for deep tabular learning. The core idea of our method is to transform the heterogeneous input data into homogeneous data to considerably boost the performance of DNNs. For the transformation step, we develop a novel knowledge distillation approach, TreeDrivenEncoder, which exploits the structure of decision trees trained on the available heterogeneous data to map the original input vectors onto homogeneous vectors that a DNN can use to improve predictive performance. Through extensive and challenging experiments on various real-world datasets, we demonstrate that the DeepTLF pipeline leads to higher predictive performance. On average, our framework shows a 19.6\% performance improvement in comparison to DNNs. The DeepTLF code is publicly available.
Reject
The paper introduces a method called DeepTLF that handles heterogeneous tabular data by using a GBDT as an encoder for a DNN. The paper is clearly written and the method works as intended. There is, however, the issue of novelty (raised by Q6we). The method indeed relies on the capacity of a GBDT to represent the data: the internal node values are used as features to train a downstream neural network. This process is straightforward, which is good from an application perspective, though the paper offers limited insights to the community from a scientific perspective. Another reviewer concern was that of incomplete experiments and a lack of certain details (reviewers vaip and gWeP). This was answered in the rebuttal, which the reviewers acknowledged; however, the authors did not provide a revised version of the manuscript, when ICLR in fact allowed (and actually encouraged) revised versions to be submitted by Nov 22. Without a revised version, it is difficult for the reviewers to assess whether the text in the final manuscript will actually accurately reflect the changes they suggested. This justifiably caused two of the reviewers to keep their original scores (they explicitly stated the lack of an updated manuscript as the reason). Given the lack of an update, coupled with the issue of novelty, I conclude the paper is not ready to be accepted in its current form.
train
[ "qDZoeRVWPjg", "IKhSngh5nMO", "lprIm6IP1rq", "3yfklTS5NLN", "YD5ylNQh6W3", "gTgmtdwTe67", "8_uuASlp8P", "q0QzNONG_u8", "CQWXTxaTI8", "DRA9hbpHS8", "LXfygnMAj1Y" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I appreciate the authors posting the response, but most of my main concerns remain, such as lack of novelty (I stated that the authors are training GBDT on the entire \"training set\", not the entire \"dataset\", but the author response suggests that they either misread my review, or intentionally misinterpreted ...
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "q0QzNONG_u8", "8_uuASlp8P", "iclr_2022_PaQhL90tLmX", "gTgmtdwTe67", "CQWXTxaTI8", "LXfygnMAj1Y", "DRA9hbpHS8", "YD5ylNQh6W3", "iclr_2022_PaQhL90tLmX", "iclr_2022_PaQhL90tLmX", "iclr_2022_PaQhL90tLmX" ]
iclr_2022_EYCm0AFjaSS
ZerO Initialization: Initializing Residual Networks with only Zeros and Ones
Deep neural networks are usually initialized with random weights, with adequately selected initial variance to ensure stable signal propagation during training. However, there is no consensus on how to select the variance, and this becomes challenging especially as the number of layers grows. In this work, we replace the widely used random weight initialization with a fully deterministic initialization scheme ZerO, which initializes residual networks with only zeros and ones. By augmenting the standard ResNet architectures with a few extra skip connections and Hadamard transforms, ZerO allows us to start the training from zeros and ones entirely. This has many benefits such as improving reproducibility (by reducing the variance over different experimental runs) and allowing network training without batch normalization. Surprisingly, we find that ZerO achieves state-of-the-art performance over various image classification datasets, including ImageNet, which suggests random weights may be unnecessary for modern network initialization.
Reject
This paper suggests an architecture with a deterministic initialization which has only 0/1 values. The reviewers were mostly (marginally) negative, mainly because of the low novelty and significance of this work. Specifically, the main novelty issues were: 1) Improving convergence speed and removing BatchNorm: this was already done, in a quite similar manner, and achieves better or similar results (Fixup, ReZero: https://arxiv.org/abs/1901.09321, https://arxiv.org/abs/2003.04887, and a few others as well). 2) Initializing a network with a deterministic initialization: this was also done (ConstNet, https://arxiv.org/abs/2007.01038). I think the main difference from the previous work is the additional Hadamard connections, which help break the symmetry. However, it is unclear what the benefit of this modification is, as the previous work could train without it (albeit on CIFAR). Specifically, the main significance issues were: 1) Reducing standard deviation: the authors' response confirmed there is no statistically significant benefit (p ≈ 0.1) of the variance reduction when comparing with Kaiming initialization on ImageNet. 2) General network performance: the results do not seem better than the baseline (Xavier init is not a proper baseline in a network with ReLUs). 3) Sparsity claims: the network appears to lose accuracy even at 20% sparsity, which is not even useful for efficiency. For comparison, the lottery ticket hypothesis showed you can get to 90% sparsity and obtain better results. So, this is a nice observation, but not a major contribution. Therefore, I recommend that the authors better distinguish themselves from previous works (What are the changes? Why are they important?) and improve their empirical results so that they highlight the usefulness of the suggested method (e.g., improve the SOTA on some benchmark).
train
[ "MDnrAU1POPQ", "dNS6N-GBSYx", "O5d5wR0TZxJ", "QKx8slQXeS", "youvVs3tlWR", "r5QI5qQwiUx", "8awchmfIBHC", "slWDC3uFi8W", "YkfKQKLrC4-", "U3sEFoypAP", "aLimRcVdhV", "XNV1nDAev5f", "ARX4ObwsjFC", "g8tRcS3SHBa", "BYaFr0NtUtS", "qHTpzQxzZcL", "JGVx3D_e2q", "dZaTx6T-gfc", "iXvd7W7-4mT",...
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_r...
[ " Thanks for your reply. For point 4, we agree with you that avoiding vanishing gradients is not the main objective of the ResNet introduced by He et al [1], which is because batch normalizations alleviate the vanishing gradient problem for both plain and residual networks. However, skip connections do help avoid t...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 3, 3 ]
[ "dNS6N-GBSYx", "slWDC3uFi8W", "QKx8slQXeS", "youvVs3tlWR", "ARX4ObwsjFC", "8awchmfIBHC", "iclr_2022_EYCm0AFjaSS", "teP5BPTOJjx", "iXvd7W7-4mT", "dZaTx6T-gfc", "XNV1nDAev5f", "ARX4ObwsjFC", "qHTpzQxzZcL", "BYaFr0NtUtS", "JGVx3D_e2q", "iclr_2022_EYCm0AFjaSS", "iclr_2022_EYCm0AFjaSS", ...
iclr_2022_qTBC7E4c454
Recursive Construction of Stable Assemblies of Recurrent Neural Networks
Advanced applications of modern machine learning will likely involve combinations of trained networks, as are already used in spectacular systems such as DeepMind's AlphaGo. Recursively building such combinations in an effective and stable fashion while also allowing for continual refinement of the individual networks - as nature does for biological networks - will require new analysis tools. This paper takes a step in this direction by establishing contraction properties of broad classes of nonlinear recurrent networks and neural ODEs, and showing how these quantified properties allow in turn to recursively construct stable networks of networks in a systematic fashion. The results can also be used to stably combine recurrent networks and physical systems with quantified contraction properties. Similarly, they may be applied to modular computational models of cognition. We perform experiments with these combined networks on benchmark sequential tasks (e.g. permuted sequential MNIST) to demonstrate their capacity for processing information across a long timescale in a provably stable manner.
Reject
In the context of recurrent neural networks, the motivation of the paper is to explore the "space" between fully trained models and almost untrained models, e.g. echo state networks, using a formal approach. In fact, a modular approach has proven to be very successful in many practical applications, and in addition the brain seems to adopt this strategy as well. The theoretical issue addressed is the stability of the network (i.e., the network implements a contraction map). Specifically, it is assumed that a network is composed of a set of subnetworks that meet some stability condition by construction, and the problem is to design a mixing weight matrix, interconnecting the latent spaces of the subnetworks, able to give stability guarantees during and after training. Some novel stability conditions are proposed, as well as two different approaches to designing a successful mixing weight matrix. The originally submitted paper was not easy to read, and after revision major problems with the presentation have been resolved, although the current version looks more like an ordered collection of results/statements than a smooth and integrated flow of discourse. The revision has also addressed some reviewer concerns on the role of the size and sparsity of the modules, and the sensitivity of the stabilization condition to the mixing weight matrix has been experimentally assessed, obtaining interesting results. Overall the paper reports interesting results; however, the novelty of the contribution seems to be a bit weak: e.g., stability conditions on recurrent networks (although different from the reported ones) were already presented in the literature. Also, the idea of exploiting, in one of the proposed models, the fact that the matrix exponential of a skew-symmetric matrix is orthogonal in order to maintain the convergence condition during training is not novel.
Moreover, the experimental assessment does not provide a direct comparison, under the same architectural/learning setting, of the novel stability results versus those already presented in the literature. Empirical results are obtained on simple tasks (using datasets with sequences of identical length) and relatively small networks, which somewhat limits the scope of the assessment, and it is not clear whether the observed improvements (where obtained) are statistically significant (especially when compared with results obtained by networks with the same order of parameters). The quality of the assessment would increase significantly by considering datasets with sequences of different lengths and more challenging tasks that do require larger networks.
train
[ "zSSaNi9SO_4", "VbqbyCMrUnF", "pCev_-iq8W2", "daOfnJC41p", "KY5zs8sZwzQ", "qy3aVNgG6Nb", "iQkSyqYHHH3", "ItwmvP2cvK", "5c24Tk6Tf_N", "HnLQQjBqClX", "7Mgvf_TvQ5n", "LQO7txHQ0E0", "b_X8fZwZUZ", "Lk1qHbuCewV", "rUlQ3KKWJhN", "3Cs4TWBOft", "bNvm2aiVBy", "b6dKVELqYLo" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for increasing your score, we appreciate your feedback. \n\nWe would like to emphasize that our best performing architecture and the primary focus of the results section does not use the parameterization mentioned in [1]. In our Sparse Combo Net, we utilize sparse initialization to achieve sta...
[ -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, 6, 8 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, -1, -1, -1, -1, 2, 3 ]
[ "pCev_-iq8W2", "iclr_2022_qTBC7E4c454", "3Cs4TWBOft", "VbqbyCMrUnF", "ItwmvP2cvK", "iclr_2022_qTBC7E4c454", "7Mgvf_TvQ5n", "5c24Tk6Tf_N", "Lk1qHbuCewV", "rUlQ3KKWJhN", "b_X8fZwZUZ", "iclr_2022_qTBC7E4c454", "LQO7txHQ0E0", "b6dKVELqYLo", "bNvm2aiVBy", "VbqbyCMrUnF", "iclr_2022_qTBC7E4...
iclr_2022_hRVZd5g-z7
A Joint Subspace View to Convolutional Neural Networks
Motivated by the intuition that important image regions remain important across different layers and scales in a CNN, we propose in this paper a joint subspace view to convolutional filters across network layers. When we construct for each layer a filter subspace by decomposing convolutional filters over a small set of layer-specific filter atoms, we observe a low-rank structure within subspace coefficients across layers. The above observation matches widely known cross-layer filter correlation and redundancy. Thus, we propose to jointly model the filter subspace across different layers by enforcing cross-layer shared subspace coefficients. In other words, a CNN is now reduced to layers of filter atoms, typically a few hundred parameters per layer, with a common block of subspace coefficients shared across layers. We further show that such subspace coefficient sharing can be easily extended to other network sub-structures, from sharing across the entire network to sharing within filter groups in a layer. While significantly reducing the parameter redundancy of a wide range of network architectures, the proposed joint subspace view also preserves the expressiveness of CNNs and brings many additional advantages, such as easy model adaptation and better interpretation. We support our findings with extensive empirical evidence.
Reject
The paper eventually got 5 "marginally above the threshold" ratings after the rebuttal. Such scores testify that the paper is a borderline one. Reading the post-rebuttal comments, it is evident that most of the reviewers still deemed the novelty incremental. One of the reviewers (vUb9) raised the score simply to "encourage the authors to think more important problems", rather than acknowledging the merits of the paper. The AC also read through the paper and had the following opinions: 1. The paper is actually about DNN compression, based on the "new finding" that the weights across layers are low-rank. However, the authors chose not to write the paper as a DNN compression paper, instead putting more emphasis on the "new finding", which has no theoretical support at all (only some heuristic reasoning). The AC would deem the "new finding" to be only an assumption. 2. Actually, the "new finding" is not new at all. For example, [*] Zhong et al., ADA-Tucker: Compressing Deep Neural Networks via Adaptive Dimension Adjustment Tucker Decomposition, Neural Networks, 2019, used a shared core tensor (which could be regarded as a common dictionary) across all layers for higher compression rates. More recent references that use tensors and consider shared information across layers for compression can be easily found as well. The AC thanks the authors for preparing the rebuttals carefully, but regretfully the paper is not good enough for ICLR.
train
[ "Vr6LJ5Jm73q", "J1O1fJz40IJ", "yYmMiY8LGo3", "otcrh7WvBI2", "T-cOBiz-UiO", "96tNj1r7ht6", "juZj7bIJjhm", "Ikql7_wF9Pf", "Iqm-NzGQhnC", "J2uuxpPbTbj", "4e3MohH9W3B", "DNI9julg08R", "zanv-gG2tX-", "uBd8T5qyVUf", "ijWvZhMfdDBg", "QowW5-AenJ", "nNKy_N-Frsm", "vqhdTP5HG8u", "z_HCvqa_z...
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "...
[ " Thanks for the authors' effort on clarification of my queries about dimensions. Still, I think the novelty of this paper is still in a range of incremental contribution, so I stick to my previous recommendation.", " Dear reviewer FSGC,\n\nThanks for your support for our paper. We will release the PyTorch code u...
[ -1, -1, -1, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "QowW5-AenJ", "96tNj1r7ht6", "iclr_2022_hRVZd5g-z7", "iclr_2022_hRVZd5g-z7", "iclr_2022_hRVZd5g-z7", "vqhdTP5HG8u", "iclr_2022_hRVZd5g-z7", "iclr_2022_hRVZd5g-z7", "otcrh7WvBI2", "pqqbaJeCg3I", "MTu_86zkFd1", "T-cOBiz-UiO", "iclr_2022_hRVZd5g-z7", "iclr_2022_hRVZd5g-z7", "otcrh7WvBI2", ...
iclr_2022_j8J97VgdmsT
FLAME-in-NeRF: Neural control of Radiance Fields for Free View Face Animation
This paper presents a neural rendering method for controllable portrait video synthesis. Recent advances in volumetric neural rendering, such as neural radiance fields (NeRF), have enabled photorealistic novel view synthesis of static scenes with impressive results. However, modeling dynamic and controllable objects as part of a scene with such scene representations is still challenging. In this work, we design a system that enables 1) novel view synthesis for portrait video, of both the human subject and the scene they are in, and 2) explicit control of the facial expressions through a low-dimensional expression representation. We represent the distribution of human facial expressions using the expression parameters of a 3D Morphable Model (3DMM) and condition the NeRF volumetric function on them. Furthermore, we impose a spatial prior, brought by 3DMM fitting, to guide the network to learn disentangled control for static scene appearance and dynamic facial actions. We show the effectiveness of our method on free view synthesis of portrait videos with expression controls. To train a scene, our method only requires a short video of a subject captured by a mobile device.
Reject
This submission received 4 ratings, all below the acceptance threshold. The reviewers expressed concerns around overall novelty of contributions and quality of produced results, and also pointed out lack of comparisons with some prior works and gaps in empirical evaluation. The authors responded to most of these comments, but did not convince the reviewers to upgrade their ratings. The final recommendation is therefore to reject.
train
[ "5Tzy_3SeTv5", "k3dkNDSgP7G", "NKzaqb7lTgx", "FQkoixpOwGE", "blfiYiK-XqI", "mAuEYrLfGIG", "YxElhLShnVE", "QC-iakqNAa", "2bg83TERVv2", "c0nEn0PhJaD", "fRKq8QkelJ8", "-tRp4Zoqvh", "CyS5dcn4EbM" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the clarification! Unlike our work, *neither* of those works jointly models the background and the dynamic object *simultaneously*. Thus, there is no evidence that using such a geometric prior in a joint modelling setting will work. In the paper, we demonstrate the problem of Expression-Appearance enta...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 5, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4, 4 ]
[ "k3dkNDSgP7G", "NKzaqb7lTgx", "mAuEYrLfGIG", "blfiYiK-XqI", "YxElhLShnVE", "QC-iakqNAa", "CyS5dcn4EbM", "-tRp4Zoqvh", "c0nEn0PhJaD", "iclr_2022_j8J97VgdmsT", "iclr_2022_j8J97VgdmsT", "iclr_2022_j8J97VgdmsT", "iclr_2022_j8J97VgdmsT" ]
iclr_2022_JpNH4CW_zl
Multivariate Time Series Forecasting with Latent Graph Inference
This paper introduces a new architecture for multivariate time series forecasting that simultaneously infers and leverages relations among time series. We cast our method as a modular extension to univariate architectures where relations among individual time series are dynamically inferred in the latent space obtained after encoding the whole input signal. Our approach is flexible enough to scale gracefully according to the needs of the forecasting task under consideration. In its most straightforward and general version, we infer a potentially fully connected graph to model the interactions between time series, which allows us to obtain competitive forecast accuracy compared with the state of the art in graph neural networks for forecasting. In addition, whereas previous latent graph inference methods scale O(N^2) w.r.t. the number of nodes N (representing the time series), we show how to configure our approach to cater for the scale of modern time series panels. By assuming the inferred graph to be bipartite, where one partition consists of the original N nodes and the other of K newly introduced nodes (taking inspiration from low-rank decompositions), we reduce the time complexity of our procedure to O(NK). This allows us to leverage the dependency structure with a small trade-off in forecasting accuracy. We demonstrate the effectiveness of our method on a variety of datasets where it performs better than, or very competitively with, previous methods under both the fully connected and bipartite assumptions.
Reject
This paper studies multivariate time series forecasting by making relational inference in a latent space. It attempts to address the important issue of reducing the computational complexity of the inferred graph. This motivation is well articulated. Despite its merits, concerns have been raised regarding the relatively weak evaluation, which does not use datasets involving many more nodes to demonstrate the scalability of the proposed method, a major selling point of the paper. As such, while the motivation of the work is clear, its experimental evaluation is not thorough enough to demonstrate the scalability of the proposed method. The authors remarked in their response that they are not aware of any public time series dataset of this size (a claim disputed by another reviewer, who pointed out that some much larger datasets were used in other papers). Note that it is not uncommon in other work to use synthetic datasets to evaluate the scalability as well as other properties of the proposed methods. Moreover, the clarity of the presentation also has room for improvement. The paper has potential for publication in a top venue if the comments and suggestions are incorporated to revise the paper.
train
[ "0AMfyQkdcVi", "ETrqFkaJbfu", "4xd6tTfKFto", "lSEP_epJPSO", "ScnPzlyc0kR", "ukOiqhkklcN", "LHkcGDmdrz_", "QwFLm26DDsq", "V2QWia_3B4s" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper proposes using graph neural net (GNN) operations to combine per-series embeddings, to enable multivariate forecasting.\n\nSpecifically, N individual series are separately encoded for a given time window to get representations per series. These representations are then updated with a GNN - either assumin...
[ 5, -1, 3, -1, -1, -1, -1, -1, 5 ]
[ 4, -1, 5, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2022_JpNH4CW_zl", "ScnPzlyc0kR", "iclr_2022_JpNH4CW_zl", "4xd6tTfKFto", "4xd6tTfKFto", "V2QWia_3B4s", "0AMfyQkdcVi", "0AMfyQkdcVi", "iclr_2022_JpNH4CW_zl" ]
iclr_2022_30SXt3-vvnM
Model-Efficient Deep Learning with Kernelized Classification
We investigate the possibility of using the embeddings produced by a lightweight network more effectively with a nonlinear classification layer. Although conventional deep networks use an abundance of nonlinearity for representation (embedding) learning, they almost universally use a linear classifier on the learned embeddings. This is suboptimal since better nonlinear classifiers could exist in the same embedding vector space. We advocate a nonlinear kernelized classification layer for deep networks to tackle this problem. We theoretically show that our classification layer optimizes over all possible kernel functions on the space of embeddings to learn an optimal nonlinear classifier. We then demonstrate the usefulness of this layer in learning more model-efficient classifiers in a number of computer vision and natural language processing tasks.
Reject
The paper proposes the replacement of the softmax layer in a neural network with one parametrized by a kernel. The kernel itself is learned during training from the space of radial basis kernels. The resulting models are compared against identical networks with softmax, linear kernels, second order pooling and kervolutions on several datasets, encompassing vision and NLP tasks. First, the reviewers raised questions about the novelty of the work. Theorem 4.3, based on which the method is derived, has existed in the literature and seems to be related to the uniqueness of the power series expansions for kernels. There is novelty in using this theoretical result to write an approximation of a positive definite kernel in a way which can be learned. Specifically, it is written as a finite weighted sum of existing kernels, where the coefficients are learned. Reviewer pWF3 posed a valid question about the quality of the approximation, to which the authors responded with an equally valid, and comprehensive, appendix on the error bounds of the approximation. Still, it is worth tempering the statement that the search is 'exhaustive' over the space of radial kernels or that the kernel is optimal (instead, the search appears over a large class of radial kernels, and the kernel is approximately optimal with an extremely low distance from the actual optimum). Along the same lines of rephrasing claims, reviewer WDU4 also pointed out several statements and claims which were not entirely accurate, which the authors then proceeded to resolve, resulting in notable changes from the initial version of the paper. Specifically, there was mention of a "non-parametric kernelized classifier". This has been fixed, but it did seem to have initially confused other reviewers, who suggested related work that, it turns out, are not necessarily suitable contenders. The changes made definitely improved the paper, and resolved most of the reviewer's concerns. 
Nevertheless, the appendix added comparing the method to non-parametric models could be improved. For instance, the authors stated "Wilson et al. use Gaussian RBF and spectral mixture kernels. Our method has the capability to automatically learn any positive definite radial kernel. Note that Gaussian RBF and spectral kernels are all radial kernels." - is there any intuition, or proof, of a case when the method introduced here learns a network + classifier that the method by Wilson et al. cannot learn? Or for which deep kernel learning requires considerably more resources? (DKL has been optimized and made considerably faster since the initial paper in 2016). https://proceedings.neurips.cc/paper/2016/hash/bcc0d400288793e8bdcd7c19a8ac0c2b-Abstract.html Also, while the present work is backed by Theorem 4.3, DKL also has a theoretical grounding. https://www.jmlr.org/papers/volume20/17-621/17-621.pdf There was some discussion on the exhaustiveness of the experiments, and it was concluded that the datasets are sufficient, while the reviewers were not in agreement as to whether the authors considered sufficient contenders. A comparison against DKL, at least, appears to be warranted. Overall, the paper brings a contribution in terms of improving the performance of backbones with limited expressiveness through the use of a kernel-parametrized classifier, learned by optimizing an approximation of a formulation that spans the entire space of radial basis kernels. The paper was updated considerably during the review process, to its betterment; however, an experimental comparison against deep learning with non-parametric kernelized classifiers is still missing.
train
[ "62MB9M4RQJL", "Eh7kG8TlqIS", "BlZkxLqFlqQ", "w85R_3euGa4", "LSYHSyU4er3", "XFAvWJ4sxJB", "hH9WjtwGDDg", "-tg6CNasxNM", "iMO9D63WEsh", "dSgW0FU0sS", "Umr4cwXhBpH", "-cYMz8gZyu9", "MZbORRlohlK" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the clarification. We would like to highlight that we compare our method to kervolution, second-order pooling, linear kernel, automatic bandwidth parameter tuning of the Gaussian RBF kernel, and a traditional-MKL-like setting with multiple pre-defined kernels, all of which are kernel-based baselines us...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "w85R_3euGa4", "BlZkxLqFlqQ", "iMO9D63WEsh", "hH9WjtwGDDg", "dSgW0FU0sS", "MZbORRlohlK", "-cYMz8gZyu9", "hH9WjtwGDDg", "Umr4cwXhBpH", "iclr_2022_30SXt3-vvnM", "iclr_2022_30SXt3-vvnM", "iclr_2022_30SXt3-vvnM", "iclr_2022_30SXt3-vvnM" ]
iclr_2022_5fbUEUTZEn7
Graph Kernel Neural Networks
The convolution operator at the core of many modern neural architectures can effectively be seen as performing a dot product between an input matrix and a filter. While this is readily applicable to data such as images, which can be represented as regular grids in the Euclidean space, extending the convolution operator to work on graphs proves more challenging, due to their irregular structure. In this paper, we propose to use graph kernels, i.e., kernel functions that compute an inner product on graphs, to extend the standard convolution operator to the graph domain. This allows us to define an entirely structural model that does not require computing the embedding of the input graph. Our architecture allows plugging in any type and number of graph kernels and has the added benefit of providing some interpretability in terms of the structural masks that are learned during the training process, similarly to what happens for convolutional masks in traditional convolutional neural networks. We perform an extensive ablation study to investigate the impact of the model hyper-parameters and we show that our model achieves competitive performance on standard graph classification datasets.
Reject
The paper uses graph kernels to perform local convolutions and achieve better expressiveness than classical GNNs. The paper received three borderline reviews. The area chair found the feedback to be consistent and constructive and agrees with most statements made by the reviewers. Overall, the idea has some interest (even though there are other works that also propose hybrid approaches between graph kernels and GNNs, as noted in the paper). Nevertheless, there is a lot of room for improvement regarding the experimental validation, and the results are not very convincing (yet?). The datasets used in the paper have been traditionally used for evaluating GNNs, but they have strong limitations due to their small size, and it is often hard to draw conclusions from them. If the method does not suffer from scalability issues, it is likely that more interesting results could be obtained by using the ZINC or MOLHIV datasets, which are larger and often provide statistically significant results. Overall, these issues may require a major revision and, unfortunately, the area chair believes that the paper is not ready for publication.
train
[ "dreKldVGp_r", "6q_XfahZtG7", "fQJ8c3Oopee", "872RhAgmLds", "W1Gnt4MYiPt", "-xFvMUDEd-", "8W3m1akexbk", "9YtXjlqGnb0", "NbjjczOg1jq", "RC5MRShCETp", "OJk_PSMRfg0", "pEJwxcuxKCR", "d6BgARD627A" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " I thank the author for their rebuttal, which addresses some of my concerns. I will edit my review and possibly score after a discussion stage. ", " We respect the reviewer opinion but we do think that our works gives an important contribution by introducing a fully structural model that opens the door for inter...
[ -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "RC5MRShCETp", "872RhAgmLds", "iclr_2022_5fbUEUTZEn7", "8W3m1akexbk", "iclr_2022_5fbUEUTZEn7", "d6BgARD627A", "d6BgARD627A", "pEJwxcuxKCR", "W1Gnt4MYiPt", "pEJwxcuxKCR", "W1Gnt4MYiPt", "iclr_2022_5fbUEUTZEn7", "iclr_2022_5fbUEUTZEn7" ]
iclr_2022_uut_j3UrRCg
Provable hierarchical lifelong learning with a sketch-based modular architecture
We propose a modular architecture for lifelong learning of hierarchically structured tasks. Specifically, we prove that our architecture is theoretically able to learn tasks that can be solved by functions that are learnable given access to functions for other, previously learned tasks as subroutines. We show that some tasks that we can learn in this way are not learned by standard training methods in practice; indeed, prior work suggests that some such tasks cannot be learned by \emph{any} efficient method without the aid of the simpler tasks. We also consider methods for identifying the tasks automatically, without relying on explicitly given indicators.
Reject
This paper develops an approach to modular lifelong learning over hierarchical tasks, proving the learnability of certain task classes under different modular architectures, with empirical evaluations on toy supervised tasks. The authors are to be commended for being one of the few works that develop lifelong learning theory. However, the reviewers found the theoretical contributions to be relatively minimal and that the empirical work needs to provide more substantial insight before it is ready for publication. Moreover, the reviewers had substantial concerns with the paper's overall presentation, in many cases finding the paper's organization confusing with many asides and critical details relegated to the appendices. The confusing presentation especially needs to be remedied, and the authors are advised to take the reviewers' concerns into consideration when preparing future versions of their manuscript. On a minor point, the reviewers identified several places where the paper didn't cite or develop connections to relevant current literature. The authors might also be interested in connections to some much earlier work by Utgoff and Stracuzzi on many-layered learning (Neural Computation 14.10, 2002), which shares some high-level similarities to ideas explored in this paper.
train
[ "fwfWY6uUo2", "43ZOt79yDaV", "xrtcmpNM-2w", "pjn5dVd7i8h", "pxb7cbpthWl", "T21sO63U9Rm", "RKXaVZa9wFG", "YucaNeXn6R-", "b3_Z9rjEunv", "9QKfInxSg7", "ZMu23zRnkhn", "4Umtxr-w3kl", "tUVlbjU3E6" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We appreciate the comments. In the following, we give further explanations in the hope of clarifying the confusion.\n\nRegarding bullet point 2:\n\n1. For simplicity we assumed that the task distribution is iid (stationary). We actually only need that because the inputs for a higher-level task cannot all come bef...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 3, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 2, 3, 3 ]
[ "43ZOt79yDaV", "b3_Z9rjEunv", "RKXaVZa9wFG", "pxb7cbpthWl", "YucaNeXn6R-", "tUVlbjU3E6", "9QKfInxSg7", "4Umtxr-w3kl", "ZMu23zRnkhn", "iclr_2022_uut_j3UrRCg", "iclr_2022_uut_j3UrRCg", "iclr_2022_uut_j3UrRCg", "iclr_2022_uut_j3UrRCg" ]
iclr_2022_ht61oVsaya
DESTA: A Framework for Safe Reinforcement Learning with Markov Games of Intervention
Exploring in an unknown system can place an agent in dangerous situations, exposing it to potentially catastrophic hazards. Many current approaches for tackling safe learning in reinforcement learning (RL) lead to a trade-off between safe exploration and fulfilling the task. Though these methods possibly incur fewer safety violations, they often also lead to reduced task performance. In this paper, we take the first step in introducing a generation of RL solvers that learn to minimise safety violations while maximising the task reward to the extent that can be tolerated by safe policies. Our approach uses a new two-player framework for safe RL called DESTA. The core of DESTA is a novel game between two RL agents: a Safety Agent that is delegated the task of minimising safety violations and a Task Agent whose goal is to maximise the reward set by the environment task. The Safety Agent can selectively take control of the system at any given point to prevent safety violations, while the Task Agent is free to execute its actions at all other states. This framework enables the Safety Agent to learn to take actions that minimise future safety violations (during and after training) by performing safe actions at certain states, while the Task Agent performs actions that maximise the task performance everywhere else. We demonstrate DESTA’s ability to tackle challenging tasks and compare against state-of-the-art RL methods in Safety Gym benchmarks, which simulate real-world physical systems, and OpenAI’s Lunar Lander.
Reject
This paper investigated the online safe reinforcement learning problem in the constrained MDP setting. By introducing a Safety Agent and a Task Agent, the authors translate the RL problem into a Markov game. The AC agrees with all reviewers that there is a lack of theoretical analysis and experimental comparisons with existing benchmarks. It has not reached the bar of ICLR papers.
val
[ "hsXPpqtXJc3", "t0uuPZ4SDUP", "JyMKXwodoBz", "E9UiD7D5d3z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors use a Markov game-based approach to study constrained Markov decision processes, propose a safety training algorithm, and show better performance than baseline methods in computational experiments. \n Strengths:\n\n(1). The paper proposes a Markov game-based method for dealing with two competing ob...
[ 5, 3, 3, 3 ]
[ 2, 2, 3, 2 ]
[ "iclr_2022_ht61oVsaya", "iclr_2022_ht61oVsaya", "iclr_2022_ht61oVsaya", "iclr_2022_ht61oVsaya" ]
iclr_2022_GthNKCqdDg
Selective Token Generation for Few-shot Language Modeling
Natural language modeling with limited training data is a challenging problem, and many algorithms make use of large-scale pretrained language models (PLMs) for this due to their strong generalization ability. Among these transfer learning algorithms for PLMs, additive learning, which incorporates a task-specific adapter on top of the fixed PLM, has been popularly used to alleviate the severe overfitting problem in the few-shot setting. However, this added task-specific adapter is generally trained by maximum likelihood estimation, which can easily suffer from the so-called exposure bias problem, especially in sequential text generation. Therefore, in this work, we develop a novel additive learning algorithm based on reinforcement learning (RL) for few-shot natural language generation (NLG) tasks. In particular, we propose to use selective token generation between the transformer-based PLM and the task-specific adapter during both training and inference. This output token selection between the two generators allows the adapter to focus only on the task-relevant parts of sequence generation, and therefore makes it more robust to overfitting as well as more stable in RL training. In addition, in order to obtain an adapter complementary to the PLM for each few-shot task, we exploit a separate selecting module that is also simultaneously trained using RL. Experimental results on various few-shot NLG tasks, including data-to-text generation and text summarization, demonstrate that the proposed selective token generation significantly outperforms previous additive learning algorithms based on PLMs.
Reject
This paper presents a reinforcement learning inspired algorithm to train task-specific adapters to adapt pretrained language models for downstream tasks. The paper attempts to tackle an important problem. All reviewers have concerns about whether the results are strong enough to justify claims made in the paper. I appreciate revisions that have been done by the authors during the rebuttal period. However, I believe that the paper is still below the bar for ICLR. I recommend rejecting this paper.
train
[ "aFK7Bjfln95", "gr3fyAv_cqs", "XCiDkf1EhL", "zFXThSwSTkA", "qjaQ-zxxp6", "rGZZEbb3UI", "L4awVRe_lYO", "M8rb8Q56ntk", "TWmDC4d7MF", "pzGqhE3Ecz8", "dFTLrdJIQZZ", "5s28ObMu1Hv", "xGSeWcSWbqQ" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for answering my question and revising the paper (especially section 4.7).\nAfter reading the comments and answers from other reviewers, I have decided to retain my original score.", " Thanks for your comments.\n\nWe would like to comment about the related works you mentioned first and then our though...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 5, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 4, 4 ]
[ "M8rb8Q56ntk", "L4awVRe_lYO", "iclr_2022_GthNKCqdDg", "xGSeWcSWbqQ", "dFTLrdJIQZZ", "iclr_2022_GthNKCqdDg", "TWmDC4d7MF", "5s28ObMu1Hv", "pzGqhE3Ecz8", "iclr_2022_GthNKCqdDg", "iclr_2022_GthNKCqdDg", "iclr_2022_GthNKCqdDg", "iclr_2022_GthNKCqdDg" ]
iclr_2022_OkB0tlodmH
Q-learning for real time control of heterogeneous microagent collectives
The effective control of microscopic collectives has many promising applications, from environmental remediation to targeted drug delivery. A key challenge is understanding how to control these agents given their limited programmability, and in many cases heterogeneous dynamics. The ability to learn control strategies in real time could allow for the application of robotics solutions to drive collective behaviours towards desired outcomes. Here, we demonstrate Q-learning on the closed-loop Dynamic Optical Micro-Environment (DOME) platform to control the motion of light-responsive Volvox agents. The results show that Q-learning is efficient in autonomously learning how to reduce the speed of agents on an individual basis.
Reject
This paper applies and evaluates the use of Q-learning for the control of microscopic collectives of Volvox algae. While the application is indeed very cool and potentially impactful, the paper has no theoretical contribution to the field of machine learning, as it consists of an empirical evaluation of an existing (and well-established) algorithm. The reviewers agree on the importance of the application but reported concerns about the current manuscript. In particular: - Reviewers QBsR and GPp7 suggested including additional comparisons to other learning algorithms - Reviewers QBsR and BtTc also suggested improving the writing Overall, I agree with the reviewers that the current manuscript has a lot of potential, but it could benefit from additional work. Please carefully consider and incorporate the feedback received from the reviewers. Personally, I think that presenting a sharper message and clearer insights would further increase the quality of exposition and help make a stronger case for why this manuscript is relevant to the larger ML community.
train
[ "dsEzVkDqEf5", "TXkIjelol5g", "3un7fiVWs7F", "R3Pn5w4wgrV", "SuLEzor7c6d", "rQN3BfpauNL", "tA2RcLCNGw2" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The motivation is more clear to me. With this, I've decided to keep my score.", " It does indeed seem a good idea to formulate the state space in a more continuous manner as described. For this work in particular, a discrete formulation was chosen as it seemed to relate the most simply to the way the system had...
[ -1, -1, -1, -1, 3, 6, 3 ]
[ -1, -1, -1, -1, 4, 2, 2 ]
[ "3un7fiVWs7F", "rQN3BfpauNL", "tA2RcLCNGw2", "SuLEzor7c6d", "iclr_2022_OkB0tlodmH", "iclr_2022_OkB0tlodmH", "iclr_2022_OkB0tlodmH" ]
iclr_2022_c8JDlJMBeyh
Towards Generic Interface for Human-Neural Network Knowledge Exchange
Neural Networks (NNs) outperform humans in multiple domains. Yet they suffer from a lack of transparency and interpretability, which hinders intuitive and effective human interaction with them. Especially when a NN makes mistakes, humans can hardly locate the reason for the error, and correcting it is even harder. While recent advances in explainable AI have substantially improved the explainability of NNs, effective knowledge exchange between humans and NNs is still under-explored. To fill this gap, we propose Human-NN-Interface (HNI), a framework using a structural representation of visual concepts as a ”language” for humans and NNs to communicate, interact, and exchange knowledge. Taking image classification as an example, HNI visualizes the reasoning logic of a NN with class-specific Structural Concept Graphs (c-SCGs), which are human-interpretable. On the other hand, humans can effectively provide feedback and guidance to the NN by modifying the c-SCG and transferring the knowledge back to the NN through HNI. We demonstrate the efficacy of HNI with image classification tasks and 3 different types of interactions: (1) explaining the reasoning logic of NNs so humans can intuitively identify and locate errors of the NN; (2) human users can correct the errors and improve the NN’s performance by modifying the c-SCG and distilling the knowledge back to the original NN; (3) human users can intuitively guide the NN and provide a new solution for zero-shot learning.
Reject
The paper provides a way for explaining the reasoning of a neural network to humans in the form of a class-specific structural concept graph (c-SCG). The c-SCG can be modified by humans. The modified c-SCG can be incorporated in training a new student model. Experiments show that the new model performs better on classes whose corresponding c-SCGs have been modified. While all the reviewers agree that the paper puts forth an interesting idea, some reviewers raised concerns about the scale of the experiments and the lack of theoretical guarantees on the fidelity of the SCG. The authors have added two large-scale experiments, which confirm their previous results, as part of their rebuttal. This paper is borderline and needs to be discussed further.
train
[ "edufiQ-Mdz9", "lP8CjrsoSS", "Op-BoacQf1U", "VqFN2kxO-96", "78ezEbk9kTJ", "4lW-1M2sx6R", "wn_au4G1kcl", "anm3gOwWvz", "PDHxw31AHuA", "AC9q16RG92O", "japsTkNKD-", "eI0P8Mgo1tR", "SKI2-3AKA_r", "t200OxNCDHA", "AJJ1zNx2JoZ", "L6pq0FLN75o" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We provide further explanation about your suggestion \"*interventions from users based on the explanations, the paper would be much stronger.*\"\n\n***Interventions based on explanations***: Thanks for your suggestion! Yes, Interventions based on explanation is exactly what we want to demonstrate in our evaluatio...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 3 ]
[ "lP8CjrsoSS", "L6pq0FLN75o", "eI0P8Mgo1tR", "AJJ1zNx2JoZ", "4lW-1M2sx6R", "t200OxNCDHA", "iclr_2022_c8JDlJMBeyh", "PDHxw31AHuA", "AC9q16RG92O", "SKI2-3AKA_r", "iclr_2022_c8JDlJMBeyh", "L6pq0FLN75o", "iclr_2022_c8JDlJMBeyh", "iclr_2022_c8JDlJMBeyh", "iclr_2022_c8JDlJMBeyh", "iclr_2022_c...
iclr_2022_nL2lDlsrZU
SAINT: Improved Neural Networks for Tabular Data via Row Attention and Contrastive Pre-Training
Tabular data underpins numerous high-impact applications of machine learning from fraud detection to genomics and healthcare. Classical approaches to solving tabular problems, such as gradient boosting and random forests, are widely used by practitioners. However, recent deep learning methods have achieved a degree of performance competitive with popular techniques. We devise a hybrid deep learning approach to solving tabular data problems. Our method, SAINT, performs attention over both rows and columns, and it includes an enhanced embedding method. We also study a new contrastive self-supervised pre-training method for use when labels are scarce. SAINT consistently improves performance over previous deep learning methods, and it even performs competitively with gradient boosting methods, including XGBoost, CatBoost, and LightGBM, on average over $30$ benchmark datasets in regression, binary classification, and multi-class classification tasks.
Reject
The paper presents a deep-learning network architecture for (semi-)supervised tabular data classification and regression problems based on a new attention mechanism between samples (rows) and features (columns). The model is compared to 10 state-of-the-art methods and studied on 30 diverse datasets (10 for binary classification, 10 for multiclass classification and 10 for regression). It also includes a contrastive learning approach for pre-training on unlabeled data and fine-tuning on a small number of labels. Explainability capabilities are not presented in a very convincing way. While the reviewers find the problem relevant, they criticise the novelty and, in particular, the experimental comparison. Concerns about hyperparameter tuning of the authors' own method vs. the comparison methods were voiced by the reviewers. While these concerns have partially been addressed in the author response, the reviewers still doubt the fairness of the comparison.
train
[ "E_j-wnfGFXZ", "Q3aatpYgM9B", "3WB4ygK4NlE", "Cb8FLym1_10", "3wXCy-WdrJ7", "HEIdMChIZJo", "t7hlwgtlUFH", "ktxNkKqmRa3", "m8ocDtPqziK", "3FZgi99753O", "soHvHimZ9rw", "ONiHtKGYgY", "WT9bIgu3kPs", "U93phjQuKXT", "aH45NlYvhsv" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " As suggested by the reviewer, we further tune the two baselines with increased embedding dimensions - 64 and 128 (while SAINT’s maximum embedding dimension is 32). We also add learning_rate ∈ [1e-4, 1e-3, 1e-2, 2e-2] to our grid-search. We observe that the performance of TabNet and TabTransformer do not improve. ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 3, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5, 4 ]
[ "Q3aatpYgM9B", "3WB4ygK4NlE", "Cb8FLym1_10", "t7hlwgtlUFH", "iclr_2022_nL2lDlsrZU", "t7hlwgtlUFH", "aH45NlYvhsv", "m8ocDtPqziK", "U93phjQuKXT", "WT9bIgu3kPs", "ONiHtKGYgY", "iclr_2022_nL2lDlsrZU", "iclr_2022_nL2lDlsrZU", "iclr_2022_nL2lDlsrZU", "iclr_2022_nL2lDlsrZU" ]
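SAINT's key architectural idea, per the abstract above, is attending over rows (samples) as well as columns (features). The sketch below is an illustration only, not SAINT's actual implementation: it applies a single scaled dot-product self-attention pass across the row axis of a batch, with the learned query/key/value projections omitted (Q = K = V = X) for brevity.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def intersample_attention(X):
    """Self-attention across the *row* axis of a batch X (list of row
    embeddings): each sample attends to every other sample in the batch.
    Projections are omitted to keep the sketch minimal; SAINT itself uses
    learned multi-head projections over tokenized features."""
    d = len(X[0])
    out = []
    for q in X:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in X]
        attn = softmax(scores)
        # each output row is an attention-weighted mix of all input rows
        out.append([sum(a * v[j] for a, v in zip(attn, X))
                    for j in range(d)])
    return out
```

Because each output row is a convex combination of input rows, any property preserved under convex combination (e.g. rows summing to one) carries over to the output.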
iclr_2022_-cII-Vju5C
Orthogonalising gradients to speedup neural network optimisation
The optimisation of neural networks can be sped up by orthogonalising the gradients before the optimisation step, ensuring the diversification of the learned representations. We hypothesize that components in the same layer learn the same representations at the beginning of learning. To prevent this we orthogonalise the gradients of the components with respect to each other. Our method of orthogonalisation allows the weights to be used more flexibly, in contrast to restricting the weights to an orthogonalised sub-space. We tested this method on ImageNet and CIFAR-10 resulting in a large decrease in learning time, and also obtain a speed-up on the semi-supervised learning BarlowTwins. We obtain similar accuracy to SGD without fine-tuning and better accuracy for naïvely chosen hyper-parameters.
Reject
This paper proposes orthogonalising loss gradients with respect to neural network parameters to speed up optimization and improve performance. The reviewers are unanimous in recommending rejection of the paper. They highlight the following issues: * weak baselines, which make it difficult to judge the contribution of this paper empirically * lack of discussion of relevant literature and existing techniques * arbitrary choices in the design of the algorithm, not backed up by theory or convincing arguments The reviewers acknowledge the author response, but remain largely unconvinced of the merit of the proposed approach. I see no special reasons to disregard the reviewer assessments, and I therefore recommend not accepting this paper.
train
[ "QqYnCzAilCe", "HO2zfEeLBBh", "j5UE1qP2_LV", "TuEFVJFZzAx", "2k8QsDBWFY_", "tH-_WJGExDR", "SLDslliRIE2", "RgwZVqjlIv", "N9hj6GI6zVl", "JTqTTPSU3Z6" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thanks to the authors for their response. I have read the response and reviewed the paper and have decided to keep my rating the same.", "The paper proposes inserting a gradient orthogonalisation step before each update for first-order optimisation methods. The orthogonalisation is accomplished via SVD, and the...
[ -1, 3, 3, -1, -1, -1, -1, -1, 3, 3 ]
[ -1, 4, 5, -1, -1, -1, -1, -1, 5, 4 ]
[ "2k8QsDBWFY_", "iclr_2022_-cII-Vju5C", "iclr_2022_-cII-Vju5C", "tH-_WJGExDR", "N9hj6GI6zVl", "JTqTTPSU3Z6", "j5UE1qP2_LV", "HO2zfEeLBBh", "iclr_2022_-cII-Vju5C", "iclr_2022_-cII-Vju5C" ]
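The orthogonalisation step described above (per one review, accomplished via SVD) can be illustrated with a small stand-in. The sketch below uses modified Gram-Schmidt rather than SVD, purely to keep the example dependency-free; it is not the paper's exact procedure.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def orthogonalise_rows(grad):
    """Return a copy of `grad` (a layer's gradient, one row per output
    component) whose rows are mutually orthogonal, via modified
    Gram-Schmidt. Stand-in for the paper's SVD-based orthogonalisation."""
    out = []
    for row in grad:
        r = list(row)
        for q in out:
            qq = dot(q, q)
            if qq > 1e-12:  # skip (near-)zero rows to avoid division by zero
                coef = dot(r, q) / qq
                r = [a - coef * b for a, b in zip(r, q)]
        out.append(r)
    return out

def sgd_step(weights, grad, lr=0.1):
    """Plain SGD update applied after orthogonalising the gradient rows."""
    g = orthogonalise_rows(grad)
    return [[w - lr * gi for w, gi in zip(wr, gr)]
            for wr, gr in zip(weights, g)]
```

After the transform, every pair of gradient rows has (numerically) zero inner product, which is the diversification property the paper aims for.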
iclr_2022_yV4_fWe4nM
Deep Fair Discriminative Clustering
Deep clustering has the potential to learn a strong representation and hence better clustering performance than traditional clustering methods such as $k$-means and spectral clustering. However, this strong representation learning ability may make the clustering unfair by discovering surrogates for protected information which our experiments empirically show. This work studies a general notion of group-level fairness for both binary and multi-state protected status variables (PSVs). We begin by formulating the group-level fairness problem as an integer linear programming whose totally unimodular constraint matrix means it can be efficiently solved via linear programming. We then show how to inject this solver into a discriminative deep clustering backbone and hence propose a refinement learning algorithm to combine the clustering goal with the fairness objective to learn fair clusters adaptively. Experimental results on real-world datasets demonstrate that our model consistently outperforms state-of-the-art fair clustering algorithms. Furthermore, our framework shows promising results for novel fair clustering tasks including flexible fairness constraints, multi-state PSVs, and predictive clustering.
Reject
This paper received a majority vote for rejection. In the internal discussion, no reviewer was willing to change their score in light of the author response. I have read all the materials of this paper, including the manuscript, appendix, comments and responses. Based on the information collected from all reviewers and my personal judgement, I make the recommendation on this paper: *rejection*. Here are the comments that I summarized, which include my opinion and evidence. **Motivation** The motivation of this paper is not strong. In this paper, the authors claimed that the fairness level of deep clustering methods is relatively poor compared with traditional fair clustering methods. The traditional fair clustering methods employ hard constraints to achieve fairness by sacrificing cluster utility. Instead, deep fair clustering methods seek a trade-off between fairness level and cluster utility; therefore, deep fair clustering can be regarded as using soft constraints. It is not necessary to compare two different kinds of fairness constraints. Even the proposed method is a trade-off between fairness level and cluster utility. **Self-augmented Training** The relationship between self-augmented learning and fairness learning is unclear. I guess that the authors added this module simply to enhance cluster utility. However, such a loss or operator can also be applied to other (fair) clustering algorithms. The experimental comparisons in Section 5 are unfair. No ablation study on this is provided. **Novelty** One reviewer pointed out that there exists prior work, proposed in 2017, that plugs integer linear programming into a probabilistic discriminative clustering model. **Experiments** (1) ScFC and DFCV release their codes; no results of these two methods were reported on HAR. (2) No standard deviations are reported. (3) The Initial ILP Results (Ours) and Ours Result in Table 1 on the HAR dataset are 0.653 and 0.468, both higher than the Ground Truth (Optimal) 0.458. **Presentation** A few statements are not well-supported, or require small changes to be made correct. No objection from reviewers was raised against this recommendation.
test
[ "TsQ8G06EyHg", "DmqVVcfoU63", "0Y_anLiKqrT", "8M9aYbiNQB", "rNulI5TNypf", "_oushF68EO", "X133EsWHBz", "2eXdECsOq5k", "WKp-U6lwST5" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer's helpful suggestions for our work and updated our paper based on comments. We understand the reviewer's concerns and hope the following answers provide a satisfying response.\n\n**As Section 4.1 is simply the introduction of the backbone network, maybe it should be an individual section. In...
[ -1, -1, -1, -1, -1, 6, 5, 5, 3 ]
[ -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "X133EsWHBz", "2eXdECsOq5k", "8M9aYbiNQB", "_oushF68EO", "WKp-U6lwST5", "iclr_2022_yV4_fWe4nM", "iclr_2022_yV4_fWe4nM", "iclr_2022_yV4_fWe4nM", "iclr_2022_yV4_fWe4nM" ]
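The group-level fairness the abstract refers to is commonly quantified by a "balance" score: the worst-case ratio of protected-group proportions across clusters. The sketch below computes that metric for a binary PSV; it is an illustrative measure of the fairness being optimized, not the paper's ILP objective.

```python
from collections import Counter

def balance(assignments, psv):
    """Group-level fairness 'balance' for a binary protected status variable.

    assignments: cluster id per point; psv: 0/1 protected attribute per point.
    Returns min over clusters of min(n0/n1, n1/n0); 1.0 is perfectly fair,
    0.0 means some cluster contains only one protected group.
    """
    clusters = {}
    for c, s in zip(assignments, psv):
        clusters.setdefault(c, Counter())[s] += 1
    score = 1.0
    for counts in clusters.values():
        n0, n1 = counts.get(0, 0), counts.get(1, 0)
        if n0 == 0 or n1 == 0:
            return 0.0
        score = min(score, n0 / n1, n1 / n0)
    return score
```

For example, a clustering that splits each protected group evenly across clusters scores 1.0, while one that segregates the groups scores 0.0; hard-constrained fair clustering fixes this score, whereas a soft trade-off lets it move against utility.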
iclr_2022_HO_LL-oqBzW
FCause: Flow-based Causal Discovery
Current causal discovery methods either fail to scale, model only limited forms of functional relationships, or cannot handle missing values. This limits their reliability and applicability. We propose FCause, a new flow-based causal discovery method that addresses these drawbacks. Our method is scalable to both high dimensional as well as large volume of data, is able to model complex nonlinear relationships between variables, and can perform causal discovery under partially observed data. Furthermore, our formulation generalizes existing continuous optimization based causal discovery methods, providing a unified view of such models. We perform an extensive empirical evaluation, and show that FCause achieves state of the art results in several causal discovery benchmarks under different conditions reflecting real-world application needs.
Reject
This paper proposed a flow-based approach, FCause, to Bayesian causal discovery that is scalable, flexible, and able to handle missing data. Reviewers were split on this paper and could not reach a consensus during the discussion, and no reviewer pushed for acceptance. After taking a closer look myself, I agree with several of the reviewers that while the core ideas here are interesting and novel, there remain too many unresolved issues that require another round of revision. I encourage the authors to carefully take the reviewers' comments into account and re-submit this promising work to another ML venue.
train
[ "IQW9UbQY_-z", "2gIDBS796_H", "ID65ihHrHh2", "seuNIrL4tUm", "rZ_7go9AmTu", "3aXOIhsLJBy", "R81edh0439", "VcTk2-cJadt", "f7LbEcAIqYv", "YTLBvUjwYig", "FRBdY6cXmZV", "yYRAw4ZDC9z", "xBqQXyYepr-", "0DDuCj91LvK", "NTclBs180ZG", "RM5ZSpq6Omc" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank reviewer DG4z for your support again based on the strong performance of the proposed method and the contribution of the unified view. Dear reviewer q8GH, Du7p and 2kV5, please let us know if you have any other question after reading our response. ", " Thank you for addressing my questions. It's good to kn...
[ -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 3, 8, 5, 3 ]
[ -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 4, 3, 3, 3 ]
[ "iclr_2022_HO_LL-oqBzW", "YTLBvUjwYig", "seuNIrL4tUm", "rZ_7go9AmTu", "VcTk2-cJadt", "iclr_2022_HO_LL-oqBzW", "RM5ZSpq6Omc", "3aXOIhsLJBy", "NTclBs180ZG", "0DDuCj91LvK", "xBqQXyYepr-", "iclr_2022_HO_LL-oqBzW", "iclr_2022_HO_LL-oqBzW", "iclr_2022_HO_LL-oqBzW", "iclr_2022_HO_LL-oqBzW", "...
iclr_2022_FFM_oJeqZx
Adaptive Pseudo-labeling for Quantum Calculations
Machine learning models have recently shown promise in predicting molecular quantum chemical properties. However, the path to real-life adoption requires (1) learning under low-resource constraints and (2) out-of-distribution generalization to unseen, structurally diverse molecules. We observe that these two challenges can be alleviated via abundant labels, which is often not the case in quantum chemistry. We hypothesize that pseudo-labeling on a vast array of unlabeled molecules can serve as gold-label proxies to greatly expand the labeled training dataset. The challenge in pseudo-labeling is to prevent bad pseudo-labels from biasing the model. We develop a simple and effective strategy, Pseudo, that can assign pseudo-labels, detect bad pseudo-labels through evidential uncertainty, and then prevent them from biasing the model using adaptive weighting. Empirically, Pseudo improves quantum calculation accuracy across full-data, low-data and out-of-distribution settings.
Reject
The reviewers were split on this paper: the positive review appreciated (a) how adaptive weighting can be viewed as part of energy minimization, (b) the flexibility of the model to work with different model backbones, (c) the demonstration that even in no-noise settings the method generates noticeable improvements. However, all reviews saw important shortcomings in (a) the few out-of-distribution results, (b) the limited ablation studies, (c) the clarity of the writing, particularly in notation, (d) the explanations of experimental results (e.g., why using pseudo-labels sometimes deteriorates performance), (e) the assumptions behind the proposed method, (f) the lack of self-training baselines, (g) limited technical novelty. Ultimately, the number and severity of the shortcomings outweigh the positive parts of the paper. If the authors take the reviewers' recommendations into account, the paper will be a much stronger submission.
train
[ "Y2c4CmmL2Mi", "SUliHORLcpi", "HNhmoOZ_JXw", "PJ2blansqrZ", "iyQ6gmpLL_6", "UtPrZoQAQK8", "e_APFRHGRJD", "Gqy5g0WGSX0", "5MtjvMsFMtW", "PapZcY6qtlf" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear authors,\n\nThanks a lot for detailed explanation, updated version of the paper and additional experiments I asked about. Please find below some discussion and comments regarding your updated paper and last comments.\n\n> Yes, we randomly sample a batch from the joint set of supervised and unsupervised data....
[ -1, -1, -1, -1, -1, -1, 3, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 3, 5 ]
[ "SUliHORLcpi", "PapZcY6qtlf", "5MtjvMsFMtW", "Gqy5g0WGSX0", "UtPrZoQAQK8", "e_APFRHGRJD", "iclr_2022_FFM_oJeqZx", "iclr_2022_FFM_oJeqZx", "iclr_2022_FFM_oJeqZx", "iclr_2022_FFM_oJeqZx" ]
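The abstract's strategy of detecting bad pseudo-labels via uncertainty and downweighting them can be sketched as follows. The exponential weighting form and the `tau`/`threshold` parameters are hypothetical choices for illustration, not the paper's exact adaptive-weighting scheme.

```python
import math

def adaptive_weights(uncertainties, tau=1.0, threshold=None):
    """Map per-pseudo-label uncertainties to loss weights: confident
    pseudo-labels get weight near 1, uncertain ones decay toward 0.
    Optionally reject (weight 0) labels whose uncertainty exceeds a
    threshold. Illustrative form, not the paper's exact scheme."""
    ws = []
    for u in uncertainties:
        if threshold is not None and u > threshold:
            ws.append(0.0)            # detected as a bad pseudo-label
        else:
            ws.append(math.exp(-u / tau))
    return ws

def weighted_loss(losses, weights):
    """Weighted average of per-sample losses; bad pseudo-labels with
    weight 0 contribute nothing and so cannot bias the model."""
    total_w = sum(weights)
    if total_w == 0:
        return 0.0
    return sum(l * w for l, w in zip(losses, weights)) / total_w
```

A pseudo-label with zero uncertainty gets full weight 1.0, and anything past the rejection threshold is excluded entirely, which is the "prevent bad pseudo-labels from biasing the model" behavior the abstract describes.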
iclr_2022_6Qvjzr2VGLl
Towards Generative Latent Variable Models for Speech
While stochastic latent variable models (LVMs) now achieve state-of-the-art performance on natural image generation, they are still inferior to deterministic models on speech. On natural images, these models have been parameterised with very deep hierarchies of latent variables, but research shows that these model constructs are not directly applicable to sequence data. In this paper, we benchmark popular temporal LVMs against state-of-the-art deterministic models on speech. We report the likelihood, which is a much used metric in the image domain but rarely, and often incomparably, reported for speech models. This is prerequisite work needed for the research community to improve LVMs on speech. We adapt Clockwork VAE, a state-of-the-art temporal LVM for video generation, to the speech domain, similar to how WaveNet adapted PixelCNN from images to speech. Despite being autoregressive only in latent space, we find that the Clockwork VAE outperforms previous LVMs and reduces the gap to deterministic models by using a hierarchy of latent variables.
Reject
This paper presents the application of the hierarchical latent variable model CW-VAE, originally developed in the vision community, to the speech domain with meaningful modifications, and provides an empirical analysis of the likelihood as well as discussions of likelihood metrics. The reviewers tend to agree that it is a promising direction to study hierarchically structured LVMs for speech, and the introduction/adaptation of CW-VAE is useful. There was some discussion of the suitability of the likelihood evaluation, and it appears a fair comparison with WaveNet should take place at s=1 (single sample), a resolution level the proposed method does not yet scale up to. On the other hand, an important potential use case of the model is representation learning for speech, as it is a common belief that at a suitable resolution the features should discover units like phonemes. But I find the current evaluation of latent representations by LDA and KNN to be somewhat limited, and in fact there is no comparison with suitable baselines in Sec 3.2 in terms of feature quality. A task closer to modern speech recognition (e.g., with end-to-end models) would be preferred.
train
[ "4AcIJKjbruK", "yqb52jDwflu", "3LRb6s9KAuia", "oAJyYTp4v2n", "5dYujgKO3wB8", "aLBD7aR-btB", "YSuXmieecCL", "iBgM_b7ngvx", "nF9Ya4t9wwF" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " With this comment we supply a revised version of our paper to reflect the discussion in the comments below. Besides the changes listed in the comments, we have made the following additions to the experimental work:\n- Improved previously reported VRNN and SRNN baselines for $s=1$ on TIMIT in table 1.\n- Added Wav...
[ -1, -1, -1, -1, -1, -1, 3, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "iclr_2022_6Qvjzr2VGLl", "YSuXmieecCL", "nF9Ya4t9wwF", "YSuXmieecCL", "iBgM_b7ngvx", "iclr_2022_6Qvjzr2VGLl", "iclr_2022_6Qvjzr2VGLl", "iclr_2022_6Qvjzr2VGLl", "iclr_2022_6Qvjzr2VGLl" ]
iclr_2022_CNY9h3uyfiO
Reward Shifting for Optimistic Exploration and Conservative Exploitation
In this work, we study the simple yet universally applicable case of reward shaping, the linear transformation, in value-based Deep Reinforcement Learning. We show that reward shifting, as the simplest linear reward transformation, is equivalent to changing initialization of the $Q$-function in function approximation. Based on such an equivalence, we bring the key insight that a positive reward shifting leads to conservative exploitation, while a negative reward shifting leads to curiosity-driven exploration. In this case, a conservative exploitation improves offline RL value estimation, and the optimistic value estimation benefits the exploration of online RL. We verify our insight on a range of tasks: (1) In offline RL, the conservative exploitation leads to improved learning performance based on off-the-shelf algorithms; (2) In online continuous control, multiple value functions with different shifting constants can be used to trade-off between exploration and exploitation thus improving learning efficiency; (3) In online RL with discrete action space, a negative reward shifting brings an improvement over the previous curiosity-based exploration method.
Reject
I thank the authors for their submission and active participation in the discussions. The majority of reviewers have concerns with this paper, in particular regarding the motivation of the method [dgHr], clarity [Mgm9], and theoretical support [4ENc]. I side with reviewers 4ENc, dgHr and fFaW, and recommend rejection of this paper. I want to encourage the authors to use the feedback from the reviewers to improve their paper.
train
[ "iE4x8Ox3M82", "sP2ZWHngZ7j", "UP2HMwx8C6S", "JVNQDXqUQCL", "-n8E6_3r7DkV", "8MDayzD13C", "I2ghGhNed_L3", "PJ33fBb7KH6", "ddJzx0knRQZ", "8FPiSChes7m", "urQRzGjOUaS", "a3Z2vKT0_jH" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper studies the effectiveness of reward shifting in value-based deep reinforcement learning. Particularly, it points out that (1) a positive reward shifting is equivalent to pessimistic initialization of Q values, thus, leading to conservative exploitation; and (2) a negative reward shifting equals to optim...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 3 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "iclr_2022_CNY9h3uyfiO", "-n8E6_3r7DkV", "JVNQDXqUQCL", "PJ33fBb7KH6", "8FPiSChes7m", "a3Z2vKT0_jH", "urQRzGjOUaS", "iE4x8Ox3M82", "-n8E6_3r7DkV", "iclr_2022_CNY9h3uyfiO", "iclr_2022_CNY9h3uyfiO", "iclr_2022_CNY9h3uyfiO" ]
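The abstract's central claim, that shifting every reward by a constant is equivalent to changing the Q-function's initialization, is exact in the tabular case: value iteration on rewards r + c converges to Q* + c/(1 - gamma), i.e. a uniformly shifted Q-table. The toy demonstration below uses a hand-made two-state MDP, not the paper's deep-RL setting.

```python
def value_iteration(P, R, gamma=0.9, shift=0.0, iters=500):
    """Deterministic tabular value iteration.
    P[s][a] -> next state, R[s][a] -> reward; `shift` is the constant c
    added to every reward. Returns the (near-)fixed-point Q table."""
    nS, nA = len(P), len(P[0])
    Q = [[0.0] * nA for _ in range(nS)]
    for _ in range(iters):
        Q = [[R[s][a] + shift + gamma * max(Q[P[s][a]])
              for a in range(nA)] for s in range(nS)]
    return Q

# Toy two-state MDP: action 0 goes to state 0, action 1 goes to state 1.
P = [[0, 1], [0, 1]]
R = [[0.0, 1.0], [2.0, 0.0]]

gamma, c = 0.9, 5.0
Q_base = value_iteration(P, R, gamma)
Q_shift = value_iteration(P, R, gamma, shift=c)
offset = c / (1 - gamma)  # predicted uniform offset, here 50.0
```

Since the shifted Q-table equals the unshifted one plus a constant, the greedy policy is unchanged; what changes is how an approximate, partially trained Q-function behaves, which is where the optimism/conservatism effect the abstract describes comes from.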
iclr_2022_-9ffJ9NQmal
VICE: Variational Inference for Concept Embeddings
In this paper we introduce Variational Inference for Concept Embeddings (VICE), a novel method for learning object concept embeddings from human behavior in an odd-one-out task. We use variational inference to obtain a sparse, non-negative solution, with uncertainty information about each embedding value. We leverage this information in a statistical procedure for selecting the dimensionality of the model, based on hypothesis-testing over a validation set. VICE performs as well or better than previous methods on a variety of criteria: accuracy of predicting human behavior in an odd-one-out task, calibration to (empirical) human choice probabilities, reproducibility of object representations across different random initializations, and superior performance on small datasets. The latter is particularly important in cognitive science, where data collection is expensive. Finally, VICE yields highly interpretable object representations, allowing humans to describe the characteristics being represented by each latent dimension.
Reject
The authors propose Variational Inference for Concept Embeddings (VICE), a method to learn representations such that an odd object can be detected given a triplet (i.e. the odd-one-out task). The authors build on Sparse Positive object Similarity Embedding (SPoSE) which learns sparse, non-negative embeddings for images by placing a zero-mean Laplace prior. Claimed contributions include replacing it with a spike-and-slab Gaussian mixture prior, and a principled approach to choosing the subset of the dimensions of the learned embeddings. The empirical results show improvements over the SPoSE baseline. The reviewers appreciated the empirical improvements over SPoSE and accept that a more informative prior might lead to improved results. However, the **motivation, novelty and significance** of the proposed method doesn’t meet the acceptance criteria for ICLR. After the rebuttal and the discussion phase the reviewers felt that the work necessitates a major revision (notwithstanding the remaining issue with limited novelty), and raised the following as the main improvement points: - Clarifying the motivation and significance. - Stronger empirical validation and generalization beyond the THINGS dataset. - Address the discrepancy with analyzing GMM priors, but using unimodal Gaussians in the implementation. - Comparing the chosen prior to other prior distributions and justifying the design choices.
train
[ "XAlc3XVkHFd", "eE5Bj8Xgbt6", "O7ubvBuJnE6", "UPyyeWV1OAm", "xHG9ZbvUMb", "OCnxwjQjndv", "q-526vN1WHx", "YepZ4di_Hs", "LYAd797m2v3", "DKRK7jP2FpP", "X2tzehRLAzh", "IwsEhcGh4xW", "pLNLazw1iCw", "eFHLZTiRGyW" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the detailed response. While the revision certainly improves the quality, I still see the unclear motivation that why improvement over SPoSE is scientifically important to the whole community. If the work is going to prove the effectiveness beyond a single benchmark, I would expect multiple benchmarks....
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, 5, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 4 ]
[ "OCnxwjQjndv", "iclr_2022_-9ffJ9NQmal", "X2tzehRLAzh", "xHG9ZbvUMb", "eFHLZTiRGyW", "q-526vN1WHx", "pLNLazw1iCw", "LYAd797m2v3", "IwsEhcGh4xW", "O7ubvBuJnE6", "iclr_2022_-9ffJ9NQmal", "iclr_2022_-9ffJ9NQmal", "iclr_2022_-9ffJ9NQmal", "iclr_2022_-9ffJ9NQmal" ]
iclr_2022_-uPIaaZdMLF
Attentional meta-learners for few-shot polythetic classification
Polythetic classifications, based on shared patterns of features that need neither be universal nor constant among members of a class, are common in the natural world and greatly outnumber monothetic classifications over a set of features. We show that threshold meta-learners, such as Prototypical Networks, require an embedding dimension that is exponential in the number of features to emulate these functions. In contrast, attentional classifiers, such as Matching Networks, are polythetic by default and able to solve these problems with a linear embedding dimension. However, we find that in the presence of task-irrelevant features, inherent to meta-learning problems, attentional models are susceptible to misclassification. To address this challenge, we propose a self-attention feature-selection mechanism that adaptively dilutes non-discriminative features. We demonstrate the effectiveness of our approach in meta-learning Boolean functions, and synthetic and real-world few-shot learning tasks.
Reject
This paper analyzes problems of existing threshold meta-learners and attentional meta-learners for few-shot learning in polythetic classifications. Threshold meta-learners such as prototypical networks require an exponential number of embedding dimensions, and attentional meta-learners are susceptible to misclassification. The authors proposed a simple yet effective method to address these problems, and demonstrated its effectiveness in their experiments. This paper discusses meta-learning from a very unique perspective, as commented by a reviewer, and clearly explains problems of widely-used meta-learning methods. However, this paper focuses on prototypical networks and matching networks even though many other meta-learning methods have been proposed. Some existing methods seem not to have the problems of prototypical networks and/or matching networks. In addition, the practical benefits of the proposed approach are not well demonstrated. Although the additional experiments in the author response addressed some concerns of the reviewers, they are not enough to demonstrate the effectiveness of the proposed method.
train
[ "Wmir7yL0ROR", "6b-MsBLhgOw", "Hh-eaecbZdA", "qN_kajIiqxh", "pYrVe1dM0o", "jsG5eirqtw-", "BxfC-NGWmGx", "v3JQCIr0ce", "FrBqF_z1-VN", "DmEURC4jN9p", "ycjGii5bJXA", "O7-8fQ_JhIF", "KobyDByNFjc", "IPDmXGe33vq", "rbBn5cOA-HL", "Eerg6Fxp38n", "iF4ZPGSGvFn" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper first considers the limitations of threshold and attentional classifiers. They proposed an attention-based method for feature selection to address the problems of threshold classifiers and attentional classifiers. The experiments on several synthetic and real-world few-shot learning tasks seem good. ...
[ 6, -1, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 4, -1, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2022_-uPIaaZdMLF", "IPDmXGe33vq", "iclr_2022_-uPIaaZdMLF", "iclr_2022_-uPIaaZdMLF", "DmEURC4jN9p", "O7-8fQ_JhIF", "iF4ZPGSGvFn", "Wmir7yL0ROR", "iclr_2022_-uPIaaZdMLF", "ycjGii5bJXA", "Eerg6Fxp38n", "rbBn5cOA-HL", "iF4ZPGSGvFn", "Wmir7yL0ROR", "qN_kajIiqxh", "Hh-eaecbZdA", "icl...
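The attentional classifiers discussed above (e.g., Matching Networks) predict by soft attention over the whole support set rather than by comparing to a single per-class prototype, which is what makes them polythetic by default. A minimal sketch follows, using negative squared distance as the attention score; the score function and temperature are illustrative choices, not the paper's embedding-based similarity.

```python
import math

def attention_classify(query, support_x, support_y, temp=1.0):
    """Matching-Networks-style prediction: softmax attention over support
    points (negative squared distance as the score), with class
    probabilities given by the total attention mass on each label."""
    scores = [-sum((q - s) ** 2 for q, s in zip(query, x)) / temp
              for x in support_x]
    m = max(scores)                       # stable softmax
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    attn = [e / z for e in exps]
    labels = sorted(set(support_y))
    probs = {c: sum(a for a, y in zip(attn, support_y) if y == c)
             for c in labels}
    return max(probs, key=probs.get), probs
```

Because every support point contributes, two disjoint clusters of the same class can both pull a query toward that label, something a single-threshold prototype cannot do; the flip side, as the abstract notes, is sensitivity to task-irrelevant features inflating the distances.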
iclr_2022_ErsRrojuPzw
Fast and Efficient Once-For-All Networks for Diverse Hardware Deployment
Convolutional neural networks are widely used in practical applications in many diverse environments. Each different environment requires a different optimized network to maximize accuracy under its unique hardware constraints and latency requirements. To find models for this varied array of potential deployment targets, once-for-all (OFA) was introduced as a way to simultaneously co-train many models at once, while keeping the total training cost constant. However, the total training cost is very high, requiring up to 1200 GPU-hours. Compound OFA (compOFA) decreased the training cost of OFA by 2$\times$ by coupling model dimensions to reduce the search space of possible models by orders of magnitude, while also simplifying the training procedure. In this work, we continue the effort to reduce the training cost of OFA methods. While both OFA and compOFA use a pre-trained teacher network, we propose an in-place knowledge distillation procedure to train the super-network simultaneously with the sub-networks. Within this in-place distillation framework, we develop an upper-attentive sample technique that reduces the training cost per epoch while maintaining accuracy. Through experiments on ImageNet, we demonstrate that we can achieve a $2\times$ - $3\times$ ($1.5\times$ - $1.8\times$) reduction in training time compared to the state-of-the-art OFA and compOFA, respectively, without loss of optimality.
Reject
This paper presents a method to reduce the training cost of once-for-all networks. Overall this paper is well written and easy to follow, and the experimental section shows a clear reduction of training time on the examples used. However, the reviewers point out that the experimental section could benefit from adding more design spaces and from a better explanation of the results. More importantly, three out of four reviewers agree that the novelty of this work is too low for the submission to be accepted, with the fourth reviewer only giving a score of 6 (and also noting the lack of novelty). I therefore recommend rejection for this paper.
train
[ "x8yexWIWoz2", "6SnDpD7ibq2", "XTK_LuhXmeT", "7pa2Ohe2zB", "jeRpxuoPTxx", "yCt9-DLx0FA", "XmFjijGQ2dx", "Pj72hxNKoiH", "hwutAuaS7k" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposed a method to train a once-for-all network, where one network can run at different resource constraints. The method is based on previous methods, and the author further improved the training speed by around 1.5x - 1.8x without loss of performance. The method is evaluated on ImageNet classificatio...
[ 3, -1, -1, -1, -1, -1, 6, 5, 5 ]
[ 4, -1, -1, -1, -1, -1, 2, 4, 4 ]
[ "iclr_2022_ErsRrojuPzw", "x8yexWIWoz2", "hwutAuaS7k", "Pj72hxNKoiH", "XmFjijGQ2dx", "iclr_2022_ErsRrojuPzw", "iclr_2022_ErsRrojuPzw", "iclr_2022_ErsRrojuPzw", "iclr_2022_ErsRrojuPzw" ]
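The in-place distillation idea from the OFA/compOFA abstract above, where the super-network acts as its own teacher for the sub-networks instead of a separate pre-trained model, can be sketched as a combined loss. This is only an illustration under common knowledge-distillation conventions; the function names and the soft/hard mixing weight `alpha` are assumptions, not the paper's exact formulation:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def inplace_distillation_loss(super_logits, sub_logits, labels, alpha=0.5):
    """Train a sub-network against the super-network's soft predictions
    (the in-place teacher) mixed with the hard ground-truth labels."""
    p_teacher = softmax(super_logits)   # soft targets; no external teacher needed
    p_student = softmax(sub_logits)
    n = len(labels)
    ce_hard = -np.log(p_student[np.arange(n), labels]).mean()
    ce_soft = -(p_teacher * np.log(p_student)).sum(axis=-1).mean()
    return alpha * ce_hard + (1 - alpha) * ce_soft
```

A sub-network whose logits agree with a confident, correct super-network incurs a low loss; disagreeing logits are penalized through both terms.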
iclr_2022_zeGpMIt6Pfq
BioLCNet: Reward-modulated Locally Connected Spiking Neural Networks
Recent studies have shown that convolutional neural networks (CNNs) are not the only feasible solution for image classification. Furthermore, weight sharing and backpropagation used in CNNs do not correspond to the mechanisms present in the primate visual system. To propose a more biologically plausible solution, we designed a locally connected spiking neural network (SNN) trained using spike-timing-dependent plasticity (STDP) and its reward-modulated variant (R-STDP) learning rules. The use of spiking neurons and local connections along with reinforcement learning (RL) led us to the nomenclature BioLCNet for our proposed architecture. Our network consists of a rate-coded input layer followed by a locally connected hidden layer and a decoding output layer. A spike population-based voting scheme is adopted for decoding in the output layer. We used the MNIST dataset to obtain image classification accuracy and to assess the robustness of our rewarding system to varying target responses.
Reject
This paper presents a locally connected spiking neural network model trained to do classification of MNIST using spike-timing-dependent plasticity (STDP) and reward-modulated STDP. The authors show that this model can learn to classify MNIST images (though not at a very high accuracy) and that it can engage in classical conditioning. The reviews were initially all in the reject range. The common theme in the reviews was concerns about the weak and limited nature of the results. After a good amount of author response and reviewer replies to the authors, one reviewer increased their score to a borderline accept, but the other reviewers did not change their scores, producing scores of 3,3, and 6. Given these scores, and the reviewers' remaining concerns, a reject decision was reached.
train
[ "Hj12ozaRSWS", "9N8DOCjfFAe", "3T0h_ccXyoQ", "inc-r7fG5Yr", "qa2JJrYDc0R", "nv6k5Uch6OG", "QAd3_JkTdhJ", "NrfuEfvHV7R", "sepg7BDTWsa", "N5qHlNdlir0", "iBhvOdLkInX" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for your feedback. \nRegarding your suggestion about comparing the STDP-based results with the RSTDP-based and the mixed one, we totally agree with you, thanks for suggesting, and we'll consider it in our future research. In the latter case, I think it was a poor choice of words in my comment that cause...
[ -1, -1, -1, -1, 6, -1, -1, -1, -1, 3, 3 ]
[ -1, -1, -1, -1, 3, -1, -1, -1, -1, 4, 4 ]
[ "9N8DOCjfFAe", "NrfuEfvHV7R", "inc-r7fG5Yr", "sepg7BDTWsa", "iclr_2022_zeGpMIt6Pfq", "QAd3_JkTdhJ", "qa2JJrYDc0R", "iBhvOdLkInX", "N5qHlNdlir0", "iclr_2022_zeGpMIt6Pfq", "iclr_2022_zeGpMIt6Pfq" ]
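The reward-modulated STDP rule at the heart of the BioLCNet abstract above can be sketched in its textbook form: an STDP-style eligibility term gated by a scalar reward. The variable names and trace handling here are illustrative assumptions; the paper's exact trace dynamics may differ:

```python
def rstdp_step(w, pre_trace, post_trace, pre_spike, post_spike, reward,
               lr=0.01, a_plus=1.0, a_minus=1.0):
    """One R-STDP update: potentiate on a postsynaptic spike in proportion
    to the presynaptic trace, depress on a presynaptic spike in proportion
    to the postsynaptic trace, and scale the whole change by the reward."""
    eligibility = a_plus * pre_trace * post_spike - a_minus * post_trace * pre_spike
    return w + lr * reward * eligibility
```

With reward fixed at +1 this reduces to plain STDP; a negative reward flips potentiation into depression, which is what lets the rewarding system steer the network toward target responses.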
iclr_2022_UjynxfqnGWG
Inductive Biases and Variable Creation in Self-Attention Mechanisms
Self-attention, an architectural motif designed to model long-range interactions in sequential data, has driven numerous recent breakthroughs in natural language processing and beyond. This work provides a theoretical analysis of the inductive biases of self-attention modules, where our focus is to rigorously establish which functions and long-range dependencies self-attention blocks prefer to represent. We show that bounded-norm Transformer layers create sparse variables: they can represent sparse Lipschitz functions of the input sequence, with sample complexity scaling only logarithmically with the context length. We propose new experimental protocols to support the analysis and guide the practice of training Transformers, built around the rich theory of learning sparse Boolean functions.
Reject
This paper presents a theoretical analysis of self-attention modules, using Lipschitz conditions. It suffers from two main weaknesses: the clarity of the presentation, and the weak experimental section.
train
[ "2jLKnF23dvK", "zfAEh7BG13X", "mLkz8uNzmo9", "jbeJIe_x9k", "n1_aMz2CZRd", "qmCRvWydiT4", "sou5_m3xCtQ", "wOUR7C7b_dm", "apDyPhTvMDZ", "RJL5UKo1rY0", "ZzeNBJYzQ4o", "2I6AsqACjah" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the thoughtful response.\n\n- Note that the proposed \"information + distractor\" setup is, in fact, very close to the setup of our synthetic experiments-- the only difference is that the sparse \"copy\" operation is replaced with a Boolean function (which can be thought of as a hash of the relevant bi...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 5, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 4 ]
[ "zfAEh7BG13X", "sou5_m3xCtQ", "qmCRvWydiT4", "iclr_2022_UjynxfqnGWG", "2I6AsqACjah", "apDyPhTvMDZ", "ZzeNBJYzQ4o", "RJL5UKo1rY0", "iclr_2022_UjynxfqnGWG", "iclr_2022_UjynxfqnGWG", "iclr_2022_UjynxfqnGWG", "iclr_2022_UjynxfqnGWG" ]
iclr_2022_E0zOKxQsZhN
Recurrent Model-Free RL is a Strong Baseline for Many POMDPs
Many problems in RL, such as meta RL, robust RL, and generalization in RL can be cast as POMDPs. In theory, simply augmenting model-free RL with memory, such as recurrent neural networks, provides a general approach to solving all types of POMDPs. However, prior work has found that such recurrent model-free RL methods tend to perform worse than more specialized algorithms that are designed for specific types of POMDPs. This paper revisits this claim. We find that a careful architecture and hyperparameter decisions yield a recurrent model-free implementation that performs on par with (and occasionally substantially better than) more sophisticated recent techniques in their respective domains. We also release a simple and efficient implementation of recurrent model-free RL for future work to use as a baseline for POMDPs.
Reject
This paper recognizes that several common sub-problems studied in RL, such as meta RL and generalization in RL, can be cast as POMDPs. Using this observation, the authors evaluate how a straightforward approach to deal with POMDPs---using a recurrent neural network---compares to more specialized approaches. The reviewers agree that the research question studied in this paper is very interesting. However, after careful deliberation, I share the view of reviewer 2WFY that the results insufficiently support the claims made in the paper. In particular, I view the main claim from the abstract "We find that a careful architecture and hyperparameter decisions yield a recurrent model-free implementation that performs on par with (and occasionally substantially better than) more sophisticated recent techniques in their respective domains." as insufficiently supported. The main issue with the experiments is that only a small number of simple domains are considered. As Luisa points out in the public comments, variBAD dominates recurrent baselines when more complex tasks are considered, while on simpler domains such as the Cheetah-Vel domain considered in this paper, it performs similar to a recurrent model-free baseline. In the rebuttal the authors have added a more complex domain to address this, showing that a recurrent model-free baseline outperforms an off-policy version of variBAD. However, I view these results as inconclusive, as only a single complex domain is considered and they appear to contradict previous results with on-policy variBAD. For these reasons, I don't think the work in its current form is ready for publication at ICLR. But I want to encourage the authors to work out this direction further. In particular, adding more complex domains and also considering the on-policy variBAD method, can make this work stronger.
train
[ "kJraWfaXuzh", "ny_q-3eq5w_", "WPsI5VHhSr-", "A7RnwXCuuTH", "6f_SahjhGz", "ZQuJBKr0QFl", "S5HQW4JMViy", "v6KykwNsZHN", "S-GEY7TEcc5", "5x3_57dS5zV", "yKZnoMC256n", "cGZrbj9RbjV", "YTPBXPOuwH_", "lSOsWyInYDv", "LM5Y2Fe4Y6L", "UUj2arR8vf3", "mmdNWnJ1uw", "rquJRmzFoxQ", "v0x0ABroh0_...
[ "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_...
[ " Dear Reviewer,\n\nThank you for clarifying the concerns. While we believe that the current claims in the paper are already quite narrow (e.g., we don't make any claims about sparse rewards or long horizons, only about performance on standard benchmarks) and supported by substantial empirical evidence (17 environm...
[ -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "S-GEY7TEcc5", "WPsI5VHhSr-", "cGZrbj9RbjV", "ZQuJBKr0QFl", "iclr_2022_E0zOKxQsZhN", "S5HQW4JMViy", "5x3_57dS5zV", "5x3_57dS5zV", "LM5Y2Fe4Y6L", "yKZnoMC256n", "UUj2arR8vf3", "mmdNWnJ1uw", "LM5Y2Fe4Y6L", "v0x0ABroh0_", "3OAvZjnWGvo", "6f_SahjhGz", "28TKrkSsEDm", "iclr_2022_E0zOKxQs...
iclr_2022_HiHWMiLP035
E$^2$CM: Early Exit via Class Means for Efficient Supervised and Unsupervised Learning
State-of-the-art neural networks with early exit mechanisms often need considerable amount of training and fine-tuning to achieve good performance with low computational cost. We propose a novel early exit technique, E$^2$CM, based on the class means of samples. Unlike most existing schemes, E$^2$CM does not require gradient-based training of internal classifiers. This makes it particularly useful for neural network training in low-power devices, as in wireless edge networks. In particular, given a fixed training time budget, E$^2$CM achieves higher accuracy as compared to existing early exit mechanisms. Moreover, if there are no limitations on the training time budget, E$^2$CM can be combined with an existing early exit scheme to boost the latter's performance, achieving a better trade-off between computational cost and network accuracy. We also show that E$^2$CM can be used to decrease the computational cost in unsupervised learning tasks.
Reject
This paper proposes an early exit method that uses class means of samples, is gradient-free, and is aimed at low-compute settings such as mobile and edge devices. The idea is novel in this setting (though class means have been used in other settings such as few-shot classification), and empirical results show that it works well. There are two main reviewer concerns that were not addressed by the author rebuttal: first, the applicability of the method in the real world due to its memory requirements, and second, experiments that show performance on more realistic datasets such as ImageNet. The reason the latter is required is the promise of mobile application for the proposed method. I suggest the authors explain the first concern more and add the requested experiments in the upcoming version of the paper.
test
[ "W-wsgz3Xioe", "EM9RHbeCim7", "B8AHFWkerMq", "yYXzxuYYIIM", "aSOr3Uk6mNK", "LxIEOxEJxO", "s-uue5D-QpL", "GxLMsOrZR4o", "WnuXxRkRon4", "4NqPuJDGw5o", "93eHvIaFNJh", "-orFhdrr6s8" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to thank the reviewer for their comments.\n\nThe revised version of the paper contains experiments for CIFAR-100, which is a 100-class example. The example shows that the low computational complexity or other benefits of E2CM remain. We have not yet performed experiments on a 1000-class dataset, but...
[ -1, -1, -1, -1, 5, -1, -1, -1, -1, 3, 5, 3 ]
[ -1, -1, -1, -1, 3, -1, -1, -1, -1, 5, 4, 5 ]
[ "B8AHFWkerMq", "yYXzxuYYIIM", "WnuXxRkRon4", "LxIEOxEJxO", "iclr_2022_HiHWMiLP035", "-orFhdrr6s8", "93eHvIaFNJh", "aSOr3Uk6mNK", "4NqPuJDGw5o", "iclr_2022_HiHWMiLP035", "iclr_2022_HiHWMiLP035", "iclr_2022_HiHWMiLP035" ]
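The gradient-free exit rule described in the E$^2$CM abstract above can be sketched with plain nearest-class-mean classification at an intermediate layer. The margin-based exit condition and function names below are illustrative assumptions, not the paper's exact decision rule:

```python
import numpy as np

def fit_class_means(features, labels, num_classes):
    """Compute the mean feature vector of each class at one exit point.
    No gradient-based training of an internal classifier is needed."""
    return np.stack([features[labels == c].mean(axis=0) for c in range(num_classes)])

def early_exit_predict(feature, class_means, threshold):
    """Exit with the nearest class mean if it is confidently closer than
    the runner-up; otherwise return None, meaning the sample should
    continue through the next layers of the network."""
    dists = np.linalg.norm(class_means - feature, axis=1)
    order = np.argsort(dists)
    if dists[order[1]] - dists[order[0]] > threshold:
        return int(order[0])
    return None
```

Easy samples exit at shallow layers (cheap inference), while ambiguous ones pay the full forward-pass cost, which is where the computation savings come from.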
iclr_2022_xKZ4K0lTj_
Hierarchical Few-Shot Imitation with Skill Transition Models
A desirable property of autonomous agents is the ability to both solve long-horizon problems and generalize to unseen tasks. Recent advances in data-driven skill learning have shown that extracting behavioral priors from offline data can enable agents to solve challenging long-horizon tasks with reinforcement learning. However, generalization to tasks unseen during behavioral prior training remains an outstanding challenge. To this end, we present Few-shot Imitation with Skill Transition Models (FIST), an algorithm that extracts skills from offline data and utilizes them to generalize to unseen tasks given a few downstream demonstrations. FIST learns an inverse skill dynamics model, a distance function, and utilizes a semi-parametric approach for imitation. We show that FIST is capable of generalizing to new tasks and substantially outperforms prior baselines in navigation experiments requiring traversing unseen parts of a large maze and 7-DoF robotic arm experiments requiring manipulating previously unseen objects in a kitchen.
Accept (Poster)
The reviewers agree that addressing long-horizon tasks with offline learning followed by fine-tuning from a few demonstrations is an interesting and relevant topic. The technical ideas, learning a relevance metric to select relevant offline data and learning an inverse skill dynamics model, are sound. The experimental results are convincing, even if success rates are sometimes lower than expected. All reviewers recommend acceptance of the paper.
train
[ "zXXSWqKUfE", "RZEaiRVTOCu", "JDCcjAXfNEs", "ZLPjzcZQeUj" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper presents an approach for skill learning and fine-tuning. The main idea of the manuscript is to combine offline skill learning with few online demonstrations to provide efficient learning experience in out of distribution (OOD) tasks. There are two main novelties in the paper:\n\n- Learning of an inverse ...
[ 6, 8, 6, 6 ]
[ 5, 5, 4, 3 ]
[ "iclr_2022_xKZ4K0lTj_", "iclr_2022_xKZ4K0lTj_", "iclr_2022_xKZ4K0lTj_", "iclr_2022_xKZ4K0lTj_" ]
iclr_2022_on54StZqGQ_
Degradation Attacks on Certifiably Robust Neural Networks
Certifiably robust neural networks employ provable run-time defenses against adversarial examples by checking if the model is locally robust at the input under evaluation. We show through examples and experiments that these defenses are inherently over-cautious. Specifically, they flag inputs for which local robustness checks fail, but yet that are not adversarial; i.e., they are classified consistently with all valid inputs within a distance of $\epsilon$. As a result, while a norm-bounded adversary cannot change the classification of an input, it can use norm-bounded changes to degrade the utility of certifiably robust networks by forcing them to reject otherwise correctly classifiable inputs. We empirically demonstrate the efficacy of such attacks against state-of-the-art certifiable defenses.
Reject
The work focuses on the observation that, given a certified epsilon-robust model and a certified clean input x, many inputs within the epsilon ball around x are themselves not epsilon-certifiable although they are correctly classified. The authors argue that an adversary can exploit this property to produce inputs which are correctly classified by the model yet are not certifiably robust. Reviewers agreed that the paper was overall well written, the methods were clear and overall evaluated thoroughly, and many felt that the main idea was interesting. There were some concerns regarding the significance of the contribution, the primary observation itself is arguably novel but somewhat obvious, and the proposed algorithm for finding non-certifiable points isn't a significant contribution when standard techniques like PGD are sufficient. Much of the reviewer discussion concerned whether or not the proposed attack made sense as a threat model. It is the AC's opinion that this discussion did not reach any meaningful conclusions. It is important to remember that the lp threat model is intended as an abstract toy game so that a formal theory of neural network certification can be developed under idealized settings. It is not intended to model any realistic security scenarios, and even more generalized notions of "imperceptible" or "subtle" attacks aren't realistic when for the bulk of applied settings real adversaries are not restricted to small perturbations in the first place [1]. The example provided by the authors regarding small perturbations of a stop sign isn't a relevant example when the adversary has more effective options, e.g. knocking over stop signs [1, Figure 3]. For the sake of discussion, one could consider whether or not a degradation attack would make sense under more general threat models such as content-preserving perturbations. 
An example discussed in [1] concerns adversaries uploading copyrighted content to public streaming services—this attacker defender game is being actively played in the wild where defenders produce statistical models which attempt to flag content as semantically matching existing copyrighted content in a private database, while attackers make large semantically-preserving modifications in order to evade statistical detection. An example attack would be cropping 20% of the boundary pixels of a movie and replacing the cropped portion with arbitrary adversarially constructed backgrounds. Epsilon perturbations are possible, but are almost a measure 0 subset of the full attacker action space. Suppose in the far future neural network certification advanced to the point where we could certify that a classifier was robust to all possible content-preserving perturbations of a specific movie. In this case the defender would be using the certification method on their private database of copyrighted content, they would not be running the certifier on any content uploaded by users. If a movie in the private database is certified, then we already know that an attacker cannot successfully upload an adversarial version of it, it would be unnecessary to certify whether or not user uploaded content could be further perturbed in a way to become adversarial. Perhaps degradation attacks could be possible as a training poisoning attack, but this seems a bit far-fetched when more traditional training poisoning attacks would be preferred. Given this, at least in this example the AC does not see how a degradation attack would make sense as a threat model. Given that the primary contribution of this work is a novel threat model for ML security, it is crucial that the authors rewrite their work to make more realistic assumptions of the capabilities of realistic adversaries. Starting with some of the examples discussed in [1] may be useful to the authors. 
Although the example of adversarial attacks on copyright detection classifiers doesn't seem to fit the degradation attack threat model, perhaps other scenarios would.

[1] Gilmer et al., Motivating the Rules of the Game for Adversarial Example Research, https://arxiv.org/pdf/1807.06732.pdf
train
[ "BzbJsZaacUG", "AB5kcZPpMU6", "cBQGBnttbOM", "Axko-n5wN6", "RB9Huly4JEd", "RcM-KhVipyH", "XOz-Mt4pYra", "IYaNiWf4Rws", "U2gyHjVhEEz", "QGKlpPF_8wl", "9DtxjubjE8", "XqTZAJrBDiW", "2ZwScJf6jSB", "-CsoL4X6ham", "bOMFdhR9227", "RFWmhJ8H_Rt" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author" ]
[ " We thank the reviewer for pointing out related work [A-G]. We will discuss all of them in a revised version.\n\nWith respect to SOTA techniques, we would like to point out that our proposed attack would work even for *\"perfectly robust\"* networks. By this we mean the following. Suppose we use [F] or [G] to trai...
[ -1, 5, 6, 6, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 5, 5, 3, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "AB5kcZPpMU6", "iclr_2022_on54StZqGQ_", "iclr_2022_on54StZqGQ_", "iclr_2022_on54StZqGQ_", "IYaNiWf4Rws", "iclr_2022_on54StZqGQ_", "-CsoL4X6ham", "iclr_2022_on54StZqGQ_", "QGKlpPF_8wl", "9DtxjubjE8", "Axko-n5wN6", "AB5kcZPpMU6", "cBQGBnttbOM", "bOMFdhR9227", "RcM-KhVipyH", "iclr_2022_on...
iclr_2022_hq7vLjZTJPk
A Communication-Efficient Distributed Gradient Clipping Algorithm for Training Deep Neural Networks
In distributed training of deep neural networks or Federated Learning (FL), people usually run Stochastic Gradient Descent (SGD) or its variants on each machine and communicate with other machines periodically. However, SGD might converge slowly in training some deep neural networks (e.g., RNN, LSTM) because of the exploding gradient issue. Gradient clipping is usually employed to address this issue in the single-machine setting, but exploring this technique in the FL setting is still in its infancy: it remains mysterious whether the gradient clipping scheme can take advantage of multiple machines to enjoy parallel speedup in the FL setting. The main technical difficulty lies in dealing with a nonconvex loss function, a non-Lipschitz-continuous gradient, and skipping communication rounds simultaneously. In this paper, we explore a relaxed-smoothness assumption on the loss landscape, which LSTMs were shown to satisfy in previous works, and design a communication-efficient gradient clipping algorithm. This algorithm can be run on multiple machines, where each machine employs a gradient clipping scheme and communicates with other machines after multiple steps of gradient-based updates. Our algorithm is proved to have $O\left(\frac{1}{N\epsilon^4}\right)$ iteration complexity for finding an $\epsilon$-stationary point, where $N$ is the number of machines. This indicates that our algorithm enjoys linear speedup. Our experiments on several benchmark datasets demonstrate that our algorithm indeed exhibits fast convergence speed in practice and validate our theory.
Reject
This paper made a solid contribution studying the convergence rate of a simple distributed gradient clipping algorithm. The proposed algorithm simply clips the gradients on each local machine and then performs a simple distributed update of the parameters. The result, if correct, is quite strong and significant: the proposed algorithm is simple and shows some benefit compared to previously proposed algorithms. The strongest part of the paper is that it comes with a convergence rate bound (which is typically hard to prove for gradient clipping methods). However, during the rebuttal period it was discovered that a number of places in the proofs are not well supported, so the paper has to go through a major revision in order to meet the publication standard.
train
[ "EnTUmSEKhNW", "8Gce4ewhzj1", "bWaGoQcrKIa", "ucKLJTr3oES", "5_CMrGWnRk", "AiDwpeNdASm", "kRMlHENkttq", "9__gGhDCZo", "oJI-l0UGFbc", "O_lXtjMd5OL", "_g6TMi_xEA", "tEtJaIXy9u", "krpy0lJntD_", "lGHq0YGiBr3", "EMf_9SqWWuw", "6o4OqnWu4KH", "GEwJZnoYtcz", "Knxgf1OMRP", "wY8_OnbaZo-", ...
[ "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_r...
[ " I thank the authors for the detailed replies to my criticism. I acknowledge their efforts and time to address my concerns.\n\nHowever, during the discussion with the authors, a number of places were indicated that should be fixed. To make a justified decision on the paper, it is crucial to see the final version o...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 3, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "8Gce4ewhzj1", "ucKLJTr3oES", "ucKLJTr3oES", "5_CMrGWnRk", "kRMlHENkttq", "EMf_9SqWWuw", "EMf_9SqWWuw", "Knxgf1OMRP", "97U0OOuDTFH", "iclr_2022_hq7vLjZTJPk", "iclr_2022_hq7vLjZTJPk", "97U0OOuDTFH", "Knxgf1OMRP", "wY8_OnbaZo-", "wY8_OnbaZo-", "GEwJZnoYtcz", "iclr_2022_hq7vLjZTJPk", ...
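The algorithm structure described in the abstract above, per-machine gradient clipping with communication only after multiple local steps, can be sketched in a few lines. This is a generic illustration of that structure, not the paper's algorithm; the function names, model averaging as the communication step, and the fixed clipping threshold `tau` are assumptions:

```python
import numpy as np

def clip(g, tau):
    """Standard gradient clipping: rescale g so its norm is at most tau."""
    norm = np.linalg.norm(g)
    return g if norm <= tau else g * (tau / norm)

def local_steps_then_average(params, local_grad_fn, machines, tau, lr, steps):
    """Each machine runs `steps` clipped-gradient updates on its own copy
    of the parameters, then the copies are averaged in one communication
    round (skipping communication between local steps)."""
    local_params = [params.copy() for _ in range(machines)]
    for m in range(machines):
        for _ in range(steps):
            local_params[m] -= lr * clip(local_grad_fn(m, local_params[m]), tau)
    return np.mean(local_params, axis=0)
```

On a simple quadratic objective, each round moves the averaged parameters toward the optimum while every individual update has bounded norm, which is what makes clipping useful when gradients can explode.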
iclr_2022_mRc_t2b3l1-
Rethinking the limiting dynamics of SGD: modified loss, phase space oscillations, and anomalous diffusion
In this work we explore the limiting dynamics of deep neural networks trained with stochastic gradient descent (SGD). We find empirically that long after performance has converged, networks continue to move through parameter space by a process of anomalous diffusion in which distance travelled grows as a power law in the number of gradient updates with a nontrivial exponent. We reveal an intricate interaction between the hyperparameters of optimization, the structure in the gradient noise, and the Hessian matrix at the end of training that explains this anomalous diffusion. To build this understanding, we first derive a continuous-time model for SGD with finite learning rates and batch sizes as an underdamped Langevin equation. We study this equation in the setting of linear regression, where we can derive exact, analytic expressions for the phase space dynamics of the parameters and their instantaneous velocities from initialization to stationarity. Using the Fokker-Planck equation, we show that the key ingredient driving these dynamics is not the original training loss, but rather the combination of a modified loss, which implicitly regularizes the velocity, and probability currents, which cause oscillations in phase space. We identify qualitative and quantitative predictions of this theory in the dynamics of a ResNet-18 model trained on ImageNet. Through the lens of statistical physics, we uncover a mechanistic origin for the anomalous limiting dynamics of deep neural networks trained with SGD.
Reject
The authors study the limiting dynamics of a simple linear regression model. They use an underdamped Langevin equation, which is quite common in the literature. Although the reviewers welcome the direction and the attempt to understand the dynamics of a simple model, the novelty of the paper is limited. As an example, the paper shows that the key ingredient driving these dynamics is not the original training loss, but a modified loss; as the reviewers note, this has already been observed in multiple papers. One important problem is the tendency of the paper to oversell the results (including the title), which makes it difficult to clearly separate the contributions made in the paper. After a discussion with the reviewers, the overall feeling did not change. I therefore cannot recommend acceptance. I strongly recommend the authors do a significant rewrite of the paper in order to clearly separate which contributions are truly novel and also improve the discussion of prior work.
train
[ "oxvil8ov4Xi", "R86OIU-mEak", "3PG5T5G9LCG", "2nNvzJisrVg", "R3h6vsqnCIB", "e6qqLpTy8wR", "VqGxiiJ17MH", "6ndJvDE9NTv", "B4PeGmWcktE", "hvkLgUFvNSh", "AP2aq9R7Sij", "h8aAtpT3vBB", "pA6BMi4YmE8", "TY_YMJ4qrGE", "5chwkVh6BeR", "DisBEhZsGP", "dYGVVJvXVS6", "z2KNYZ1DUd8M", "LFMQII2Xj...
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "...
[ "The paper studies the long-time dynamics of deep neural networks. The author(s) (1) show some empirical findings related to the mean square displacement, (2) model SGD as an underdamped Langevin Equation, relate it to an Ornstein Uhlenbeck process in a linear regression setting, and use it to study the limiting dy...
[ 6, -1, 5, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 4, -1, 5, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2022_mRc_t2b3l1-", "iclr_2022_mRc_t2b3l1-", "iclr_2022_mRc_t2b3l1-", "R3h6vsqnCIB", "VqGxiiJ17MH", "h8aAtpT3vBB", "hvkLgUFvNSh", "iclr_2022_mRc_t2b3l1-", "TY_YMJ4qrGE", "oxvil8ov4Xi", "h8aAtpT3vBB", "pA6BMi4YmE8", "dYGVVJvXVS6", "5chwkVh6BeR", "DisBEhZsGP", "6ndJvDE9NTv", "3PG5...
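The anomalous-diffusion claim in the abstract above, that distance travelled grows as a power law in the number of gradient updates with a nontrivial exponent, is typically checked by fitting the exponent on a log-log plot of mean squared displacement. A minimal sketch of that fit (the regression-based procedure is a standard choice, not necessarily the paper's):

```python
import numpy as np

def diffusion_exponent(steps, msd):
    """Fit msd ~ steps**c by linear regression in log-log space.
    c = 1 is ordinary diffusion; c != 1 indicates anomalous diffusion."""
    slope, _ = np.polyfit(np.log(steps), np.log(msd), 1)
    return slope
```

Feeding in displacement measurements from a training trajectory gives the exponent directly; an exact power law is recovered exactly.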
iclr_2022_oOuPVoT1kA5
FEVERLESS: Fast and Secure Vertical Federated Learning based on XGBoost for Decentralized Labels
Vertical Federated Learning (VFL) enables multiple clients to collaboratively train a global model over vertically partitioned data without revealing private local information. Tree-based models, like XGBoost and LightGBM, have been widely used in VFL to enhance the interpretation and efficiency of training. However, there is a fundamental lack of research on how to conduct VFL securely over distributed labels. This work is the first to fill this gap by designing a novel protocol, called FEVERLESS, based on XGBoost. FEVERLESS leverages secure aggregation via an information-masking technique and global differential privacy provided by a fairly and randomly selected noise leader to prevent private information from being leaked in the training process. Furthermore, it provides label and data privacy against an honest-but-curious adversary, even in the case of collusion of $n-2$ out of $n$ clients. We present a comprehensive security and efficiency analysis of our design, and the empirical results demonstrate that FEVERLESS is fast and secure. In particular, it outperforms the solution based on additive homomorphic encryption in runtime cost and provides better accuracy than the local differential privacy approach.
Reject
The reviewers agree that the problem tackled is important but raise several substantial issues that justify not accepting the paper in its current form. I would encourage the authors to further clarify the crypto part of the paper (dHiP, 2., 4.) and work on how to relax or improve the model assumptions (NGrb). Also, the authors' reply to chCc, point 2, becomes more disputable as federated learning is further developed. The argument can be refined. On a personal note, the statement of Theorem 3.3 could be made clearer, in particular by simplifying (while weakening a bit) the probability bound. AC.
train
[ "ITa8pAKPgnj", "4IcpR65JRgL", "7NuUfhHa52u", "HTQPES0wpN9", "kBwfYHvdspP", "diHo_3KPVFV", "YJnzfQfZk10", "G78HE0OOlpe", "Rd6JSeSz0rq", "rCb4yaAucy", "QicKyAt88g4" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your comments. We note that existing works of VFL only consider the scenario where all labels are held by one and only client. In this work, we have extended the scenario and made it much closer to reality. We use the assumption - i.e. a label of a data instance is owned by a client (1-to-1 case) – as ...
[ -1, -1, -1, -1, -1, -1, -1, 8, 3, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, 2, 3 ]
[ "4IcpR65JRgL", "7NuUfhHa52u", "QicKyAt88g4", "rCb4yaAucy", "Rd6JSeSz0rq", "G78HE0OOlpe", "iclr_2022_oOuPVoT1kA5", "iclr_2022_oOuPVoT1kA5", "iclr_2022_oOuPVoT1kA5", "iclr_2022_oOuPVoT1kA5", "iclr_2022_oOuPVoT1kA5" ]
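The secure aggregation via information masking mentioned in the FEVERLESS abstract above is usually built on pairwise masks that cancel in the sum, so the aggregator learns only the total and never an individual client's value. The sketch below is the generic pairwise-masking construction (with toy integer masks and a shared seed standing in for real pairwise key agreement), not the paper's protocol:

```python
import random

def masked_shares(values, seed=0):
    """Each client i adds a random mask r_ij for every other client j:
    +r_ij when i < j and -r_ij when i > j. The masks cancel pairwise in
    the aggregate, so sum(shares) == sum(values) while each individual
    share reveals nothing about its client's value."""
    n = len(values)
    rng = random.Random(seed)  # stand-in for pairwise-agreed randomness
    masks = {(i, j): rng.randrange(1 << 16) for i in range(n) for j in range(i + 1, n)}
    shares = []
    for i in range(n):
        s = values[i]
        for j in range(n):
            if i < j:
                s += masks[(i, j)]
            elif j < i:
                s -= masks[(j, i)]
        shares.append(s)
    return shares
```

In a real deployment each pair of clients would derive its mask from a key exchange rather than a shared seed; the cancellation property is what matters here.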
iclr_2022_92awwjGxIZI
Self-GenomeNet: Self-supervised Learning with Reverse-Complement Context Prediction for Nucleotide-level Genomics Data
We introduce Self-GenomeNet, a novel contrastive self-supervised learning method for nucleotide-level genomic data, which substantially improves the quality of the learned representations and performance compared to the current state-of-the-art deep learning frameworks. To the best of our knowledge, Self-GenomeNet is the first self-supervised framework that learns a representation of nucleotide-level genome data using domain-specific characteristics. Our proposed method learns and parametrizes the latent space by leveraging the reverse-complement of genomic sequences. During the training procedure, we force our framework to capture semantic representations with a novel context network on top of intermediate features extracted by an encoder network. The network is trained with an unsupervised contrastive loss. Extensive experiments show that our method, in both self-supervised and semi-supervised settings, considerably outperforms previous deep learning methods on different datasets and a public bioinformatics benchmark. Moreover, the learned representations generalize well when transferred to new datasets and tasks. The source code of the method and all the experiments is available in the supplementary material.
Reject
Based on the contrastive learning loss widely used in the NLP and computer vision domains, this paper presents Self-GenomeNet, a contrastive learning method for representation learning of genomic sequences. As shown in the experiment section, the improvement compared to the baselines CPC, Language model, and even a supervised learning method is considerable, on three benchmark datasets in both self-supervised and semi-supervised evaluation. Even after the discussion phase, there exists disagreement among the reviewers. AC considered all reviews, author responses, and the discussions, as well as read the paper. While the paper has some merit, such as an effective Self-GenomeNet model for the particular problem setup, reviewers still have several reservations about directly accepting it: + Questionable impact. The proposed framework is overall a simple combination of existing methods, and beyond genome datasets the impact of this proposed method is questionable. + Limited inspiration. The proposed method is mainly constructed on the previously proposed contrastive learning loss widely used in the NLP and computer vision domains; the benefits of the proposed method may be limited on genome data (especially the domain-specific data augmentation, e.g., reverse complement). How can the insights foster future research? + Lack of justification. The innovations introduced by the paper seem ad hoc, and the reasons for the large observed improvement are not entirely intuitive. Meanwhile, even with the provided response from the authors, the connection between the motivation and the proposed method is still not crystal clear. Given the above reservations, AC could not accept the paper for now but encourages the authors to fully revise the paper and strengthen their work.
train
[ "tPGY_VtJPN", "9YmqXSIyrkK", "LwQL8T_t_b", "9bACL6DM-m", "6Zt4G1aiJUR", "mnACuTbydIE", "4fcvWuq_iu", "_unWMe1MiZZ", "daIbJwyxBl8", "Cf7Nu5ZZnqq", "btXVr0WViJk", "xjvk63OvPRt", "Dqrch-hK2k-", "kF-SZxTc5o_", "mKWWF-qbwSY", "6DXbdFqW0eT" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Recently several works have proposed semi-supervised learning methods to leverage unlabeled biological sequences for learning their general-purpose representations. In this work, the authors proposed the Self-GenomeNet, a novel contrastive learning method for nucleotides based on the reverse-complement (RC) contex...
[ 5, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 8, 5 ]
[ 4, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2022_92awwjGxIZI", "LwQL8T_t_b", "xjvk63OvPRt", "_unWMe1MiZZ", "Dqrch-hK2k-", "iclr_2022_92awwjGxIZI", "btXVr0WViJk", "daIbJwyxBl8", "tPGY_VtJPN", "kF-SZxTc5o_", "mKWWF-qbwSY", "6DXbdFqW0eT", "Cf7Nu5ZZnqq", "mnACuTbydIE", "iclr_2022_92awwjGxIZI", "iclr_2022_92awwjGxIZI" ]
iclr_2022_JsfFpJhI4BV
Learning Identity-Preserving Transformations on Data Manifolds
Many machine learning techniques incorporate identity-preserving transformations into their models to generalize their performance to previously unseen data. These transformations are typically selected from a set of functions that are known to maintain the identity of an input when applied (e.g., rotation, translation, flipping, and scaling). However, there are many natural variations that cannot be labeled for supervision or defined through examination of the data. As suggested by the manifold hypothesis, many of these natural variations live on or near a low-dimensional, nonlinear manifold. Several techniques represent manifold variations through a set of learned Lie group operators that define directions of motion on the manifold. However, these approaches are limited because they require transformation labels when training their models and they lack a method for determining which regions of the manifold are appropriate for applying each specific operator. We address these limitations by introducing a learning strategy that does not require transformation labels and developing a method that learns the local regions where each operator is likely to be used while preserving the identity of inputs. Experiments on MNIST and Fashion MNIST highlight our model's ability to learn identity-preserving transformations on multi-class datasets. Additionally, we train on CelebA to showcase our model's ability to learn semantically meaningful transformations on complex datasets in an unsupervised manner.
Reject
The paper proposes a method for learning identity-preserving transformations through a set of learned Lie group operators. It builds upon previous work (Connor & Rozell, 2020; Connor et al., 2021), addressing two points: (i) how to select semantically related pairs of points, (ii) how to identify which operators are appropriate for a given local region of the manifold. Authors use nearest neighbors computed via the penultimate layers of a pretrained network to address (i), and learn a separate network q(c|z) that predicts the coefficients given a latent input z. Reviewers have two main concerns with the paper -- limited novelty over earlier work that learns the Lie group operators; and the complicated nature of the method which needs training in three stages and uses a pretrained ResNet for finding nearest neighbors. Lack of comparison with relevant baselines is also pointed out by the reviewers. Given these issues, the paper is unfortunately not suitable for publication in ICLR at this point.
train
[ "ACmqgAP2Y8", "fuxGwU8xBr1", "he2guQZQzbM", "YU2kL7DO_p", "No8r7rZJXbk", "dctImdd2vAI", "sVwKjHivnXZ" ]
[ "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for saying that our proposed methods are interesting. We read the reviewer’s question as asking whether our proposed methods provide any guarantees in learning the data manifold given the decomposition inside of a deep neural network. Although our results are mostly empirical, we present sev...
[ -1, -1, -1, -1, 6, 5, 5 ]
[ -1, -1, -1, -1, 3, 4, 4 ]
[ "sVwKjHivnXZ", "dctImdd2vAI", "No8r7rZJXbk", "iclr_2022_JsfFpJhI4BV", "iclr_2022_JsfFpJhI4BV", "iclr_2022_JsfFpJhI4BV", "iclr_2022_JsfFpJhI4BV" ]
iclr_2022_YVa8X_2I1b
INFERNO: Inferring Object-Centric 3D Scene Representations without Supervision
We propose INFERNO, a method to infer object-centric representations of visual scenes without relying on annotations. Our method learns to decompose a scene into multiple objects, each object having a structured representation that disentangles its shape, appearance and 3D pose. To impose this structure we rely on recent advances in neural 3D rendering. Each object representation defines a localized neural radiance field that is used to generate 2D views of the scene through a differentiable rendering process. Our model is subsequently trained by minimizing a reconstruction loss between inputs and corresponding rendered scenes. We empirically show that INFERNO discovers objects in a scene without supervision. We also validate the interpretability of the learned representations by manipulating inferred scenes and showing the corresponding effect in the rendered output. Finally, we demonstrate the usefulness of our 3D object representations in a visual reasoning task using the CATER dataset.
Reject
The paper proposes an approach for learning a decomposition of a scene into 3D objects using single images without pose annotations as training data. The model is based on Slot Attention and NeRF. Results are demonstrated on CLEVR and its variants. The reviewers point out that the method is reasonable and the paper is quite good, but even after considering the authors' feedback they agree that the paper is not ready for acceptance. In particular, the key concern is around experimental evaluation - that it is performed on one dataset (and variants thereof) and that the evaluation of the 3D properties of the model is not sufficiently convincing: it does not outperform 2D object learning methods on segmentation and is not compared to those on "snitch localization". Overall, this is a reasonable paper, and the results are promising but somewhat inconclusive, so I recommend rejection at this point, but encourage the authors to improve the paper and resubmit to a different venue. (One remark. The paper makes a point of not using any annotation. It is technically true, but in practice on CLEVR unsupervised segmentation works so well that it's basically as if segmentation masks were provided. If the authors could demonstrate that their method - possibly with provided coarse segmentation masks - works on more complex datasets, it would be a nice additional experiment)
train
[ "qe7h164Xe4", "5YuYMBvb5bF", "wRi1dlaKYQY", "BaHtwGIk3iR", "CfMzkDdbBCX", "yuFFZH3QLA2", "TV0mtNZlelk", "NUJ4tPEVevV", "vu-048_wUNK", "hLRuUVwYJ_V", "crxPbxHy7P", "59SiWHb1LI", "B7uvpZ9G0WR" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a novel unsupervised scene decomposition model that infers object shapes, appearances and 3D poses. The benefits over existing models are the structured, 3D object representations which allows to manipulate objects in the scenes such as moving and replacing objects. This paper also shows that t...
[ 5, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5 ]
[ 5, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2022_YVa8X_2I1b", "hLRuUVwYJ_V", "iclr_2022_YVa8X_2I1b", "vu-048_wUNK", "yuFFZH3QLA2", "NUJ4tPEVevV", "B7uvpZ9G0WR", "59SiWHb1LI", "wRi1dlaKYQY", "qe7h164Xe4", "iclr_2022_YVa8X_2I1b", "iclr_2022_YVa8X_2I1b", "iclr_2022_YVa8X_2I1b" ]
iclr_2022_mQDpmgFKu1P
Language Modeling using LMUs: 10x Better Data Efficiency or Improved Scaling Compared to Transformers
Recent studies have demonstrated that the performance of transformers on the task of language modeling obeys a power-law relationship with model size over six orders of magnitude. While transformers exhibit impressive scaling, their performance hinges on processing large amounts of data, and their computational and memory requirements grow quadratically with sequence length. Motivated by these considerations, we construct a Legendre Memory Unit based model that introduces a general prior for sequence processing and exhibits an $O(n)$ and $O(n \ln n)$ (or better) dependency for memory and computation respectively. Over three orders of magnitude, we show that our new architecture attains the same accuracy as transformers with 10x fewer tokens. We also show that for the same amount of training our model improves the loss over transformers about as much as transformers improve over LSTMs. Additionally, we demonstrate that adding global self-attention complements our architecture and the augmented model improves performance even further.
Reject
The paper proposes a language modeling architecture based on RNN cells leveraging Legendre memory units. The proposal is interesting, but as all the reviewers notice, the paper is not ready for presentation at a top ML conference for several reasons: comparison with weak baselines, shallow or weak analysis of the presented results, insufficient discussion of the related work, etc. We look forward to all the comments being addressed by the authors. In the rebuttal the authors addressed some of the questions, but all the reviewers think that the paper is not ready for acceptance and careful rewriting is needed. Recent research on improved RNN mechanisms suggests that Legendre memory units and related mechanisms might be a gateway to solving several standard issues of training regular RNNs, so the topic is definitely of great importance. Thus the authors are highly encouraged to resubmit the paper after making all suggested corrections.
test
[ "nmfC1_Ne7_u", "6Tzn0TiuVc6", "Njhu4VJYSfD", "17JfQ5Ne_LD", "JPv0LVnfUXW", "x90C-HQ6-Td", "aQMHJd7ULKe", "bXzwmGeH9u2", "JGSt2_zC2m_", "byM2oVZvEeg", "wpB5YVFNhK" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the reply and clarifications! Discussing the empirical rationale more would help but I think that, overall, the claims made in this paper are not well supported and leave open questions. \n\nAlso, demonstrating improvements where existing efficient transformers do not is not a reason to avoid compariso...
[ -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "17JfQ5Ne_LD", "x90C-HQ6-Td", "wpB5YVFNhK", "byM2oVZvEeg", "JGSt2_zC2m_", "bXzwmGeH9u2", "iclr_2022_mQDpmgFKu1P", "iclr_2022_mQDpmgFKu1P", "iclr_2022_mQDpmgFKu1P", "iclr_2022_mQDpmgFKu1P", "iclr_2022_mQDpmgFKu1P" ]
iclr_2022_gijKplIZ2Y-
Mistill: Distilling Distributed Network Protocols from Examples
New applications and use-cases in data center networks require the design of Traffic Engineering (TE) algorithms that account for application-specific traffic patterns. TE makes forwarding decisions from the global state of the network. Thus, new TE algorithms require the design and implementation of effective information exchange and efficient algorithms to compute forwarding decisions. This is a challenging, labor- and time-intensive process. To automate and simplify this process, we propose MISTILL. MISTILL distills the forwarding behavior of TE policies from exemplary forwarding decisions into a Neural Network. MISTILL learns which network devices must exchange state with each other, how to process local state to send it over the network, and how to map the exchanged state into forwarding decisions. We show the ability of MISTILL to learn distributed protocols with three examples and verify their performance in simulations. We show that the learned protocols closely implement the desired policies.
Reject
Most of the reviewers thought this paper has issues that could be improved. There was a range of concerns. Most importantly, several reviewers felt that the novelty of the paper was unclear and that the experimental evaluations required more detail.
train
[ "6tP_dncf4t", "9mXtSR-NL3c", "I3JJ5uAdHos", "4kicc-KZpaA", "cAXEZ3Y3m7", "lM7RkiYZ_y0", "DhShtFGTa9A", "PWHudSKXpD", "eFR3g14nIS", "-_oF4qyGAN5", "pU66yrRm8FN", "PPvms3m0j1Z", "iKuVNfWoCmz", "Ah7jDGPxlmT", "asa5LpcNSLd" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response and clarification of your question. If the paper gets accepted, we will emphasize this point more in the camera-ready version.\n\nIndeed, it is possible to interpret what the NN has learned: The NN correctly learned whose HNSA's are needed to cover the edges between the switch making a...
[ -1, 6, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5 ]
[ -1, 4, -1, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "I3JJ5uAdHos", "iclr_2022_gijKplIZ2Y-", "-_oF4qyGAN5", "iclr_2022_gijKplIZ2Y-", "pU66yrRm8FN", "iclr_2022_gijKplIZ2Y-", "iclr_2022_gijKplIZ2Y-", "Ah7jDGPxlmT", "iclr_2022_gijKplIZ2Y-", "9mXtSR-NL3c", "4kicc-KZpaA", "asa5LpcNSLd", "iclr_2022_gijKplIZ2Y-", "iclr_2022_gijKplIZ2Y-", "iclr_20...
iclr_2022_UeRmyymo3kb
GARNET: A Spectral Approach to Robust and Scalable Graph Neural Networks
Graph neural networks (GNNs) have been increasingly deployed in various applications that involve learning on non-Euclidean data. However, recent studies show that GNNs are vulnerable to graph adversarial attacks. Although there are several defense methods to improve GNN adversarial robustness, they fail to perform well on low homophily graphs. In addition, few of those defense models can scale to large graphs due to their high computational complexity and memory usage. In this paper, we propose GARNET, a scalable spectral method to boost the adversarial robustness of GNN models for both homophilic and heterophilic graphs. GARNET first computes a reduced-rank yet sparse approximation of the adversarial graph by exploiting an efficient spectral graph embedding and sparsification scheme. Next, GARNET trains an adaptive graph filter on the reduced-rank graph for node representation refinement, which is subsequently leveraged to guide label propagation for further enhancing the quality of node embeddings. GARNET has been evaluated on both homophilic and heterophilic datasets, including a large graph with millions of nodes. Our extensive experiment results show that GARNET increases adversarial accuracy over state-of-the-art GNN (defense) models by up to $9.96\%$ and $15.17\%$ on homophilic and heterophilic graphs, respectively.
Reject
The paper proposes a method to change the graph structure for better robustness against adversarial attacks. The reviewers commend the authors for a clearly written paper and promising results. Several reviewers expressed concerns about experimental validation (specifically, comparison to truncated SVD and choice of baselines), complexity, and novelty. The rebuttal and follow-up discussion alleviated some of the concerns, but the reviewers still have outstanding issues, therefore the AC does not recommend accepting the paper.
train
[ "05PS3Ee6Ijl", "osvMI1V1AOr", "c3UxhljxNxr", "LfBwao-YRHw", "NoS0FW3fgTJ", "LbuaMaGDFk", "tRBsONsR8b7", "Q3ofVmyjMh5", "aNIImKmWW2c", "6x5SkB6fzcha", "TtclRXgBJtW", "oh1Q7g6IvmH", "p1GF1SUdBP_", "18lcLgWQNG", "FjjJnVS0Oly", "x5Vh0M38EWd" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for their detailed responses. I have read them carefully. In summary, the response has partially addressed my concerns, but some of my major concerns still remain. I would like to keep my original score. \n\nSpecifically, \n\nC1: thanks for the clarification. Now I understand that k-NN step is...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "aNIImKmWW2c", "tRBsONsR8b7", "LfBwao-YRHw", "NoS0FW3fgTJ", "LbuaMaGDFk", "Q3ofVmyjMh5", "x5Vh0M38EWd", "FjjJnVS0Oly", "TtclRXgBJtW", "iclr_2022_UeRmyymo3kb", "18lcLgWQNG", "p1GF1SUdBP_", "iclr_2022_UeRmyymo3kb", "iclr_2022_UeRmyymo3kb", "iclr_2022_UeRmyymo3kb", "iclr_2022_UeRmyymo3kb"...
iclr_2022_Mo9R9oqzPo
New Definitions and Evaluations for Saliency Methods: Staying Intrinsic and Sound
Saliency methods seek to provide human-interpretable explanations for the output of a machine learning model on a given input. A plethora of saliency methods exist, as well as an extensive literature on their justifications/criticisms/evaluations. This paper focuses on heat-map-based saliency methods that often provide explanations that look best to humans. It tries to introduce methods and evaluations for mask-based saliency methods that are {\em intrinsic} --- use just the training dataset and the trained net, and do not use separately trained nets, distractor distributions, human evaluations or annotations. Since a mask can be seen as a "certificate" justifying the net's answer, we introduce notions of {\em completeness} and {\em soundness} (the latter being the new contribution) motivated by logical proof systems. These notions allow a new evaluation of saliency methods, which experimentally provides a novel and stronger justification for several heuristic tricks in the field (T.V. regularization, upscaling).
Reject
This submission tackles the problem of model explainability from the perspective of masking-based saliency methods. Several metrics are proposed for evaluating saliency methods, including a new « soundness » concept. Experiments using a consistency score to simultaneously evaluate completeness and soundness are provided. Most of the reviewers were not convinced by the approach and raised several issues. After the rebuttal, and despite the interest of introducing the concept of « soundness » to better explain model decisions, the current proposition needs to be improved. In particular, the value of the soundness concept does not come through, many details of the method are not clear enough, and the effectiveness of the proposed measure is still questionable. It would be worthwhile for the authors to consider the reviewers' comments, such as those calling for additional experiments, to demonstrate the relevancy of their contribution.
train
[ "j-oEDcS6cYy", "sq-JJsh9nRW", "UhGoXmiBAmw", "pJTzLOLib8H", "nku0vdnD8TM", "Hv2PZAhPW1a", "8-FfxXbLVPh", "r4PiSg1LFCF", "GEQi8-yo6lN", "DzojFOJYYn", "Vu0_S9jLDx", "-SaU4f96Iuz", "S46SPeAstKu", "J-bkMCLaHKk", "-14HAI3CCO3" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The authors made a comprehensive response to all my comments. However, some of these explanations are still not very convincing to me. Moreover, from the comments of other reviewers and their responses, the paper still has a long way to go before it can be accepted for publication. Thus I would keep my initial ra...
[ -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, 6, 5, 3 ]
[ -1, -1, -1, -1, -1, -1, 2, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "GEQi8-yo6lN", "r4PiSg1LFCF", "nku0vdnD8TM", "Vu0_S9jLDx", "Hv2PZAhPW1a", "DzojFOJYYn", "iclr_2022_Mo9R9oqzPo", "-14HAI3CCO3", "J-bkMCLaHKk", "8-FfxXbLVPh", "S46SPeAstKu", "iclr_2022_Mo9R9oqzPo", "iclr_2022_Mo9R9oqzPo", "iclr_2022_Mo9R9oqzPo", "iclr_2022_Mo9R9oqzPo" ]
iclr_2022_RVdN1-eDZ1b
Plug-In Inversion: Model-Agnostic Inversion for Vision with Data Augmentations
Existing techniques for model inversion typically rely on hard-to-tune regularizers, such as total variation or feature regularization, which must be individually calibrated for each network in order to produce adequate images. In this work, we introduce Plug-In Inversion, which relies on a simple set of augmentations and does not require excessive hyper-parameter tuning. Under our proposed augmentation-based scheme, the same set of augmentation hyper-parameters can be used for inverting a wide range of image classification models, regardless of input dimensions or the architecture. We illustrate the practicality of our approach by inverting Vision Transformers (ViTs) and Multi-Layer Perceptrons (MLPs) trained on the ImageNet dataset, tasks which to the best of our knowledge have not been successfully accomplished by any previous works.
Reject
This paper presents a new method for solving the problem of inverting image classifier models. The authors introduce three new augmentation-based techniques to do this. The techniques are validated using Vision Transformer and MLP models and compared against previous methods. The reviewers appreciate the problem that the paper aims to solve. However, the reviewers are not satisfied with the presentation and evaluation of the proposed approach. The main contribution of the paper is not presented clearly enough, according to the reviewers, and it remains unclear to them what aspect of model inversion the authors most want to improve on, and whether their proposed technique indeed achieves such an improvement. In their response, the authors do provide Inception scores that show that their inversion method improves the perceptual quality of generated images compared to previous approaches. The reviewers acknowledge the author response, but indicate that it does not fully resolve their concerns. I recommend that the authors update their paper to more clearly present their main contributions and conclusions, and to provide a more thorough comparison against previous methods, before submitting to another conference.
train
[ "u3w8Ektghc1", "qWJl7Q9ryah", "2UjSoxcudB0", "igu5IbZ8Sne", "P3VAbN1CnP", "EgleSTnFGEE", "tmFhSrmuKPi", "Yc8YmnErvqi", "d-gPwwUa7mg", "KU4tc_ulfza", "T80GIeaOFe", "8pQ9WK1GdJY", "p1vcYMY3i7z", "NOQ4ee3nv_L", "2zKVhvKLiqN", "PYotF3PEX1Y" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for taking the time to look over our response and the updated draft! We appreciate the additional feedback.\n\nWe agree with your statement that we are viewing class inversion in the context of what models learn. This perspective is consistent with other work in this area. For example, data-free methods...
[ -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 3, 6, 5 ]
[ -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "qWJl7Q9ryah", "iclr_2022_RVdN1-eDZ1b", "iclr_2022_RVdN1-eDZ1b", "NOQ4ee3nv_L", "tmFhSrmuKPi", "iclr_2022_RVdN1-eDZ1b", "8pQ9WK1GdJY", "d-gPwwUa7mg", "KU4tc_ulfza", "PYotF3PEX1Y", "2zKVhvKLiqN", "EgleSTnFGEE", "NOQ4ee3nv_L", "iclr_2022_RVdN1-eDZ1b", "iclr_2022_RVdN1-eDZ1b", "iclr_2022_...
iclr_2022_eo1barn2Xmd
SLIM-QN: A Stochastic, Light, Momentumized Quasi-Newton Optimizer for Deep Neural Networks
We propose SLIM-QN, a light stochastic quasi-Newton optimizer for training large-scale deep neural networks (DNNs). SLIM-QN addresses two key barriers in existing second-order methods for large-scale DNNs: 1) the high computational cost of obtaining the Hessian matrix and its inverse in every iteration (e.g. KFAC); 2) convergence instability due to stochastic training (e.g. L-BFGS). To tackle the first challenge, SLIM-QN directly approximates the Hessian inverse using past parameters and gradients, without explicitly constructing the Hessian matrix and then computing its inverse. To achieve stable convergence, SLIM-QN introduces momentum in Hessian updates together with an adaptive damping mechanism. We provide rigorous theoretical results on the convergence of SLIM-QN in a stochastic setting. We also demonstrate that SLIM-QN has much less compute and memory overhead compared to existing second-order methods. To better understand the limitations and benefits of SLIM-QN, we evaluate its performance on various datasets and network architectures. For instance on large datasets such as ImageNet, we show that SLIM-QN achieves near optimal accuracy $1.5\times$ faster when compared with SGD ($1.36\times$ faster in wall-clock time) using the same compute resources. We also show that SLIM-QN can readily be applied to other contemporary non-convolutional architectures such as Transformers.
Reject
Although the reviewers acknowledge that the paper is well-written and easy to follow, they found that the contributions of the paper are not enough for it to be accepted at ICLR. Some concerns from the reviewers are as follows: 1. Assumption 3 is very strong and uncommon. It is not easily verified even in the over-parameterized setting. 2. Both the theoretical and experimental results are insufficient. There is no improvement in the theoretical results compared to previous work. Moreover, the performance of the method is no better than the baselines, which are themselves much weaker than state-of-the-art results. 3. The motivation for small-batch training, the advantages over K-FAC, the practicality of SLIM-QN, and the novelty compared to L-BFGS are questionable. 4. The method is essentially L-BFGS with momentum and damping of the Hessian, hence its novelty is questionable. 5. The authors emphasize that "we are trying to design a practical QN method with light compute/memory cost, especially when applied to large-scale NNs". Any method that has 20-40 times the memory requirement of SGD cannot be said to have light memory cost. Based on the above concerns, the paper is not ready for publication at this moment. The authors should consider improving the paper by addressing the reviewers' comments, implementing their suggestions, and resubmitting to future venues.
train
[ "brwU_QETIvt", "a0sOU-aad8R", "4nnet41Gtz4", "F8u1gcfgTN2", "Y9hRhHrrN6W", "h1Vr6bbL4Kz", "5Ti7blOkla3", "BilUU3MoZbB", "gTblwSB0QLm", "5X42shGrhJH", "BqE3WaTuJ8D", "eIzR2_tTztw", "g0qh7GT8x", "XjEmACi-3As", "LZzYmaWAy5M", "kTnugTGDcxY", "vWjM8JCtQN9", "uLawUEBZZY", "1-lhe_QIJ0" ...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "...
[ " To simplify the notation, suppose $z_i=\\nabla \\ell_i(\\theta_t, x_i)$, $z_j=\\nabla \\ell_j(\\theta_t,x_j)$, and $z=\\nabla\\mathcal{L}(\\theta_t)$.\n\nThen, the above equation is reduced as $E_{x_i, x_j} [\\left \\langle z_i, z_j \\right \\rangle] = E_{x_i,x_j}[\\sum_{k=1}^n z_i(k)z_j(k)] = \\sum_{k=1}^n E_{x_...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 3, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 3, 4 ]
[ "a0sOU-aad8R", "4nnet41Gtz4", "F8u1gcfgTN2", "Y9hRhHrrN6W", "5Ti7blOkla3", "BilUU3MoZbB", "5X42shGrhJH", "gTblwSB0QLm", "eIzR2_tTztw", "kTnugTGDcxY", "LZzYmaWAy5M", "vWjM8JCtQN9", "1-lhe_QIJ0", "uLawUEBZZY", "iclr_2022_eo1barn2Xmd", "iclr_2022_eo1barn2Xmd", "iclr_2022_eo1barn2Xmd", ...
iclr_2022_S5qdnMhf7R
Lightweight Convolutional Neural Networks By Hypercomplex Parameterization
Hypercomplex neural networks have proved to reduce the overall number of parameters while ensuring valuable performances by leveraging the properties of Clifford algebras. Recently, hypercomplex linear layers have been further improved by involving efficient parameterized Kronecker products. In this paper, we define the parameterization of hypercomplex convolutional layers to develop lightweight and efficient large-scale convolutional models. Our method grasps the convolution rules and the filters organization directly from data without requiring a rigidly predefined domain structure to follow. The proposed approach is flexible to operate in any user-defined or tuned domain, from 1D to $n$D, regardless of whether the algebra rules are preset. Such malleability allows processing multidimensional inputs in their natural domain without annexing further dimensions, as done, instead, in quaternion neural networks for 3D inputs like color images. As a result, the proposed method operates with $1/n$ free parameters with respect to its analog in the real domain. We demonstrate the versatility of this approach to multiple domains of application by performing experiments on various image as well as audio datasets in which our method outperforms real and quaternion-valued counterparts.
Reject
In general, the reviewers appreciated the elegant concept behind the paper and the good results. However, they also raised considerable reservations about the significance of a method that decreases the parameter count but not necessarily computational efficiency (FLOPS) or memory. While the additional analysis that the authors provided definitely helps to understand the limitations of the method, the reviewers were in the end quite divided on the significance of the results. In addition, all reviewers agreed that the writing was in somewhat rough shape and needed improvement. In summary, this is definitely a borderline paper, but given the current reviewer assessment, I would recommend that it is not quite ready for publication.
train
[ "6P0WXwCMEL", "YKUzdh3diTZ", "XVg7MNnrTEj", "o9kLQvXJcWy", "guwdGIRMSMn", "Pq9YTBVYIYv", "bosZqPmuNVt", "RNgcsnNbg-4", "JkCRiZbi9zd", "BcQI2P3ZAoX", "gREokK6zJpB", "Crd3XjJZQ46", "PKKL2I9MKuT", "xNh8L2C8GiR", "2VIQ6Z31Nbu", "rk3oj0oFVBn", "JO4FvRbZbns", "zXOK8QfiSF", "by9JPoZ-1tq...
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", ...
[ " We would like to thank the Reviewer for the valuable discussion. \n\nRegarding the details on the inference time, to be consistent with the literature, we consider inference time as the time required by the model to compute the output on the test set, as specified in Table 1 of the paper.\nSo, we compute the infe...
[ -1, -1, -1, 5, -1, -1, -1, 6, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ -1, -1, -1, 4, -1, -1, -1, 4, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2 ]
[ "YKUzdh3diTZ", "Pq9YTBVYIYv", "guwdGIRMSMn", "iclr_2022_S5qdnMhf7R", "CF6bBpySeA3", "bosZqPmuNVt", "xsFie0GEZBS", "iclr_2022_S5qdnMhf7R", "gREokK6zJpB", "iclr_2022_S5qdnMhf7R", "JO4FvRbZbns", "iclr_2022_S5qdnMhf7R", "RNgcsnNbg-4", "BcQI2P3ZAoX", "BcQI2P3ZAoX", "BcQI2P3ZAoX", "BcQI2P3...
iclr_2022_M5hiCgL7qt
The NTK Adversary: An Approach to Adversarial Attacks without any Model Access
Adversarial attacks carefully perturb natural inputs so that a machine learning algorithm produces erroneous decisions on them. Most successful attacks on neural networks exploit gradient information of the model (either directly or by estimating it through querying the model). Harnessing recent advances in Deep Learning theory, we propose a radically different attack that eliminates that need. In particular, in the regime where Neural Tangent Kernel theory holds, we derive a simple but powerful strategy for attacking models, which, in contrast to prior work, does not require any access to the model under attack, or any trained replica of it for that matter. Instead, we leverage the explicit description afforded by the NTK to maximally perturb the output of the model, using solely information about the model structure and the training data. We experimentally verify the efficacy of our approach, first on models that lie close to the theoretical assumptions (large width, proper initialization, etc.) and, further, on more practical scenarios with those assumptions relaxed. In addition, we show that our perturbations exhibit strong transferability between models.
Reject
The paper relies on the analytical tools afforded by NTK theory to propose an adversarial attack that uses information about the model structure and training data, without the need to access the model under attack. While the reviewers found the problem interesting and well motivated, they felt that the theoretical analysis and the experimental results can be significantly improved. In particular, some of the points that the reviewers did not find convincing during the discussion include: (1) the technical novelty of the work, i.e., applying an adversarial attack on the NTK at inference time seems a trivial extension of the PGD attack; (2) the authors' argument that knowing the model is strictly stronger than knowing the original training data; (3) the scalability and generalization of the proposed method to settings without training and test sets; and (4) the comparison to existing state-of-the-art transfer attacks in the same setting, like the no-box attack. Addressing the above points will significantly improve the manuscript.
train
[ "olZKHcxtVd", "35Xb9Idr2CN", "3vCvdD6rtLH", "Bjl3N6k28F9", "Oo1CvdYltVh", "4RfVMGwoqXJ", "PdWTa9c1895", "FmJRPrhbFYf", "epbjesLDE6p", "P_kbSTHaXRz", "bejllh-rWJX", "hqfVS2uPDQ5", "FyBk5cX5b_P", "RZ2U0JFarv5", "JwTHhrsfWD" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes the use of Neural Tangent Kernel (NTK) to generate transferrable adversarial examples without access to the target model.\n\nThe main contributions are:\n\n1. The derivation of adversarial perturbation under NTK setting.\n\n2. Illustration of different attack scenarios using NTK-attack (e.g. no...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, -1, 3, 5, 3 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, 3, 4, 4 ]
[ "iclr_2022_M5hiCgL7qt", "iclr_2022_M5hiCgL7qt", "bejllh-rWJX", "hqfVS2uPDQ5", "FyBk5cX5b_P", "FyBk5cX5b_P", "RZ2U0JFarv5", "RZ2U0JFarv5", "olZKHcxtVd", "JwTHhrsfWD", "iclr_2022_M5hiCgL7qt", "iclr_2022_M5hiCgL7qt", "iclr_2022_M5hiCgL7qt", "iclr_2022_M5hiCgL7qt", "iclr_2022_M5hiCgL7qt" ]
iclr_2022_G1J5OYjoiWb
An Attempt to Model Human Trust with Reinforcement Learning
Existing works that compute trust as a numerical value mainly rely on ranking, rating, or assessments of agents by other agents. However, the concept of trust is manifold and should not be limited to reputation. Recent research in neuroscience converges with Berg's hypothesis in economics that trust is an encoded function in the human brain. Based on this new assumption, we propose an approach where a trust level is learned by an overlay of any model-free off-policy reinforcement learning algorithm. The main issues were i) to use recent findings on the dopaminergic system and reward circuit to simulate trust, and ii) to assess our model with reliable and unbiased real-life models. In this work, we address these problems by extending Q-Learning to trust evaluation and comparing our results to a social science case study. Our main contributions are threefold. (1) We model the trust decision-making process with a reinforcement learning algorithm. (2) We propose a dynamic reinforcement of the trust reward inspired by recent findings in neuroscience. (3) We propose a method to explore and exploit the trust space. The experiments reveal that it is possible to find a set of hyperparameters of our algorithm that reproduces recent findings on the overconfidence effect in social psychology research.
Reject
The paper presents a methodology for modeling, and learning, trust in a multi-agent reinforcement learning system. The reviewers considered this to be an interesting and important question to answer. Nevertheless, they maintained concerns on multiple fronts. The paper could benefit from being more focused. The authors are strongly encouraged to further scale down the claims in the introduction, and to ensure that claims made there and later in the paper are matched with experiments that quantify and validate the notions introduced. Model choices made, as well as assumptions introduced, should be clearly motivated and mapped to reality, in light of their strength. Extending experiments to broader example settings, as outlined in the reviews, would also strengthen the work.
train
[ "WoQmPIfhq3j", "FcniKfO3Wxc", "LOICKI1jNaI", "lI2MPAyVBj9", "bvHbRRRgIKq", "x-RzViLYWk4", "JOpnt9h551", "0YOlwWDE48x", "41bSYpG8ukL", "WxDrF0bcQi", "FGzlhTMBhVK", "IWftdY77vRj", "rj20s_SLnxr", "2GvT-CGtw5t", "n5v6K_dPSrs", "Rzki_K3Qx6s" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for carefully updating the paper, I appreciate your efforts and believe the idea has merits. \n\nHowever, the execution and experiments need further work despite the revisions. Specifically, I would encourage the others to better organise the introduction and have more appropriate experiments to evaluat...
[ -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 3 ]
[ -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "WxDrF0bcQi", "iclr_2022_G1J5OYjoiWb", "FcniKfO3Wxc", "n5v6K_dPSrs", "iclr_2022_G1J5OYjoiWb", "Rzki_K3Qx6s", "FcniKfO3Wxc", "Rzki_K3Qx6s", "Rzki_K3Qx6s", "n5v6K_dPSrs", "2GvT-CGtw5t", "2GvT-CGtw5t", "2GvT-CGtw5t", "iclr_2022_G1J5OYjoiWb", "iclr_2022_G1J5OYjoiWb", "iclr_2022_G1J5OYjoiWb...
iclr_2022_xxU6qGx-2ew
Gaussian Differential Privacy Transformation: from identification to application
Gaussian differential privacy (GDP) is a single-parameter family of privacy notions that provides coherent guarantees to avoid the exposure of individuals from machine learning models. Relative to traditional $(\epsilon,\delta)$-differential privacy (DP), GDP is more interpretable and tightens the bounds given by standard DP composition theorems. In this paper, we start with an exact privacy profile characterization of $(\epsilon,\delta)$-DP and then define an efficient, tractable, and visualizable tool, called the Gaussian differential privacy transformation (GDPT). Using theoretical properties of the GDPT, we develop an easy-to-verify criterion to characterize and identify GDP algorithms. Based on our criterion, an algorithm is GDP if and only if an asymptotic condition on its privacy profile is met. By developing numerical properties of the GDPT, we give a method to narrow down possible values of an optimal privacy measurement $\mu$ with an arbitrarily small and quantifiable margin of error. As applications of our newly developed tools, we revisit some established $(\epsilon,\delta)$-DP algorithms and find that their utility can be improved. We additionally make a comparison between two single-parameter families of privacy notions, $\epsilon$-DP and $\mu$-GDP. Lastly, we use the GDPT to examine the effect of subsampling under the GDP framework.
Reject
We thank the authors for their response. The reviewers agree that this paper provides contributions in automating privacy analyses under the Gaussian differential privacy (GDP) framework. The reviewers also pointed out several drawbacks of the paper. Most importantly, the reviewers do not find the presented applications to be convincing. In particular, the presented result can be much strengthened if the proposed method can lead to improved privacy analysis for more sophisticated algorithms such as DP-SGD across a wide regime of epsilon and delta. (In general, the privacy guarantee is very weak with delta bigger than 1/n.) Overall, the paper does not seem to provide enough evidence to showcase the usefulness of their proposed method.
train
[ "GVtlLbGyvOK", "3nadT44v7m5", "PTpw9aY5KSr", "jRMpOmylwdR", "0p686tTSoe", "z2LjM4e94dU", "P94RObpojy", "bFZ34EPNnr", "zUQZ-zDusCF", "b30j0JPXHmq", "9AOt9rqe4e", "Gb47HlvhhZJ", "oC1Vs_va_g_", "MzcNm-3Q8DJ", "gukBh4sbrv0", "Q12sZsyLHED", "QSQxEEMEREm" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your valuable suggestions and consideration.\n\nAs for your concern of evaluations, the goal of our work is to bridge the gap between old algorithms and the framework of GDP. The analysis about the GDP itself is beyond the scope of this paper. For more detailed discussion of GDP itself, we encourage...
[ -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 5 ]
[ -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4 ]
[ "PTpw9aY5KSr", "0p686tTSoe", "P94RObpojy", "iclr_2022_xxU6qGx-2ew", "oC1Vs_va_g_", "zUQZ-zDusCF", "jRMpOmylwdR", "P94RObpojy", "QSQxEEMEREm", "9AOt9rqe4e", "Q12sZsyLHED", "oC1Vs_va_g_", "gukBh4sbrv0", "zUQZ-zDusCF", "iclr_2022_xxU6qGx-2ew", "iclr_2022_xxU6qGx-2ew", "iclr_2022_xxU6qGx...
iclr_2022_7ADMMyZpeY
A theoretically grounded characterization of feature representations
A large body of work has explored how learned feature representations can be useful for a variety of downstream tasks. This is true even when the downstream tasks differ greatly from the actual objective used to (pre)train the feature representation. This observation underlies the success of, e.g., few-shot learning, transfer learning and self-supervised learning, among others. However, very little is understood about why such transfer is successful, and more importantly, how one should choose the pre-training task. As a first step towards this understanding, we ask: what makes a feature representation good for a target task? We present simple, intuitive measurements of the feature space that are good predictors of downstream task performance. We present theoretical results showing how these measurements can be used to bound the error of the downstream classifiers, and show empirically that these bounds correlate well with actual downstream performance. Finally, we show that our bounds are practically useful for choosing the right pre-trained representation for a target task.
Reject
In this paper, the authors introduce two properties of feature representations, namely local alignment and local congregation, and show how these properties can be predictive of downstream performance. The paper focuses more heavily on providing theoretical statements using these properties, but the authors also empirically evaluate their suggested method. **Strong Points**: - The paper is well-written and easy to follow. - The proposed concepts (local alignment and local congregation) are intuitive. - The theoretical statements and their proofs are correct. - The proposed metric shows some advantage against a few baselines. - Prior work on feature representations and transferability is discussed. **Weak Points**: - **The connections to prior work on K-nearest neighbors and linear classifiers are not properly discussed.** This is very important because the authors assume that the network that outputs the feature representations is trained on different data, and they reduce the analysis to that of a binary linear classifier. Hence, all classical learning theory results on binary classifiers apply in this setting. Furthermore, KNN methods and analysis can simply be applied to the features as well. In light of this and the lack of discussion on this matter, the significance of the theoretical and empirical results is not clear. - **The main proposed properties could be improved further.** It looks like the defined properties (local alignment and local congregation) could be improved by merging them into one property about the separability of the data. The current properties are sensitive to scaling, which is undesirable given that classification performance is invariant to the scaling of the features. It seems that local congregation mostly captures the scale, so some normalized version of local alignment might be able to capture the main property of interest. 
- **The theoretical results in their current form are not very significant.** One limiting factor on the theoretical results is that, since the analysis is done only on the classification layer, it does not say anything about the relationship between the upstream and downstream tasks. But perhaps the most important limitation is that the properties are defined based on the downstream task distribution as opposed to the downstream training data. That makes it difficult to measure them in practical settings where we have a limited number of data points. Classical results in learning theory avoid this and only use measures that depend on the given training set. - **The empirical evaluation could benefit from stronger baselines.** The authors mentioned "We therefore consider only baselines that make minimal assumptions about the pre-trained feature representation and the target task" and hence avoided comparing to many prior methods. However, I think the appropriate approach would be to compare the performance of the proposed method to strong baselines and then explain how they differ in terms of their assumptions, etc. Moreover, there are other simple heuristic baselines to consider, e.g., K-NN (which is not computationally expensive in the few-shot setting) or a classifier that is trained by initializing it to be the sum of feature vectors in the first class (assuming binary classification) minus the sum of feature vectors in the second class and doing a few SGD updates on it. Therefore, I believe the authors could improve the empirical section significantly by taking these suggestions into account. **Final Decision Rationale**: This is a borderline paper. While the paper has a nice combination of theoretical and empirical contributions, both the theoretical and empirical contributions have a lot of room for improvement (and a clear path to get there), as pointed out above. 
In particular, I believe having either strong theoretical contributions or strong empirical contributions would have been enough for acceptance, and I hope the authors will take the above suggestions into account and submit an improved version of this work!
val
[ "pPhhxfrfP-9", "3-X8hs-yuN_", "1VUpeTzugId", "sCC96kH21i", "fpSd365WGyq", "M5LNCUWvVw2", "6BVK83f9CpZ", "hP3zXHnAurL", "1BMg-uqDCla", "tVzwHU-0n5N", "hJCINj3Xbjf", "E0_U4iiKGbS", "xG4diWfR5Ed", "HHZQHj3Js5z", "oAm96E9S5r5", "oEP36Ic2hic", "ctyugJ7UBt9", "LJs3pypvMLx", "Lxfc3-i-n_...
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_re...
[ "The paper presents two properties, \"local alignment\" (eq 3) and \"degree of congregation\" (eq 4) which are claimed to be good predictors of downstream (classification) task performance. These properties are used to derive bounds on the error of downstream classifiers (under a number of assumptions, including a ...
[ 6, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, 6 ]
[ 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 3 ]
[ "iclr_2022_7ADMMyZpeY", "iclr_2022_7ADMMyZpeY", "sCC96kH21i", "6BVK83f9CpZ", "oAm96E9S5r5", "hP3zXHnAurL", "1BMg-uqDCla", "ctyugJ7UBt9", "oEP36Ic2hic", "hJCINj3Xbjf", "E0_U4iiKGbS", "foBu6yq2XX", "pPhhxfrfP-9", "iclr_2022_7ADMMyZpeY", "LJs3pypvMLx", "3-X8hs-yuN_", "Lxfc3-i-n_", "ic...
iclr_2022_-BTmxCddppP
Revisiting Out-of-Distribution Detection: A Simple Baseline is Surprisingly Effective
It is an important problem in trustworthy machine learning to recognize out-of-distribution (OOD) inputs which are inputs unrelated to the in-distribution task. Many out-of-distribution detection methods have been suggested in recent years. The goal of this paper is to recognize common objectives as well as to identify the implicit scoring functions of different OOD detection methods. In particular, we show that binary discrimination between in- and (different) out-distributions is equivalent to several different formulations of the OOD detection problem. When trained in a shared fashion with a standard classifier, this binary discriminator reaches an OOD detection performance similar to that of Outlier Exposure. Moreover, we show that the confidence loss which is used by Outlier Exposure has an implicit scoring function which differs in a non-trivial fashion from the theoretically optimal scoring function in the case where training and test out-distribution are the same, but is similar to the one used when training with an extra background class. In practice, when trained in exactly the same way, all these methods perform similarly and reach state-of-the-art OOD detection performance.
Reject
The paper contributes to the understanding of out-of-distribution detection by showing that binary discrimination between in- and out-distribution examples 'is equivalent to several different formulations of the out-of-distribution detection problem'. The paper shows this in an asymptotic setup based on studying likelihood ratios for distinguishing in-distribution examples from out-of-distribution examples. The paper also provides numerical results showing that a simple baseline based on binary classification works well. The paper got very mixed responses ranging from strong accept to reject: - Reviewer YhZ7 (recommending 3: reject) raises several important concerns, specifically that the paper doesn't explain the significance of its contributions adequately, that experiments are not thorough enough (for example that only one out-of-distribution dataset is considered), and that to train a binary classifier one needs to have sufficiently many out-of-distribution examples. The authors argued in response that the purpose of the paper is to provide an understanding of existing methods that are often empirically driven, made revisions to the exposition, and point out that they actually evaluate on six/seven out-of-distribution test sets. After discussion, the reviewer is still concerned that the paper states 'We show that when training the binary discriminator between in- and out-distribution together with a standard classifier on the in-distribution in a shared fashion, the binary discriminator reaches state-of-the-art OOD detection performance' as a contribution and that this claim is not supported by the results in the paper. The authors say they are happy to drop this particular statement and emphasize that their contribution is that that a binary classifier can be a useful tool for OOD detection. The reviewer is not satisfied by this response, as the reviewer feels that this makes the contribution much less impactful. 
- Reviewer iH61 (recommending 6: marginally above, initially reject) pointed out that the significance of one of the contributions is limited, since the claims resemble the ones by Thulasidasan et al. [2021] and Mohseni et al. [2020], and initially recommended rejection. The authors respond that those two papers only aim at good performance, but do not unify existing approaches, as the paper under review does. The reviewer slightly raised their score, but again points out that the previous works already show that a binary discriminator performs well. - Reviewer Lwwq (recommending 10: strong accept) appreciates the unification of different methods and votes for strong acceptance. The reviewer also points out that he/she is not an expert in the field, and thus this reviewer's rating should be taken with care. - Reviewer YRfA (recommending 8: accept) points out that the authors make notable progress towards a better understanding of OOD methods, but is concerned about what problem the authors are trying to solve and its significance, and states that he/she cannot judge the importance of the paper. - Reviewer vYWv (recommending 6: marginally above, initially recommending reject) finds that the paper provides helpful insights to connect methods for OOD detection tasks, and weakly recommends acceptance. The reviewers' opinions on this paper vary significantly. Initially, a major selling point of the paper was that 'the binary discriminator reaches state-of-the-art OOD detection performance', but after discussion, the authors and reviewers agree that this statement is not supported by experiments, and the idea of using a binary discriminator is also not new, and thus everyone agrees that this statement should be removed. This leaves as the major contribution an improved understanding of a variety of methods, and casting them as versions of a binary classifier. 
This by itself would be sufficient to carry a paper; however, the stated equivalence is rather weak, as it is based on an asymptotic analysis, and in the asymptotic regime, out-of-distribution detection is rather trivial because the distributions are given. This also explains why, in the paper's experiments, the methods that are asymptotically related behave quite differently. I do not recommend this paper for acceptance. I've read the paper and I've thought about it and its reviews for quite a while. I have also discussed the paper with a colleague who works actively on out-of-distribution detection, since I'm not an expert on this topic myself. While in general I find it very valuable to unify and to understand existing out-of-distribution algorithms better, I don't see how the particular interpretation provided by the paper is impactful, since it is unclear how the connection drawn in an asymptotic setup for Bayes classifiers actually extends to concrete OOD detection algorithms, which operate in the finite sample regime.
train
[ "bShsagVOtaq", "ub7ribjNA4W", "d6NVS5Ms1Rc", "_18GvM00n5V", "JsQZDnj4jQ8", "XNkQPwhA2Cl", "j8WWCCUAkzv", "mfPtaCftrQ8", "_g9Qf1Zo9sX", "nfg62Y9b3rp", "H9geJQkOgAL", "9Wq0Yg8SvA1", "7c_97yJAGN", "3Smu-PuDdeQ", "_MFqeh1Pb-o", "iJ5tOboQrz8", "jV5s7YOYnM5", "PpcV4jgVUFE", "kCMRvOBV3b...
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We much appreciate the increased score and would like to resolve the remaining concern.\n\nAs also mentioned in the discussion with Reviewer YhZ7, we agree that the third contribution indeed needs to be revised.\nWe would like to still emphasize in the contributions the important observation that a straightforwar...
[ -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 10, 8 ]
[ -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 3, 2 ]
[ "j8WWCCUAkzv", "_18GvM00n5V", "JsQZDnj4jQ8", "3Smu-PuDdeQ", "7c_97yJAGN", "iclr_2022_-BTmxCddppP", "XNkQPwhA2Cl", "iclr_2022_-BTmxCddppP", "kCMRvOBV3b1", "PpcV4jgVUFE", "XNkQPwhA2Cl", "XNkQPwhA2Cl", "jV5s7YOYnM5", "jV5s7YOYnM5", "iJ5tOboQrz8", "iclr_2022_-BTmxCddppP", "iclr_2022_-BTm...
iclr_2022_8gX3bY78aCb
Molecular Graph Representation Learning via Heterogeneous Motif Graph Construction
We consider feature representation learning of molecular graphs. Graph Neural Networks have been widely used for feature representation learning of molecular graphs. However, most proposed methods focus on individual molecular graphs while neglecting their connections, such as motif-level relationships. We propose a novel molecular graph representation learning method that constructs a Heterogeneous Motif graph (HM-graph) to address this issue. In particular, we build an HM-graph that contains motif nodes and molecular nodes. Each motif node corresponds to a motif extracted from molecules. Then, we propose a Heterogeneous Motif Graph Neural Network (HM-GNN) to learn feature representations for each node in the HM-graph. Our HM-graph also enables effective multi-task learning, especially for small molecular datasets. To address the potential efficiency issue, we propose an edge sampler, which significantly reduces computational resource usage. The experimental results show that our model consistently outperforms previous state-of-the-art models. Under multi-task settings, the promising performance of our methods on combined datasets sheds light on a new learning paradigm for small molecular datasets. Finally, we show that our model achieves similar performance with significantly fewer computational resources by using our edge sampler.
Reject
The paper introduces a graph neural network for molecules which takes into account motif-level relationships. The paper received borderline reviews, with three reviewers voting for reject, and one for accept. After the rebuttal, the reviewers did not change their scores. Overall, it seems that the paper has some merit, with good experimental results. Nevertheless, it suffers from two issues (i) the positioning with respect to other motif-based approaches is not clear enough, making the novelty hard to assess; (ii) there is a lot of room for improvement in terms of clarity. Therefore, the area chair follows the majority of the reviewers' recommendations and recommends a reject.
train
[ "1iNnBMWYyV5", "XM7xnTduhNq", "jcmxOAQxgrC", "-BxTB0R3K94", "SrWivroLSt", "M7hlcivUy1S", "9LPtAWsCqsD", "d3cyCPe86tu", "qmGzrpBsgBh", "7Gy3ryIgWL" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I have read the authors' rebuttal. Though the authors emphasized their contribution of this paper, I don't think the difference between their work and the literature using motifs is significant. Moreover, the authors did not clearly address my concern 3. For some of my concerns, the authors said that they will ad...
[ -1, -1, -1, -1, -1, -1, 5, 6, 3, 5 ]
[ -1, -1, -1, -1, -1, -1, 4, 5, 4, 4 ]
[ "-BxTB0R3K94", "d3cyCPe86tu", "7Gy3ryIgWL", "7Gy3ryIgWL", "qmGzrpBsgBh", "9LPtAWsCqsD", "iclr_2022_8gX3bY78aCb", "iclr_2022_8gX3bY78aCb", "iclr_2022_8gX3bY78aCb", "iclr_2022_8gX3bY78aCb" ]
iclr_2022_bYfk8y7BXS
Pessimistic Model Selection for Offline Deep Reinforcement Learning
Deep Reinforcement Learning (DRL) has demonstrated great potential in solving sequential decision-making problems in many applications. Despite its promising performance, practical gaps exist when deploying DRL in real-world scenarios. One main barrier is the over-fitting issue that leads to poor generalizability of the policy learned by DRL. In particular, for offline DRL with observational data, model selection is a challenging task, as there is no ground truth available for performance demonstration, in contrast with the online setting with simulated environments. In this work, we propose a pessimistic model selection (PMS) approach for offline DRL with a theoretical guarantee, which features a tuning-free framework for finding the best policy among a set of candidate models. Two refined approaches are also proposed to address the potential bias of the DRL model in identifying the optimal policy. Numerical studies demonstrate the superior performance of our approach over existing methods.
Reject
The paper proposes a new approach called pessimistic model selection (PMS) for model selection in offline RL and tests it in 6 different environments. Under certain assumptions this allows theoretical results that the best model is recovered with high probability. Several points were raised by the reviewers and maintained after the rebuttal: - Theoretical results were considered weak as they only hold asymptotically. - Experimental results limited (potentially different regret scales, no sufficient comparison to other baselines). - Exposition of the paper that needs to be improved. Given the strong consensus among the reviewer I recommend rejecting this paper.
train
[ "gFSrsmnYHtX", "LKnoAJiL4ox", "Z-w71J8j8bG", "7gDbMs-spWN", "_irQixSE6ME", "vxQcH27J7aA", "j-OpjypHUrt", "kJk1rjVI8rQ", "nIiP2jYJ7ua", "JCeRarEgy9U", "oKmy59lZ77g", "fcC4tus_eD6", "uUlfA9tJDwQ" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer Wrcc,\n\nThank you for your follow-up suggestion. We provide related feedback and clarification below. \n\n- Value of $\\alpha$ and $O$ \n\nThe main focus of our work is on the theoretical approaches and findings. For our experiments, we fixed the $\\alpha$ to $0.01$ and $O$ to $20$ for all environ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 3 ]
[ "7gDbMs-spWN", "Z-w71J8j8bG", "kJk1rjVI8rQ", "nIiP2jYJ7ua", "uUlfA9tJDwQ", "j-OpjypHUrt", "fcC4tus_eD6", "oKmy59lZ77g", "JCeRarEgy9U", "iclr_2022_bYfk8y7BXS", "iclr_2022_bYfk8y7BXS", "iclr_2022_bYfk8y7BXS", "iclr_2022_bYfk8y7BXS" ]
iclr_2022__7YnfGdDVML
DCoM: A Deep Column Mapper for Semantic Data Type Detection
Detection of semantic data types is a crucial task in data science for automated data cleaning, schema matching, data discovery, semantic data type normalization and sensitive data identification. Existing methods include regular-expression-based or dictionary-lookup-based methods that are not robust to dirty as well as unseen data and are limited to a small number of semantic data types to predict. Existing machine learning methods extract a large number of engineered features from data and build logistic regression, random forest or feedforward neural network models for this purpose. In this paper, we introduce DCoM, a collection of multi-input NLP-based deep neural networks to detect semantic data types where, instead of extracting a large number of features from the data, we feed the raw values of columns (or instances) to the model as texts. We train DCoM on 686,765 data columns extracted from the VizNet corpus with 78 different semantic data types. DCoM outperforms other contemporary results by a significant margin on the same dataset, achieving a support-weighted F1 score of 0.925.
Reject
The paper studies semantic type detection. The problem is of practical significance to tabular data. However, in its current form, there are concerns about the scope of novelty and technical significance.
train
[ "MmcqEehi49x", "zfR11t7U9ZY", "jcYrhe_cXga", "Me5-HDVn9h", "wNq0QdwSS6c", "ZhtMCE6eYLm", "k75nYXKXCGP", "DOjIs5_ryOU" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the author response. \n\n> 1 \nTable 2 in the revised version shows inconsistent results compared to the results in the Sato (Zhang et al. 2019), TURL (Deng et al. 2020), and Doduo (Suhara et al. 2021) papers. It is shown that \n\n- Sato >> Sherlock (90.2 vs 86.7 F1 on the VizNet dataset) in (Zhang ...
[ -1, -1, -1, -1, -1, 3, 3, 6 ]
[ -1, -1, -1, -1, -1, 4, 5, 4 ]
[ "jcYrhe_cXga", "wNq0QdwSS6c", "k75nYXKXCGP", "ZhtMCE6eYLm", "DOjIs5_ryOU", "iclr_2022__7YnfGdDVML", "iclr_2022__7YnfGdDVML", "iclr_2022__7YnfGdDVML" ]
iclr_2022_IPy3URgH47U
ACTIVE REFINEMENT OF WEAKLY SUPERVISED MODELS
Supervised machine learning (ML) has fueled major advances in several domains such as health, education and governance. However, most modern ML methods rely on vast quantities of point-by-point hand-labeled training data. In domains such as clinical research, where data collection and its careful characterization is particularly expensive and tedious, this reliance on pointillistically labeled data is one of the biggest roadblocks to the adoption of modern data-hungry ML algorithms. Data programming, a framework for learning from weak supervision, attempts to overcome this bottleneck by generating probabilistic training labels from simple yet imperfect heuristics obtained a priori from domain experts. We present WARM, Active Refinement of Weakly Supervised Models, a principled approach to iterative and interactive improvement of weakly supervised models via active learning. WARM directs domain experts' attention to a few selected data points that, when annotated, would most improve the accuracy of the label model's probabilistic output. Gradient backpropagation is then used to iteratively update decision parameters of the heuristics of the label model. Experiments on multiple real-world medical classification datasets reveal that WARM can substantially improve the accuracy of probabilistic labels, a direct measure of training data quality, with as few as 30 queries to clinicians. Additional experiments with domain shift and artificial noise in the LFs demonstrate WARM's ability to adapt heuristics and the end model to changing population characteristics, as well as its robustness to mis-specification of domain-expert-acquired LFs. These capabilities make WARM a potentially useful tool for deploying, maintaining, and auditing weakly supervised systems in practice.
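The core mechanism this abstract describes — converting a hard threshold heuristic into a differentiable "soft" labeling function whose decision parameter can then be moved by gradient backpropagation — can be illustrated with a minimal sketch. The threshold-style heuristic, the sigmoid relaxation and its temperature are illustrative assumptions, not WARM's actual implementation:

```python
import math

def hard_lf(x, threshold):
    """Hard labeling function: votes positive iff the feature exceeds a threshold."""
    return 1.0 if x > threshold else 0.0

def soft_lf(x, threshold, temperature=0.1):
    """Soft relaxation: a sigmoid centered at the threshold, differentiable
    in `threshold`, so annotated points can move the cutoff by gradient descent."""
    return 1.0 / (1.0 + math.exp(-(x - threshold) / temperature))

def soft_lf_grad_threshold(x, threshold, temperature=0.1):
    """Analytic d soft_lf / d threshold -- the quantity backpropagation would
    use to update the heuristic's decision parameter from a queried label."""
    s = soft_lf(x, threshold, temperature)
    return -s * (1.0 - s) / temperature
```

A queried data point far from the cutoff produces a near-zero gradient and barely moves the threshold; a point near the cutoff produces the largest update, which is why querying uncertain points is informative.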
Reject
The authors propose WARM, a novel method that actively queries a small set of true labels to improve the label function in weak supervision. In particular, the authors propose a methodology that converts the label functions to "soft" versions that are differentiable, which are in turn learnable with true labels using proper parameter updates. Empirical results on several real-world data sets demonstrate that the method yields strong performance. The reviewers generally agree that the idea of making the labeling functions differentiable is conceptually interesting. They are also positive about the simplicity and the promising performance. They share concerns about whether the idea has been sufficiently studied in terms of the design choices and the completeness of the experiments. For instance, the authors could conduct a deeper exploration of the trade-offs of differentiable LFs. They could also study active learning strategies beyond basic uncertainty sampling. While the authors provided additional exploration and ablation studies during the rebuttal, the results are generally not sufficient to convince most of the reviewers. In future revisions, the authors are encouraged to clarify the paper's position with respect to existing works that combine active learning and weakly-supervised learning. The authors position the paper as more empirical than theoretical, so the suggestion from some reviewers about more theoretical study is viewed as nice-to-have but not a must.
train
[ "yz4noJflEX2", "lXD7sebtxNW", "SqE4zqEiGgX", "HXHZyrWBceV", "XHi4d1302Om", "ltdg0R9BYf5", "lROjwVby-F5", "_L1HNrUAGF1", "mXwmHX4o3tu", "5Ig1AHj2rU", "bTe_sloBxnv", "ygNI_oGHEs-", "SovRYRQyyCR" ]
[ "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for taking time to go through our work and for their constructive comments. \n> \"*(W1) Simple method*\"\n\nWe agree with the reviewer that the method is simple, but firmly believe WARM's simplicity is its asset. We also agree with Reviewer 379M \nthat our method's design enables users of th...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 2, 5, 3, 3, 4 ]
[ "5Ig1AHj2rU", "HXHZyrWBceV", "ygNI_oGHEs-", "SqE4zqEiGgX", "SovRYRQyyCR", "iclr_2022_IPy3URgH47U", "bTe_sloBxnv", "mXwmHX4o3tu", "iclr_2022_IPy3URgH47U", "iclr_2022_IPy3URgH47U", "iclr_2022_IPy3URgH47U", "iclr_2022_IPy3URgH47U", "iclr_2022_IPy3URgH47U" ]
iclr_2022_AlPBx2zq7Jt
Align-RUDDER: Learning From Few Demonstrations by Reward Redistribution
Reinforcement Learning algorithms require a large number of samples to solve complex tasks with sparse and delayed rewards. Complex tasks are often hierarchically composed of sub-tasks. Solving a sub-task increases the return expectation and leads to a step in the $Q$-function. RUDDER identifies these steps and then redistributes reward to them, thus immediately giving reward if sub-tasks are solved. Since the delay of rewards is reduced, learning is considerably sped up. However, for complex tasks, current exploration strategies struggle with discovering episodes with high rewards. Therefore, we assume that episodes with high rewards are given as demonstrations and do not have to be discovered by exploration. Unfortunately, the number of demonstrations is typically small and RUDDER's LSTM as a deep learning model does not learn well on these few training samples. Hence, we introduce Align-RUDDER, which is RUDDER with two major modifications. First, Align-RUDDER assumes that episodes with high rewards are given as demonstrations, replacing RUDDER’s safe exploration and lessons replay buffer. Second, we substitute RUDDER’s LSTM model by a profile model that is obtained from multiple sequence alignment of demonstrations. Profile models can be constructed from as few as two demonstrations. Align-RUDDER uses reward redistribution to speed up learning by reducing the delay of rewards. Align-RUDDER outperforms competitors on complex artificial tasks with delayed rewards and few demonstrations. On the MineCraft ObtainDiamond task, Align-RUDDER is able to mine a diamond, though not frequently.
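The reward-redistribution step described above — moving a delayed episodic return onto the timesteps where the profile-model alignment score jumps, i.e., where a sub-task is solved — can be sketched as follows. The alignment scores are taken as given inputs here; computing them via multiple sequence alignment of demonstrations is the part this sketch omits, and the proportional-to-increase rule is an illustrative simplification of Align-RUDDER's redistribution:

```python
def redistribute_reward(episode_return, alignment_scores):
    """Redistribute a delayed episodic return across timesteps in proportion
    to the increase of the profile-model alignment score at each step.

    alignment_scores[t]: cumulative alignment of the episode prefix up to
    step t against the profile model built from demonstrations.
    """
    increases = [max(alignment_scores[0], 0.0)] + [
        max(alignment_scores[t] - alignment_scores[t - 1], 0.0)
        for t in range(1, len(alignment_scores))
    ]
    total = sum(increases)
    if total == 0.0:  # no sub-task progress detected: keep reward at the end
        return [0.0] * (len(alignment_scores) - 1) + [episode_return]
    return [episode_return * inc / total for inc in increases]
```

With `redistribute_reward(10.0, [0.0, 2.0, 2.0, 5.0])`, the steps where the alignment jumps (t=1 and t=3) receive all the reward mass, so learning no longer has to bridge the full delay.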
Reject
## A Brief Summary
This paper proposes two critical modifications to the original RUDDER algorithm:
1. Proposes the Align-RUDDER method, which assumes that the episodes with high rewards can be used as demonstrations.
2. Uses a profile model from the multiple sequence alignment (MSA) approach to align the demonstrations and redistribute the rewards according to how frequently events in the demos are shared across different demonstrators. MSA is used as a profile model instead of an LSTM.

The paper uses successor features to represent state-action pairs, which are then used to compute the similarity matrix for MSA. The paper shows promising results in the Minecraft environment (ObtainDiamond task) as well as synthetic grid-world environments.

## Reviewer bJbP
*Strengths:*
- Empirical evaluation is well done.

*Weaknesses:*
- The writing requires more work.
- Limited experiments: mostly on toy grid-world/navigation environments; it is not clear if the results will generalize to control problems.

## Reviewer mK3T
*Strengths:*
- Simple and effective technique for identifying sub-goals.
- Large improvements over the original RUDDER.
- Impressive results on Minecraft.

*Weaknesses:*
- More thorough ablations on the importance of different elements of Align-RUDDER are needed.
- Presentation and writing need improvement.
- The assumption of a single underlying successful strategy is an important limitation.
- Figure 1 is problematic and confusing because of the way it explains the RUDDER algorithm.

## Reviewer nk2L
*Strengths:*
- Impressive results on Minecraft.

*Weaknesses:*
- Poor justification and motivation.
- RUDDER vs. Align-RUDDER comparisons are only done on two grid-world environments.
- More ablations are required to justify the approach.
- Writing requires more work; some important concepts require more clarity. Some undefined concepts...
- Incorrect claims such as:
> Q-function of an optimal policy resembles a step function

## Reviewer YcqX
*Strengths:*
- Strong motivation.
- MSA for demos is novel.
- Strong experimental results.

*Weaknesses:*
- Several grammatical errors.
- The method is not explained well in the paper; the writing needs more work to improve clarity.
- Lack of sufficient analysis and ablations on the Align-RUDDER approach.

## Key Takeaways and Thoughts
Overall, the result provided in this paper in the Minecraft environment is impressive, and the motivation for Align-RUDDER is clear to me. I like the paper; in particular, the application of MSA for alignments across the demos is novel. However, all the reviewers agreed that the paper is unclear; the method description especially requires more work. The paper needs to present more ablations and analysis to justify which components of the Align-RUDDER algorithm are essential. I agree with both points: the authors have made improvements to the exposition of the algorithms, but the paper still feels a bit rushed. I would recommend the authors reconsider the paper's current structure and further improve the writing, especially the description of the method. I would recommend that the authors fix these essential issues, along with the other comments the reviewers made, in a future resubmission.
train
[ "-vYt-EmjdY9", "SkZIb72M_BK", "meFhKin1ns6", "esmmg_lTEXZ", "glM3VGMOt0", "ng9FsB0WhO5", "X62QZIhSKeX", "rQa1LIRsDpP", "L131HgPgkFH", "kgWxr-8Ciin", "X-yMflaTztf", "lyd4yXJsCXE", "CQayWYUI9Li", "eqMaJWZqM40", "IS0IUU7saP" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for the revised version. While the readability of the draft has improved, the presentation of the paper leaves scope for further improvement. Additionally, the current version of the paper focuses primarily on Minecraft tasks. To assess the general performance of the algorithm, it would be useful to hav...
[ -1, -1, 6, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, 6, 3 ]
[ -1, -1, 3, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "lyd4yXJsCXE", "L131HgPgkFH", "iclr_2022_AlPBx2zq7Jt", "ng9FsB0WhO5", "X62QZIhSKeX", "rQa1LIRsDpP", "iclr_2022_AlPBx2zq7Jt", "kgWxr-8Ciin", "IS0IUU7saP", "meFhKin1ns6", "X62QZIhSKeX", "eqMaJWZqM40", "iclr_2022_AlPBx2zq7Jt", "iclr_2022_AlPBx2zq7Jt", "iclr_2022_AlPBx2zq7Jt" ]
iclr_2022_f9JwVXMJ1Up
The Needle in the haystack: Out-distribution aware Self-training in an Open-World Setting
Traditional semi-supervised learning (SSL) has focused on the closed-world assumption where all unlabeled samples are task-related. In practice, this assumption is often violated when leveraging data from very large image databases that contain mostly non-task-relevant samples. While standard self-training and other established methods fail in this open-world setting, we demonstrate that our out-distribution-aware self-learning (ODST) with a careful sample selection strategy can leverage unlabeled datasets with millions of samples, more than 1600 times larger than the labeled datasets, which contain only about $2\%$ task-relevant inputs. Standard and open-world SSL techniques degrade in performance when the ratio of task-relevant samples decreases and show a significant distribution shift, which is problematic regarding AI safety, while ODST outperforms them with respect to test performance, corruption robustness and out-of-distribution detection.
Reject
This paper proposes a method for self-training in an open-world setting where a significant portion of unlabeled data might include examples that are not task related. The proposed method (ODST) uses a more accurate OOD detection technique, which allows improved sample selection and leads to higher accuracy.

Strong points:
- This paper studies a very important and impactful problem.
- The paper is well written.
- The empirical results show that the proposed method improves over prior work.
- To better understand the iterative scheme, the authors provide a theoretical analysis using Bayesian decision theory.

Weak points:
- Novelty: Given prior work on different variants of noisy students, this work has limited novelty.
- Dataset diversity: The main results are provided for the CIFAR-10 and CIFAR-100 datasets, which are very similar to each other. During the discussion period, the authors added results on the SVHN dataset, but the accuracy gap between the proposed method and FixMatch is insignificant (the FPR gap is higher, but since the main goal is improving performance, I think accuracy is the more important measure here).
- Connecting theoretical results to the rest of the paper: The paper could be improved significantly if the theoretical results were more connected to the rest of the paper, and in particular to the proposed algorithm.

While 4 out of 5 reviewers are recommending rejection, I think this was a very close decision. Most reviewers were concerned with novelty, which I think is a valid point. Given that, and the fact that the theoretical results are very limited, showing strong empirical results is required to accept this paper. Even though the provided results on the CIFAR datasets are strong, the result on SVHN does not show a significant improvement. I understand that running experiments on ImageNet might not be budget-friendly; however, it is possible to run similar experiments on other datasets to show the robustness of the proposed method to the choice of dataset. Consequently, I recommend rejecting the paper and suggest that the authors resubmit after adding more datasets to their evaluation.
train
[ "WV4T7-4vzb1", "uy-pZMoIqVJ", "dPXdX8x6ivT", "oP-DueZM6WX", "-1gun9cWoez", "QXYWSkOEYl", "YUzEqDS5pEl", "qirQI2bAMNC", "oCm8U7rTTi", "1HmfvKGzceK", "y9Qf8uCFDz", "_JvirAYZZ8r", "K8yWHpFCAIV", "uUPgJn6WQxv", "EyUr9qQrc2e", "TR8HEpXZ8Pl", "JX5G5vidv2b", "FWazKsZMI0d", "tlLuUL0g20",...
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", ...
[ " Thanks for your kind reply. After careful reviews, I think given the current form of presentation along with the novelty, it is still hard for me to recommend the acceptance. Firstly, I did not see any attached proofs in your supplementary material that support the Lemmas. Secondly, Lemma 3.2 seems a bit too magi...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 8, 5, 3, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 2, 4 ]
[ "dPXdX8x6ivT", "K8yWHpFCAIV", "-1gun9cWoez", "QXYWSkOEYl", "y9Qf8uCFDz", "uUPgJn6WQxv", "qirQI2bAMNC", "oCm8U7rTTi", "4X-eAcU6z8z", "tlLuUL0g20", "_JvirAYZZ8r", "FWazKsZMI0d", "JX5G5vidv2b", "EyUr9qQrc2e", "TR8HEpXZ8Pl", "iclr_2022_f9JwVXMJ1Up", "iclr_2022_f9JwVXMJ1Up", "iclr_2022_...
iclr_2022_buSCIu6izBY
Occupy & Specify: Investigations into a Maximum Credit Assignment Occupancy Objective for Data-efficient Reinforcement Learning
The capability to widely sample the state and action spaces is a key ingredient toward building effective reinforcement learning algorithms. The trade-off between exploration and exploitation generally requires the use of a data model, from which novelty bonuses are estimated and used to bias the return toward wider exploration. Surprisingly, little is known about the optimization objective followed when novelty (or entropy) bonuses are considered. Following the ``probability matching'' principle, we interpret here returns (cumulative rewards) as set points that fix the occupancy of the state space, that is, the frequency at which the different states are expected to be visited during trials. The circular dependence of the reward sampling on the occupancy/policy makes it difficult to evaluate. We provide here a variational formulation for the matching objective, named MaCAO (Maximal Credit Assignment Occupancy), that interprets rewards as a log-likelihood on occupancy and operates anticausally from the effects toward the causes. It is, broadly speaking, an estimation of the contribution of a state toward reaching a (future) goal. It is constructed so as to provide better convergence guarantees, with a complementary term serving as a regularizer that, in principle, may reduce greediness. In the absence of an explicit target occupancy, a uniform prior is used, making the regularizer consistent with a MaxEnt (Maximum Entropy) objective on states. Optimizing the entropy on states is known to be trickier than optimizing the entropy on actions, because of the external sampling through the (unknown) environment, which prevents the propagation of a gradient. In our practical implementations, the MaxEnt regularizer is interpreted as a TD-error rather than a reward, making it possible to define an update in both the discrete and continuous cases.
It is implemented on an actor-critic off-policy setup with a replay buffer, using gradient descent on a multi-layered neural network, and is shown to provide a significant increase in sampling efficacy, which is reflected in reduced training time and higher returns on a set of classical motor learning benchmarks, in both the dense and sparse reward cases.
Reject
While the main idea of the paper (using a Max-Ent objective on the states of an MDP) was considered interesting, all reviewers raised the problem of the clarity of the paper, which needs to be drastically improved. While the writing was improved by the revision, these concerns could not be fully alleviated by the authors' rebuttal. The reviewers agreed that the paper needs rewriting to clarify the contribution before it can be published.
train
[ "y79vqGzgdOF", "RyBUY2qRFL", "1hc1Y4jmL_y", "yS9YF1yNOPE", "LeHF7HZLBYE", "yDGhKahusb8", "898reFF-8Ce", "2HFCEuomMAy", "XXsOfrnfHB-" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "# Summary & Contributions\n* This paper examines a variational approach to reinforcement learning, leveraging occupancy measures over previously visited and future state-action pairs in order to address the exploration challenge.\n* The author propose a variational approximation to the so-called \"conditional occu...
[ 1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ 3, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2022_buSCIu6izBY", "898reFF-8Ce", "yS9YF1yNOPE", "LeHF7HZLBYE", "XXsOfrnfHB-", "y79vqGzgdOF", "2HFCEuomMAy", "iclr_2022_buSCIu6izBY", "iclr_2022_buSCIu6izBY" ]
iclr_2022_g5odb-gVVZY
Multilevel physics informed neural networks (MPINNs)
In this paper we introduce multilevel physics informed neural networks (MPINNs). Inspired by classical multigrid methods for the solution of linear systems arising from the discretization of PDEs, our MPINNs are based on the classical correction scheme, which represents the solution as the sum of a fine and a coarse term that are optimized in an alternating way. We show that the proposed approach allows us to reproduce in the neural network training the classical acceleration effect observed for multigrid methods, thus providing a PINN with improved performance compared to the state-of-the-art. Thanks to the support of the coarse model, MPINNs indeed provide a faster and improved decrease of the approximation error for both elliptic and nonlinear equations.
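The correction scheme described above — writing the solution as the sum of a fine and a coarse term and optimizing them in alternation — can be illustrated on a toy least-squares problem. Here each "model" is just a scalar amplitude of a fixed low- or high-frequency basis function rather than a neural network, and the target is a known function rather than a PDE residual; these are illustrative simplifications, not the MPINN setup:

```python
import math

def alternate_correction_fit(target, xs, n_outer=200, lr=0.5):
    """Toy multilevel correction: represent the solution as the sum of a
    'fine' (high-frequency) and a 'coarse' (low-frequency) component and
    take alternating gradient steps on each, freezing the other."""
    fine_basis = [math.sin(3 * x) for x in xs]   # high-frequency component
    coarse_basis = [math.sin(x) for x in xs]     # low-frequency component
    a_fine, a_coarse = 0.0, 0.0
    for _ in range(n_outer):
        # Fine sweep: one gradient step on the fine amplitude, coarse frozen.
        residual = [target[i] - a_fine * fine_basis[i] - a_coarse * coarse_basis[i]
                    for i in range(len(xs))]
        a_fine += lr * sum(r * b for r, b in zip(residual, fine_basis)) / len(xs)
        # Coarse sweep: one gradient step on the coarse amplitude, fine frozen.
        residual = [target[i] - a_fine * fine_basis[i] - a_coarse * coarse_basis[i]
                    for i in range(len(xs))]
        a_coarse += lr * sum(r * b for r, b in zip(residual, coarse_basis)) / len(xs)
    return a_fine, a_coarse
```

On a target that mixes the two frequencies, the alternating sweeps recover each amplitude; the coarse sweep absorbs the smooth error component that plain gradient descent on the fine model alone reduces slowly, which is the multigrid acceleration effect the abstract refers to.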
Reject
The paper develops an instance of physics-informed neural network inspired by multigrid methods for solving PDEs. The proposed framework describes the solution of a PDE problem as the sum of terms operating at different resolutions. Training is performed by an iterative optimization algorithm that alternates between the different resolution models. Experiments are performed on 1D and 2D problems. All the reviewers agree on the originality and the potential of the proposed method. However, they all consider that the current version of the work is too preliminary in both form and content. The experimental contribution should be developed further, with tests performed on more complex problems and complementary analyses. Some of the claims should be given more evidence or moderated. It also appeared during the discussion that the models are not well tuned, making the results inconclusive. The authors are encouraged to develop and strengthen their work.
train
[ "Jhiov988d5k", "n8L2YISnKSJ", "q1S4zSzM3OA", "xKhfT6Qskki", "JhgksPiQIp", "7iTLLHB4QF8", "K-A1LliBGsY", "_XlD4QYdG7a", "zmPDUSo9VYj", "8YUOMFl7Cb", "l7dPuEhkaK5", "gCoe9kW5Fgj", "DfCfhaSizsh", "890mE_rmELb" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We run some additional tests for the case h=50, H=25, for which we did a grid-search for the parameters gamma and step_size in scheduler = StepLR(optimizer, step_size, gamma), the results can be found here: https://www.overleaf.com/read/cgkydxpdkqdr. MPINN reaches the same accuracy as PINN but in less iterations....
[ -1, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, 3, 5, 5 ]
[ -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "q1S4zSzM3OA", "JhgksPiQIp", "xKhfT6Qskki", "gCoe9kW5Fgj", "8YUOMFl7Cb", "iclr_2022_g5odb-gVVZY", "zmPDUSo9VYj", "890mE_rmELb", "7iTLLHB4QF8", "DfCfhaSizsh", "iclr_2022_g5odb-gVVZY", "iclr_2022_g5odb-gVVZY", "iclr_2022_g5odb-gVVZY", "iclr_2022_g5odb-gVVZY" ]
iclr_2022_uHv20yi8saL
Monotonic Improvement Guarantees under Non-stationarity for Decentralized PPO
We present a new monotonic improvement guarantee for optimizing decentralized policies in cooperative Multi-Agent Reinforcement Learning (MARL), which holds even when the transition dynamics are non-stationary. This new analysis provides a theoretical understanding of the strong performance of two recent actor-critic methods for MARL, i.e., Independent Proximal Policy Optimization (IPPO) and Multi-Agent PPO (MAPPO), which both rely on independent ratios, i.e., computing probability ratios separately for each agent's policy. We show that, despite the non-stationarity that independent ratios cause, a monotonic improvement guarantee still arises as a result of enforcing the trust region constraint over joint policies. We also show this trust region constraint can be effectively enforced in a principled way by bounding independent ratios based on the number of agents in training, providing a theoretical foundation for proximal ratio clipping. Moreover, we show that the surrogate objectives optimized in IPPO and MAPPO are essentially equivalent when their critics converge to a fixed point. Finally, our empirical results support the hypothesis that the strong performance of IPPO and MAPPO is a direct result of enforcing such a trust region constraint via clipping in centralized training, and that good values of the hyperparameters for this enforcement are highly sensitive to the number of agents, as predicted by our theoretical analysis.
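The clipping rule this abstract refers to — bounding each agent's independent probability ratio based on the number of agents so that the joint-policy ratio (the product of per-agent ratios) stays inside a trust region — can be sketched as follows. The per-agent band of $(1\pm\epsilon)^{1/n}$ is one natural choice consistent with the stated idea, used here purely for illustration of the bound, not as the paper's exact formula:

```python
def clip_independent_ratios(ratios, eps=0.2):
    """Clip each agent's probability ratio to (1 +/- eps)**(1/n), where n is
    the number of agents, so the joint ratio (the product over agents) is
    guaranteed to stay within [1 - eps, 1 + eps]."""
    n = len(ratios)
    lo, hi = (1.0 - eps) ** (1.0 / n), (1.0 + eps) ** (1.0 / n)
    return [min(max(r, lo), hi) for r in ratios]

def joint_ratio(ratios):
    """Joint-policy probability ratio: product of the independent ratios."""
    out = 1.0
    for r in ratios:
        out *= r
    return out
```

Note that the per-agent band shrinks as `n` grows, which matches the observation that good clipping hyperparameters are highly sensitive to the number of agents.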
Reject
The submission cannot be accepted as there seems to be a mistake in the proof of the main contribution (Theorem 2).
train
[ "CTJPZ3NomOB", "pvRxk6GS6IQ", "drkUjYZ_5RP", "GOmfVpHoXGh", "dNNT_8LZa4n", "hIht6m8KQ78", "OgzvN8Hm1-u", "GK8aLS0tVIL", "2lYM15sRPuf", "Or710EnAtkt", "9sV3bFhN2YT", "uY-edYzWZcw", "PsiXQeM8IQO", "UdUSiVEdEAG" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper tries to provide a theoretical monotonic improvement guarantees for IPPO and MAPPO and shows enforcing independent trust region constraints could enforce the trust region constraint over joint policies. The empirical results are also provided to support the hypothesis. **Strengths**\n\n- Decentralized ...
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 2 ]
[ "iclr_2022_uHv20yi8saL", "drkUjYZ_5RP", "GOmfVpHoXGh", "dNNT_8LZa4n", "hIht6m8KQ78", "OgzvN8Hm1-u", "2lYM15sRPuf", "PsiXQeM8IQO", "CTJPZ3NomOB", "UdUSiVEdEAG", "uY-edYzWZcw", "iclr_2022_uHv20yi8saL", "iclr_2022_uHv20yi8saL", "iclr_2022_uHv20yi8saL" ]
iclr_2022_jgAl403zfau
HALP: Hardware-Aware Latency Pruning
Structural pruning can simplify network architecture and improve inference speed. We propose Hardware-Aware Latency Pruning (HALP), which formulates structural pruning as a global resource allocation optimization problem, aiming at maximizing accuracy while constraining latency under a predefined budget. For filter importance ranking, HALP leverages a latency lookup table to track latency reduction potential and a global saliency score to gauge accuracy drop. Both metrics can be evaluated very efficiently during pruning, allowing us to reformulate global structural pruning as a reward maximization problem under a target constraint. This makes the problem solvable via our augmented knapsack solver, enabling HALP to surpass prior work in pruning efficacy and accuracy-efficiency trade-off. We examine HALP on both classification and detection tasks, over varying networks, on the ImageNet1K and VOC datasets. In particular, for ResNet-50/-101 pruning on ImageNet1K, HALP improves network speed by $1.60\times$/$1.90\times$ with $+0.3\%$/$-0.2\%$ top-1 accuracy changes, respectively. For SSD pruning on VOC, HALP improves throughput by $1.94\times$ with only a $0.56$ mAP drop. HALP consistently outperforms prior art, sometimes by large margins.
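The selection problem at the heart of this formulation — keep the subset of neurons that maximizes total importance while the summed latency cost stays under a budget — is a knapsack problem. A minimal greedy sketch of that selection is shown below; HALP's augmented knapsack solver additionally handles grouped neurons and hardware-specific latency steps, so this is only the core idea, not the paper's solver:

```python
def select_neurons(importance, latency_cost, budget):
    """Greedy knapsack sketch: keep neurons with the best importance-per-latency
    ratio until the latency budget is exhausted. Returns the kept indices
    (sorted) and the latency actually used."""
    order = sorted(range(len(importance)),
                   key=lambda i: importance[i] / latency_cost[i],
                   reverse=True)
    kept, used = [], 0.0
    for i in order:
        if used + latency_cost[i] <= budget:
            kept.append(i)
            used += latency_cost[i]
    return sorted(kept), used
```

Greedy-by-ratio is only an approximation of the exact knapsack optimum, which is one reason a dedicated solver matters when latency costs are lumpy (e.g., hardware latency changes in discrete steps as channels are removed).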
Reject
This paper proposes a hardware-aware pruning method which structurally prunes the given deep neural networks to retain their accuracy while satisfying the latency constraints. Specifically, the authors formulate the latency-constrained pruning problem as a combinatorial optimization problem to find the optimal combination of neurons to maximize the sum of the importance scores, and propose an augmented knapsack solver to solve it, as well as a neuron grouping technique to speed up the training. The proposed method is validated for its classification tasks on two devices, namely Titan V and Jetson TX2, and for object detection performance on Titan V, and is shown to achieve superior accuracy/latency tradeoff compared to existing pruning methods, including latency-aware ones. The paper received split reviews initially, and the following is the summary of the pros and cons mentioned by the reviewers. Pros - The proposed formulation of the latency-constrained pruning problem as a constrained knapsack problem is novel. - The method achieves competitive performance against existing latency-constrained pruning methods. - The paper is written well, with clear motivation and descriptions of the proposed method. Cons - The idea is not very exciting since posing pruning as a combinatorial optimization problem, or a knapsack problem is not new, and the proposed method only adds in additional latency constraints. - The title “hardware-aware” is vague and misleading since what the authors do are latency-constrained pruning. - The experimental validation is only done on two devices, which makes the method less convincing as a “hardware-aware” method and how it generalizes to other devices (e.g. CPU, FPGA) - Use of lookup tables to obtain the latency constraints is not novel, has a limited scalability, and is inefficient. - Missing discussion of design choices. 
During the discussion period, the authors cleared away some of the concerns, which resulted in two of the reviewers increasing their scores. However, one reviewer maintained the negative rating of 5, and the positive reviewers were still concerned with limited novelty. I believe that this is a good paper that proposes a neat solution for latency pruning, which may have some practical impact. However, the novelty of the idea is limited, as pointed out by the reviewers. The use of lookup tables also does not seem to be an efficient solution for adapting to edge devices for which the collection of latency measurements could be slow. The experimental validation on only two devices of the same type (GPU) also seems insufficient, as how the method generalizes to diverse devices is uncertain. It would be worthwhile to consider using a latency predictor (e.g. BRP-NAS [Dudziak et al. 20]), and perform experimental validation on diverse hardware platforms (e.g. CPU and FPGA). Comparing against recently proposed hardware-aware NAS methods could be also interesting, as there has been a rapid progress on the topic recently. Thus, despite the overall practicality and the quality of the paper, the paper may benefit from another round of revision, since both the method and the experimental validation part could be improved. [Dudziak et al. 20] BRP-NAS: Prediction-based NAS using GCNs, NeurIPS 2020
val
[ "9-fT2CZnnON", "fcXAeejBHVA", "NflCj0dTuRb", "_-iy2XtEGAp", "fwo6uPcNL8q", "wbR-ifQf_7G", "dCcvCqDZq7P", "gkwAGGa5XcG", "TP80W-9YOME", "rpPQ9j_hnFt", "A9HhfpARUGQ", "b60sW6pMebf", "ZVw-6WTrLBo", "1rOosSNprtm" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewers and ACs, \n\nWe sincerely thank all the reviewers for their valuable comments and suggestions to help improving the paper. We have integrated all the comments into our main paper and additional appendixes. The main changes of the paper are summarized as below:\n\n- Results of pruning a MobileNet-V2...
[ -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6 ]
[ -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, 3, 4, 5 ]
[ "iclr_2022_jgAl403zfau", "dCcvCqDZq7P", "rpPQ9j_hnFt", "iclr_2022_jgAl403zfau", "A9HhfpARUGQ", "_-iy2XtEGAp", "TP80W-9YOME", "1rOosSNprtm", "ZVw-6WTrLBo", "b60sW6pMebf", "_-iy2XtEGAp", "iclr_2022_jgAl403zfau", "iclr_2022_jgAl403zfau", "iclr_2022_jgAl403zfau" ]
iclr_2022_JVsvIuMDE0Z
Adaptive Behavior Cloning Regularization for Stable Offline-to-Online Reinforcement Learning
Offline reinforcement learning, by learning from a fixed dataset, makes it possible to learn agent behaviors without interacting with the environment. However, depending on the quality of the offline dataset, such pre-trained agents may have limited performance and would further need to be fine-tuned online by interacting with the environment. During online fine-tuning, the performance of the pre-trained agent may collapse quickly due to the sudden distribution shift from offline to online data. While constraints enforced by offline RL methods such as a behaviour cloning loss prevent this to an extent, these constraints also significantly slow down online fine-tuning by forcing the agent to stay close to the behavior policy. We propose to adaptively weigh the behavior cloning loss during online fine-tuning based on the agent's performance and training stability. Moreover, we use a randomized ensemble of Q functions to further increase the sample efficiency of online fine-tuning by performing a large number of learning updates. Experiments show that the proposed method yields state-of-the-art offline-to-online reinforcement learning performance on the popular D4RL benchmark.
Reject
The paper proposes an approach that allows online finetuning of an offline RL policy by adaptively changing a BC regularization term. Even after discussions with the authors, the reviewers had several concerns. First, the paper seems limited in novelty, as "REDQ+AdaptiveBC seems incremental on top of TD3+BC". Second, there were concerns that the adaptive regularization term was insufficient as a contribution given its heuristic nature. Given the consensus among the reviewers of this paper, I recommend rejecting this paper.
train
[ "EQQ-UpouSYj", "kJF2VeZTvR6", "kVnPkfW2QpX", "tn0P7MYMyXj", "b6N1lcpps-v", "Txxna2lPmI", "xR8pUjsKONq", "u26yNvBG6zX", "lAuJczyfrjV", "sOoz5_0zHrM", "HNJdhjBeBu", "p6j7ZJYxFh", "PJEI2lBTwYX" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " 1. “ in other settings such as medium and medium-replay, the method is worse than or similar to Balanced Replay especially on hopper and walker2d tasks ” \n\nSince the submission, we have fine-tuned the hyperparameters of the PID controller and now on halfcheetah-medium, halfcheetah-medium-replay, walker2d-mediu...
[ -1, -1, -1, 5, -1, -1, 5, -1, -1, -1, -1, 3, 5 ]
[ -1, -1, -1, 4, -1, -1, 5, -1, -1, -1, -1, 5, 3 ]
[ "Txxna2lPmI", "b6N1lcpps-v", "HNJdhjBeBu", "iclr_2022_JVsvIuMDE0Z", "lAuJczyfrjV", "sOoz5_0zHrM", "iclr_2022_JVsvIuMDE0Z", "PJEI2lBTwYX", "p6j7ZJYxFh", "xR8pUjsKONq", "iclr_2022_JVsvIuMDE0Z", "iclr_2022_JVsvIuMDE0Z", "iclr_2022_JVsvIuMDE0Z" ]
iclr_2022_Py8WbvKH_wv
DRIBO: Robust Deep Reinforcement Learning via Multi-View Information Bottleneck
Deep reinforcement learning (DRL) agents are often sensitive to visual changes that were unseen in their training environments. To address this problem, we leverage the sequential nature of RL to learn robust representations that encode only task-relevant information from observations based on the unsupervised multi-view setting. Specifically, we introduce a novel contrastive version of Multi-View Information Bottleneck (MIB) objective for temporal data. We train RL agents from pixels with this auxiliary objective to learn robust representations that can compress away task-irrelevant information and are predictive of task-relevant dynamics. This approach enables us to train high-performance policies that are robust to visual distractions and can generalize well to unseen environments. We demonstrate that our approach can achieve SOTA performance on diverse visual control tasks on the DeepMind Control Suite when the background is replaced with natural videos. In addition, we show that our approach outperforms well-established baselines for generalization to unseen environments on the Procgen benchmark.
Reject
The authors introduce a method that improves the representation learned by RL agents, making them more robust to visual distractions. In particular, their approach proposes to use mutual information between two views as a proxy for that objective. This is clearly a borderline paper that required many discussions among the reviewers and the authors. The reviewers mention that the approach is novel, addresses an important problem of robustness in RL and some of the experiments provided are impressive. On the other hand, the reviewers point out that the baselines seem to achieve lower results than previously reported, writing could be improved and some of the results don't show significant improvement over baselines. Given that some of the results cause confusion around the evaluation protocol (it's still not 100% clear why the performance of baselines is lower than expected) and other doubts expressed by the reviewers, I encourage the authors to continue working on the paper and resubmit. I believe that with a little bit of extra work and clarifications this can be a very strong submission.
train
[ "4sj8Xz3rPaa", "MkJr1JXflcp", "lC6oIDzYRp1", "dfObVTcyAs", "bX1kQc3ySwn", "MzE6rzMOCe8", "gZdqtaU2L9", "-aTpguNtZIn", "nIOso_eqFhv", "zT5vJuqQi4C", "PA-20ydgaa", "DUjON1xe_vR", "46d63rAXqk", "s3QP4Mp83x9", "oPCD8tE2-_j" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to thank the reviewer for the suggestions and clarifications!\n\n1) Figure 1\n\nWe will revise Figure 1 to avoid confusion.\n\n2) More general settings\n\nDRIBO can be applied to settings beyond background changes. As long as the multi-view observations share the same task-relevant information, we c...
[ -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6 ]
[ -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "MkJr1JXflcp", "zT5vJuqQi4C", "dfObVTcyAs", "nIOso_eqFhv", "iclr_2022_Py8WbvKH_wv", "gZdqtaU2L9", "PA-20ydgaa", "iclr_2022_Py8WbvKH_wv", "bX1kQc3ySwn", "s3QP4Mp83x9", "oPCD8tE2-_j", "46d63rAXqk", "iclr_2022_Py8WbvKH_wv", "iclr_2022_Py8WbvKH_wv", "iclr_2022_Py8WbvKH_wv" ]
iclr_2022_66kgCIYQW3
Automatic Concept Extraction for Concept Bottleneck-based Video Classification
Recent efforts in interpretable deep learning models have shown that concept-based explanation methods achieve competitive accuracy with standard end-to-end models and enable reasoning and intervention about extracted high-level visual concepts from images, e.g., identifying the wing color and beak length for bird-species classification. However, these concept bottleneck models rely on a domain expert providing a necessary and sufficient set of concepts--which is intractable for complex tasks such as video classification. For complex tasks, the labels and the relationship between visual elements span many frames, e.g., identifying a bird flying or catching prey--necessitating concepts with various levels of abstraction. To this end, we present CoDEx, an automatic Concept Discovery and Extraction module that rigorously composes a necessary and sufficient set of concept abstractions for concept-based video classification. CoDEx identifies a rich set of complex concept abstractions from natural language explanations of videos--obviating the need to predefine the amorphous set of concepts. To demonstrate our method’s viability, we construct two new public datasets that combine existing complex video classification datasets with short, crowd-sourced natural language explanations for their labels. Our method elicits inherent complex concept abstractions in natural language to generalize concept-bottleneck methods to complex tasks.
Reject
The authors consider the task of interpretable video classification. First, a set of binary "concepts" is predicted, and these concept features are then used for classifying a video. The set itself is automatically generated from natural language descriptions, instead of relying on expert annotations. The authors collect two datasets to validate the proposed approach and show that the model can match the performance of a standard video classification model, while being interpretable. The reviewers felt that the paper was well written and that the method and empirical results were clearly outlined. They also appreciated the empirical results showing that interpretability doesn't necessarily come at the expense of accuracy, and they consider interpretability a desirable property. The main reasons for the borderline ratings are the heuristic nature of the proposed automatic concept labeling and the limited empirical evaluation against alternative baselines. In particular, one needs to **show that the proposed method generalises to other datasets**. Secondly, one of the main contributions, namely the automatic **concept extraction, still ends up requiring human annotation in the form of narrations**, and this cost should be quantified and contextualised. I suggest the authors address these points and resubmit.
val
[ "15fd7hhDat0", "d8-3NV6YtvL", "NhnjMmIHNvj", "yWYnRMX1Wq", "Aty-Ny4oJwW", "sSqwOKGahkJ", "eu99x0hf0uX", "W39nL6ekOFo", "TjK7JAMMykH", "8E6h5ZfPPXR", "lkebN0S53yj", "FDkVpO3tWkM", "tcZoJ05vG-4" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper works on the interpretability of video understanding problem. With a set of textual descriptions, the authors propose a pipeline called CoDex to extract the key concepts for explaining the classification, in contrast to previous methods which use the predefined classes. The CoDex method contains clearni...
[ 6, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5 ]
[ 3, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2022_66kgCIYQW3", "sSqwOKGahkJ", "iclr_2022_66kgCIYQW3", "W39nL6ekOFo", "iclr_2022_66kgCIYQW3", "tcZoJ05vG-4", "NhnjMmIHNvj", "FDkVpO3tWkM", "15fd7hhDat0", "lkebN0S53yj", "iclr_2022_66kgCIYQW3", "iclr_2022_66kgCIYQW3", "iclr_2022_66kgCIYQW3" ]
iclr_2022_GDUfz1phf06
AutoNF: Automated Architecture Optimization of Normalizing Flows Using a Mixture Distribution Formulation
Although various flow models based on different transformations have been proposed, there still lacks a quantitative analysis of performance-cost trade-offs between different flows as well as a systematic way of constructing the best flow architecture. To tackle this challenge, we present an automated normalizing flow (NF) architecture search method. Our method aims to find the optimal sequence of transformation layers from a given set of unique transformations with three folds. First, a mixed distribution is formulated to enable efficient architecture optimization originally on the discrete space without violating the invertibility of the resulting NF architecture. Second, the mixture NF is optimized with an approximate upper bound which has a more preferable global minimum. Third, a block-wise alternating optimization algorithm is proposed to ensure efficient architecture optimization of deep flow models.
Reject
This paper proposes two methods to learn the architecture of normalizing flow models. Their framework is inspired by (Liu et al., 2019), which uses ensembles/mixtures with learnable weights for architecture search. The application of these ideas to NFs requires a trivial modification to respect the invertibility constraint, which consists in building a mixture model over all possible sequences of compositions of transformations from a fixed set. The paper proposes to use an upper bound on the forward KL instead of the fKL directly. The reasoning is that this will lead to a "pure" model after optimization, that is, the mixture weights will be in {0, 1}. Mathematically, this simply corresponds to treating the mixture as a latent-variable model and performing MAP inference over discrete latent variables, assuming that all mixture components have the same prior weights in the mixture. The experimental results across various datasets are very mixed, and the family of transformations considered in the experiments is quite restricted.
train
[ "c5B1ge9Dxfh", "_LBvAgWk6Px", "ll0DuPMKIeG", "FzDKafMF0qR", "rc3HM9QRlZ4", "3ytXb9RTA3a", "Y1HUeKLo6-O", "JSgRZKcFZqS", "c_zuiuLloXP", "GG558wScuFJ", "7IUEQWB8Vqt", "v2AIBV2tNX6", "GOUVo8DqYsS" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer, we appreciate for your time and comments on the previous reply. We must apologize that the previous version is not clear and difficult to follow. Based on your suggestions, we've made a revision on the proof, with a more clear definition of the optimization problems. We've attached a revised versio...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 5, 5, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 2, 4 ]
[ "_LBvAgWk6Px", "ll0DuPMKIeG", "FzDKafMF0qR", "Y1HUeKLo6-O", "iclr_2022_GDUfz1phf06", "GOUVo8DqYsS", "v2AIBV2tNX6", "7IUEQWB8Vqt", "GG558wScuFJ", "iclr_2022_GDUfz1phf06", "iclr_2022_GDUfz1phf06", "iclr_2022_GDUfz1phf06", "iclr_2022_GDUfz1phf06" ]
iclr_2022_IEsx-jwFk3g
Deep Representations for Time-varying Brain Datasets
Finding an appropriate representation of dynamic activities in the brain is crucial for many downstream applications. Due to its highly dynamic nature, temporally averaged fMRI (functional magnetic resonance imaging) cannot capture the whole picture of underlying brain activities, and previous works lack the ability to learn and interpret the latent dynamics in brain architectures. In this paper, we build an efficient graph neural network model that incorporates both region-mapped fMRI sequences and structural connectivities obtained from DWI (diffusion-weighted imaging) as inputs. Through novel sample-level adaptive adjacency matrix learning and multi-resolution inner cluster smoothing, we find good representations of the latent brain dynamics. We also attribute inputs with integrated gradients, which enables us to infer (1) highly involved brain connections and subnetworks for each task (2) keyframes of imaging sequences along the temporal axis, and (3) subnetworks that discriminate between individual subjects. This ability to identify critical subnetworks that characterize brain states across heterogeneous tasks and individuals is of great importance to neuroscience research. Extensive experiments and ablation studies demonstrate our proposed method's superiority and efficiency in spatial-temporal graph signal modeling with insightful interpretations of brain dynamics.
Reject
This paper proposes a Graph Neural Network model to estimate latent dynamics in the human brain using functional Magnetic Resonance Imaging (fMRI) and Diffusion Weighted Imaging (DWI). The representation is tested on a classification task. While reviewers acknowledge the importance of this application, various concerns have been raised and only partially addressed. The work focuses on graph deep learning and offers limited evidence of its superiority over more traditional ML or non-graph-based deep learning. Besides, the methodological novelty is not clearly argued, which is not ideal for the audience of a conference like ICLR. For all these reasons, this work cannot be endorsed for publication at ICLR 2022.
val
[ "hX8ydlJaQ4B", "xlPHJOIUquv", "tE11joPPSMM", "hb6-aPAOSqo", "llVgnmKzx5", "D_MpCKlXl9", "J_GFDHQDWod", "UFQTOf4Dn7y", "nKb__RUmh2", "IbCSjiB1qVd", "zyD7IJyk3rP", "e0yTD6xbOmv", "sTU_JCi7Nw1", "P3pxqEkXx9t", "pimSlMRCZNk", "h0-mxSF77MS" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thanks for the clarification. \nI have raised my score, and I hope these comments get sufficiently reflected in the final manuscript. \n", "This paper proposes a deep learning method for temporal data on graph nodes, specifically designed for brain imaging data. It can be deployed for classification of data whe...
[ -1, 6, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5 ]
[ -1, 4, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "tE11joPPSMM", "iclr_2022_IEsx-jwFk3g", "hb6-aPAOSqo", "sTU_JCi7Nw1", "iclr_2022_IEsx-jwFk3g", "J_GFDHQDWod", "UFQTOf4Dn7y", "IbCSjiB1qVd", "h0-mxSF77MS", "llVgnmKzx5", "pimSlMRCZNk", "llVgnmKzx5", "xlPHJOIUquv", "pimSlMRCZNk", "iclr_2022_IEsx-jwFk3g", "iclr_2022_IEsx-jwFk3g" ]
iclr_2022_3kTt_W1_tgw
$f$-Mutual Information Contrastive Learning
Self-supervised contrastive learning is an emerging field due to its power in providing good data representations. Such learning paradigm widely adopts the InfoNCE loss, which is closely connected with maximizing the mutual information. In this work, we propose the $f$-Mutual Information Contrastive Learning framework ($f$-MICL) , which directly maximizes the $f$-divergence-based generalization of mutual information. We theoretically prove that, under mild assumptions, our $f$-MICL naturally attains the alignment for positive pairs and the uniformity for data representations, the two main factors for the success of contrastive learning. We further provide theoretical guidance on designing the similarity function and choosing the effective $f$-divergences for $f$-MICL. Using several benchmark tasks from both vision and natural text, we empirically verify that our novel method outperforms or performs on par with state-of-the-art strategies.
Reject
Most of the reviewers have concerns that the experimental results don't show strong enough improvements over baselines and that the theoretical contribution of the paper is not completely clear. These concerns make the paper a borderline paper for NeurIPS. Some reviewers have pointed out problematic or unsupported claims in the paper. With these in mind, I encourage the authors to revise the paper with more clarity and address the reviewers' comments on the exposition of the paper.
train
[ "QyjrjL2UjaL", "YFmF3TxgS_8", "Nwa9sWdAyd-", "2hlvnU0aJyb", "nMXVRgw6W9Q", "pgZnaZ0k00N", "2rH97yFtb7R", "Ymjr5RI4tu0", "m1EPnEYK44I", "UUl2bvfaYZ_", "gnQ2RHJ97wD", "IUUlDbmhJgp" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper makes two contributions to contrastive representation learning: 1. Using the more general f-mutual information rather than using Shannon mutual information for contrastive learning 2. Experimental results to compare the possible options given the new design freedom. This paper addresses an interesting ...
[ 5, -1, 6, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2022_3kTt_W1_tgw", "iclr_2022_3kTt_W1_tgw", "iclr_2022_3kTt_W1_tgw", "2rH97yFtb7R", "gnQ2RHJ97wD", "IUUlDbmhJgp", "Nwa9sWdAyd-", "QyjrjL2UjaL", "QyjrjL2UjaL", "iclr_2022_3kTt_W1_tgw", "iclr_2022_3kTt_W1_tgw", "iclr_2022_3kTt_W1_tgw" ]
iclr_2022_FWiwSGJ_Bpa
Non-Parametric Neuro-Adaptive Control Subject to Task Specifications
We develop a learning-based algorithm for the control of autonomous systems governed by unknown, nonlinear dynamics to satisfy user-specified spatio-temporal tasks expressed as signal temporal logic specifications. Most existing algorithms either assume certain parametric forms for the unknown dynamic terms or resort to unnecessarily large control inputs in order to provide theoretical guarantees. The proposed algorithm addresses these drawbacks by integrating neural-network-based learning with adaptive control. More specifically, the algorithm learns a controller, represented as a neural network, using training data that correspond to a collection of system parameters and tasks. These parameters and tasks are derived by varying the nominal parameters and the spatio-temporal constraints of the user-specified task, respectively. It then incorporates this neural network into an online closed-form adaptive control policy in such a way that the resulting behavior satisfies the user-defined task. The proposed algorithm does not use any a priori information on the unknown dynamic terms or any approximation schemes. We provide formal theoretical guarantees on the satisfaction of the task. Numerical experiments on a robotic manipulator and a unicycle robot demonstrate that the proposed algorithm guarantees the satisfaction of 50 user-defined tasks, and outperforms control policies that do not employ online adaptation or the neural-network controller. Finally, we show that the proposed algorithm achieves greater performance than standard reinforcement-learning algorithms in the pendulum benchmarking environment.
Reject
The reviewers acknowledge that the paper is well written and contains interesting ideas to combine adaptive control and learning. However, they identified issues regarding the claims about transient tracking and the STL formula. Moreover, the significance of the presented learning rule was unclear to one reviewer. While the authors could respond well to the identified transient tracking issue, they also needed to weaken their claims, limiting the contribution of the paper. The reviewers therefore stayed with a reject rating.
train
[ "R2DlvuXHLSt", "xnfnfeleHWZ", "OWLctpzkbRD", "U8IeD8BLuKe", "Ouc-BbB0Jxy", "W9wdAJWOMwk", "LJ_g4pUSlJw", "t_KVmHlzxb", "v29nRPf8j7x", "nMNaKAcbo-P", "jH8Jn8yjRbn" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "In this paper, the authors propose an adaptive controller that can be applied to a certain class of dynamical systems in order to fulfill Signal Interval Temporal Logic (SITL) tasks. The proposed controller leverages a nominal feedback law learned from past trajectory data collected on possibly many different SITL...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2022_FWiwSGJ_Bpa", "R2DlvuXHLSt", "LJ_g4pUSlJw", "Ouc-BbB0Jxy", "v29nRPf8j7x", "iclr_2022_FWiwSGJ_Bpa", "nMNaKAcbo-P", "R2DlvuXHLSt", "jH8Jn8yjRbn", "iclr_2022_FWiwSGJ_Bpa", "iclr_2022_FWiwSGJ_Bpa" ]
iclr_2022_4l5iO9eoh3f
Supervised Permutation Invariant Networks for solving the CVRP with bounded fleet size
Learning to solve combinatorial optimization problems, such as the vehicle routing problem, offers great computational advantages over classical operation research solvers and heuristics. The recently developed deep reinforcement learning approaches either improve an initially given solution iteratively or sequentially construct a set of individual tours. However, all existing learning-based approaches are not able to work for a fixed number of vehicles and thus bypass the NP-hardness of the original problem. On the other hand, this makes them less suitable for real applications, as many logistic service providers rely on solutions provided for a specific bounded fleet size and cannot accommodate short term changes to the number of vehicles. In contrast we propose a powerful supervised deep learning framework that constructs a complete tour plan from scratch while respecting an apriori fixed number of vehicles. In combination with an efficient post-processing scheme, our supervised approach is not only much faster and easier to train but also achieves competitive results that incorporate the practical aspect of vehicle costs. In thorough controlled experiments we re-evaluate and compare our method to multiple state-of-the-art approaches where we demonstrate stable performance and shed some light on existent inconsistencies in the experimentation protocols of the related work.
Reject
This paper formulates and solves a capacitated vehicle routing problem (CVRP) in the presence of costs for deploying additional vehicles: a mixture of supervised learning, algorithms, and OR techniques is used. In particular, a mix of greedy decoding, repairing of the solution, and post-processing with OR tools is used to extract a feasible solution from the probabilistic prediction. The paper makes a good case that existing methods do not solve the CVRP with a hard constraint on the fleet size. On the other hand, there is a strong dependence on heuristic improvements: e.g., on the post-processing, and on an additional repair procedure for the decoding process. The authors are encouraged to investigate how such improvements would work with existing approaches: i.e., to assess how novel the new model's contributions are.
train
[ "UL2bHQUVGO", "sdskui-naY", "QHW-aSSnPFc", "HfTdPd6gGdT", "3qdsNPVQJ5z", "MXRzDooEKnh", "V471zSFgPF", "9GSixirzCz5", "zSmMsj4dYZ1", "cwYtHZW45YP", "AgBkOz3ZhQC", "JGqzzgm9XfV", "GklMKppe5g3", "uTROB-BbpvB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your response.\n\n''\nAs for the concern with regards to the model's ability to generalize to large-scale problem sizes, we agree that it is an important capability, but have to note that it is currently a difficult task that limits the current routing approaches in the ML research community in general...
[ -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, 5 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "9GSixirzCz5", "zSmMsj4dYZ1", "iclr_2022_4l5iO9eoh3f", "AgBkOz3ZhQC", "cwYtHZW45YP", "iclr_2022_4l5iO9eoh3f", "iclr_2022_4l5iO9eoh3f", "uTROB-BbpvB", "GklMKppe5g3", "QHW-aSSnPFc", "JGqzzgm9XfV", "iclr_2022_4l5iO9eoh3f", "iclr_2022_4l5iO9eoh3f", "iclr_2022_4l5iO9eoh3f" ]
iclr_2022_cLcLdwOfhoe
FedLite: A Scalable Approach for Federated Learning on Resource-constrained Clients
In classical federated learning, the clients contribute to the overall training by communicating local updates for the underlying model on their private data to a coordinating server. However, updating and communicating the entire model becomes prohibitively expensive when resource-constrained clients collectively aim to train a large machine learning model. Split learning provides a natural solution in such a setting, where only a (small) part of the model is stored and trained on clients while the remaining (large) part of the model only stays at the servers. Unfortunately, the model partitioning employed in split learning significantly increases the communication cost compared to the classical federated learning algorithms. This paper addresses this issue by proposing an end-to-end training framework that relies on a novel vector quantization scheme accompanied by a gradient correction method to reduce the additional communication cost associated with split learning. An extensive empirical evaluation on standard image and text benchmarks shows that the proposed method can achieve up to $490\times$ communication cost reduction with minimal drop in accuracy, and enables a desirable performance vs. communication trade-off.
Reject
The paper introduces a compression method for distributed Split Learning for better communication efficiency, compressing the intermediate output between the client and server models via vector quantization. Convergence analysis and experimental results are provided. Unfortunately, after the discussion phase the consensus among the reviewers was that the paper remains slightly below the bar. The main remaining concerns were the limited variety of baselines and the unclear benefits of the split learning setup in the experiments, compared to other FL approaches, quantization approaches, and architecture splits. Reviewers also missed a discussion of the latency requirements of model-parallel training in FL as opposed to data-parallel training, which allows less frequent communication than the approach here (e.g., discussing the split-layer size vs. latency trade-off of quantized intermediate layers compared to regular FL). The newly added Figure 6 does not specify or vary the number of local steps (or batch size) in FedAvg. We hope the detailed feedback helps to strengthen the paper for a future occasion.
train
[ "Uu9yMVG-E_N", "l21gPWjn14W", "3uMHQZPHTHz", "7Mkcv63ZRqk", "kz-NgmNBnP", "xOeK1gdQpf", "CJdq2GyJ8C", "Bjj2R91vyyG", "D361STFSZRD", "SqInKBv5Z35" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " After carefully reading through the authors' feedback, part of my concerns are addressed. However, my major concern still remains, i.e., it is not clear to me that the proposed FedLite scheme is the only approach to achieve memory-efficient FL. Thus, it is important to understand where FedLite stands compared to ...
[ -1, -1, -1, -1, -1, -1, 6, 3, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 5, 4 ]
[ "kz-NgmNBnP", "7Mkcv63ZRqk", "SqInKBv5Z35", "D361STFSZRD", "Bjj2R91vyyG", "CJdq2GyJ8C", "iclr_2022_cLcLdwOfhoe", "iclr_2022_cLcLdwOfhoe", "iclr_2022_cLcLdwOfhoe", "iclr_2022_cLcLdwOfhoe" ]
iclr_2022_vLz0e9S-iF3
Quasi-potential theory for escape problem: Quantitative sharpness effect on SGD's escape from local minima
We develop a quantitative theory on the escape problem of stochastic gradient descent (SGD) and investigate the effect of the sharpness of loss surfaces on escape. Deep learning has achieved tremendous success in various domains, however, it has opened up theoretical problems. For instance, it is still an ongoing question as to why an SGD can find solutions that generalize well over non-convex loss surfaces. An approach to explain this phenomenon is the escape problem, which investigates how efficiently the SGD escapes from local minima. In this paper, we develop a novel theoretical framework for the escape problem using ``quasi-potential," the notion defined in a fundamental theory of stochastic dynamical systems. We show that quasi-potential theory can handle the geometric property of loss surfaces and a covariance structure of gradient noise in a unified manner through an eigenvalue argument, while previous research studied them separately. Our theoretical results imply that sharpness contributes to slowing down escape, but the SGD’s noise structure cancels the effect, which ends up exponentially accelerating its escape. We also conduct experiments to empirically validate our theory using neural networks trained with real data.
Reject
The paper uses quasi-potential theory to analyze the escape behavior of SGD. Although this is a topic of interest to the ML community, the reviewers found a critical issue with the paper, which the authors admit cannot be fixed during this submission. I therefore do not think there is a need for a longer discussion.
train
[ "P1UlVJheqmB", "qJ0I-lYQjZC", "BwLsAzUBXE", "GoO6sWQnQfh", "XUISk6CjyL7", "3KNfD2CmbfX", "Z00UYWVFjat" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewers for their constructive comments and useful insights.\nThanks to those comments, we realize our work includes a critical issue (Q1 by Reviewer 5Jw7), which, we thought, is hard to fix during this submission.\n\nIn our rebuttal and revision, we have only addressed other comments that are inde...
[ -1, -1, -1, -1, 5, 3, 3 ]
[ -1, -1, -1, -1, 3, 4, 5 ]
[ "iclr_2022_vLz0e9S-iF3", "Z00UYWVFjat", "3KNfD2CmbfX", "XUISk6CjyL7", "iclr_2022_vLz0e9S-iF3", "iclr_2022_vLz0e9S-iF3", "iclr_2022_vLz0e9S-iF3" ]
iclr_2022_3XD_rnM97s
Understanding Knowledge Integration in Language Models with Graph Convolutions
Pretrained language models (LMs) are not very good at robustly capturing factual knowledge. This has led to the development of a number of knowledge integration (KI) methods, which aim to incorporate external knowledge into pretrained LMs. Even though KI methods show some performance gains over base LMs, the efficacy and limitations of these methods are not well understood. For instance, it is unclear how and what kind of knowledge is effectively integrated into LMs, and whether such integration may lead to catastrophic forgetting of already learned knowledge. In this paper, we revisit the KI process from the view of graph signal processing and show that KI can be interpreted as a graph convolution operation. We propose a simple probe model called Graph Convolution Simulator (GCS) for interpreting knowledge-enhanced LMs and exposing what kind of knowledge is integrated into these models. We conduct experiments to verify that our GCS model can indeed be used to correctly interpret the KI process, and we use it to analyze two typical knowledge-enhanced LMs: K-Adapter and ERNIE. We find that only a small amount of factual knowledge is captured in these models during integration. While K-Adapter is better at integrating simple relational knowledge, complex relational knowledge is integrated better in ERNIE. We further find that while K-Adapter struggles to integrate time-related knowledge, it successfully integrates knowledge of unpopular entities and relations. Our analysis also shows some challenges in KI. In particular, we find that simply increasing the size of the KI corpus may not lead to better KI, and more fundamental advances may be needed.
Reject
Strengths:
* Theoretical foundation provided for the knowledge integration problem
* Findings from the empirical studies are interesting
* Authors dedicated significant time and energy to coordinating with reviewers in the rebuttal period

Weaknesses:
* It is not clear whether the GCS is a suitable approximation for measuring KI. For example, relation types are not supported in the GCS architecture, making it unclear whether GCS adequately approximates knowledge integration. As reviewer 4qCM mentions, (X, born_in, Zurich) is very different knowledge from (X, died_in, Zurich). The current formulation only learns co-occurrence between entities rather than relational knowledge.
* The empirical study is limited to two knowledge integration methods (ERNIE & K-Adapter) and is only evaluated on entity typing datasets, which are likely to be well-suited for their method, which ignores relation information.
* The presentation and takeaways of the results could be clearer. The authors should explain in depth why experiments that drop knowledge randomly are not suitable baselines.

This paper is promising and the topic explored by the authors is interesting. I think it would benefit from integrating the comments from the reviewers and would make for a strong submission at a future venue.
train
[ "l7oXR3peIU", "c1aztFWTFF", "o8g_e9Fm9NB", "l4wp1nUaNRQ", "yuB17iPahkk", "IotYDymnJp1", "Yic0SdQ9pGb", "en0VxEbD8Na", "hnrkPzzs_O0", "jnTozzVDwv", "PtpuG3lOLqH", "IHJj6empWRQ", "BoDDQvBq8vV", "KXhNGExnvJ3", "TTdaDw_b_p", "XEE9pfqkxJs", "q9ICcBXpP4y", "5E_9fmlliip", "42_bslA1Be", ...
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author",...
[ " Dear Reviewer 4QcM,\n\nThanks for your intime response. Let's try to answer your new questions one by one.\n\n1. We're sorry for the misunderstanding. Here, we are not comparing the absolute values of different metrics. Our foucs is the tendency of curves (Due to the space issue, we put them in one figure. We'll ...
[ -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 8 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "o8g_e9Fm9NB", "iclr_2022_3XD_rnM97s", "Yic0SdQ9pGb", "yuB17iPahkk", "QqMTKop8q1W", "en0VxEbD8Na", "en0VxEbD8Na", "hnrkPzzs_O0", "c1aztFWTFF", "DzSLclpJzF", "iclr_2022_3XD_rnM97s", "qtfamYsGMV9", "pMGtYPd9nDm", "c1aztFWTFF", "DzSLclpJzF", "PtpuG3lOLqH", "c1aztFWTFF", "iclr_2022_3XD...
iclr_2022_LGTmlJ10Kes
Curriculum Discovery through an Encompassing Curriculum Learning Framework
We describe a curriculum learning framework capable of discovering optimal curricula in addition to performing standard curriculum learning. We show that this framework encompasses existing curriculum learning approaches such as difficulty-based data sub-sampling, data pruning, and loss re-weighting. We employ the proposed framework to address the following key questions in curriculum learning research: (a) What is the best curriculum to train a given model on a given dataset? (b) What are the characteristics of optimal curricula for different datasets and different difficulty scoring functions? We show that our framework outperforms competing state-of-the-art curriculum learning approaches in natural language inference and other text classification tasks. In addition, exhaustive experiments illustrate the generalizability of the discovered curricula across the three datasets and two difficulty scoring functions.
Reject
The paper proposes a new curriculum learning framework by parameterizing data partitioning and weighting schemes. Extensive experiments are performed on three different datasets to demonstrate the effectiveness of the proposed framework. The reviewers acknowledged that the proposed framework is interesting as it encompasses several existing curriculum learning methods. However, the reviewers pointed out several weaknesses in the paper and shared concerns, including the scalability of the framework to larger datasets and the significance of the improvements over baselines. I want to thank the authors for their detailed responses. Based on the reviewers’ concerns and follow-up discussions, there was a consensus that the work is not ready for publication. The reviewers have provided detailed feedback to the authors. We hope that the authors can incorporate this feedback when preparing future revisions of the paper.
train
[ "GLop6S0vx7d", "zLgUVnZR9P8", "3f0yOQP9qdS", "wvsQXsQeB0p", "yOXl8Ljr1b", "HNvpHk07BE", "0Pi7IlL7OTK", "DGvPYenbhb", "G6NWosckLHK", "ebvxFRRbW0N", "0Q4LjGRnoL", "XdCNRJ7VKoL", "s4pED1kpR9w", "Y4k2H7mVw0" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the p-values. They suggest that the approach worked across its variations only for one data dataset and only for the balanced version of it, which is, as discussed before, an operation not always possible, desirable or allowed to do.\n\nRegarding the rest of comments, they may make sense in principl...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "3f0yOQP9qdS", "0Q4LjGRnoL", "DGvPYenbhb", "DGvPYenbhb", "iclr_2022_LGTmlJ10Kes", "0Pi7IlL7OTK", "G6NWosckLHK", "ebvxFRRbW0N", "Y4k2H7mVw0", "s4pED1kpR9w", "XdCNRJ7VKoL", "iclr_2022_LGTmlJ10Kes", "iclr_2022_LGTmlJ10Kes", "iclr_2022_LGTmlJ10Kes" ]
iclr_2022_8f95ajHrIFc
On Reward Maximization and Distribution Matching for Fine-Tuning Language Models
The availability of large pre-trained models is changing the landscape of Machine Learning research and practice, moving from a "training from scratch" to a "fine-tuning" paradigm. While in some applications the goal is to "nudge" the pre-trained distribution towards preferred outputs, in others it is to steer it towards a different distribution over the sample space. Two main paradigms have emerged to tackle this challenge: Reward Maximization (RM) and, more recently, Distribution Matching (DM). RM applies standard Reinforcement Learning (RL) techniques, such as Policy Gradients, to gradually increase the reward signal. DM prescribes to first make explicit the target distribution that the model is fine-tuned to approximate. Here we explore the intimate connections between the two paradigms and show that methods such as KL-control developed in the RM paradigm can also be construed as belonging to DM. We further observe that while DM differs from RM, it can suffer from similar training difficulties, such as high gradient variance. We leverage connections between the two paradigms to import the concept of baseline into DM methods. We empirically validate the benefits of adding a baseline on an array of controllable language generation tasks such as constraining topic, sentiment, and gender distributions in texts sampled from a language model. We observe superior performance in terms of constraint satisfaction, stability, and sample efficiency.
Reject
This paper explores the connections between reward maximization (RM) with REINFORCE and distribution matching (DM) with distributional policy gradients (DPG) for fine-tuning language models. Based on this, the paper proposes to apply a baseline (an idea in reinforcement learning) in DM to reduce variance and improve sample efficiency. Reviewers have concerns on the technical novelty as claimed in the paper, since the application of baseline is a straightforward practice and the resulting method is a simple addition to the existing method. More analysis (such as on the tradeoff between prior and constraint satisfaction, etc) was also suggested.
train
[ "d3pngRpv874", "Q4UkvuuTo3i", "-BZ3-EUpwh", "26rfKWGHM6", "39WASszd0OT", "KTJlLeg-ksv", "RQ1-Q8l8WZT", "kA9QHGqC28F", "jNTfOPXV5Op", "yuK4kFEbGtx", "-XCu2WN2jkg", "ovcEGb4wFBL", "-uW_A2HJsw8", "uhcvlAeWnEf" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for their detailed response to my initial review. My apologies for the late follow up.\n\nI still think the paper makes a positive contribution and I am not as concerned as other reviewers about the perceived novelty of the approach: if it works, even if it is simple, it is still...
[ -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, 5, 6, 6 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "kA9QHGqC28F", "uhcvlAeWnEf", "jNTfOPXV5Op", "KTJlLeg-ksv", "iclr_2022_8f95ajHrIFc", "39WASszd0OT", "iclr_2022_8f95ajHrIFc", "uhcvlAeWnEf", "-uW_A2HJsw8", "ovcEGb4wFBL", "iclr_2022_8f95ajHrIFc", "iclr_2022_8f95ajHrIFc", "iclr_2022_8f95ajHrIFc", "iclr_2022_8f95ajHrIFc" ]
iclr_2022_kDF4Owotj5j
Thinking Deeper With Recurrent Networks: Logical Extrapolation Without Overthinking
Classical machine learning systems perform best when they are trained and tested on the same distribution, and they lack a mechanism to increase model power after training is complete. In contrast, recent work has observed that recurrent networks can exhibit logical extrapolation; models trained only on small/simple problem instances can extend their abilities to solve large/complex instances at test time simply by performing more recurrent iterations. While preliminary results on these ``thinking systems'' are promising, existing recurrent systems, when iterated many times, often collapse rather than improve their performance. This ``overthinking'' phenomenon has prevented thinking systems from scaling to particularly large and complex problems. In this paper, we design a recall architecture that keeps an explicit copy of the problem instance in memory so that it cannot be forgotten. We also propose an incremental training routine that prevents the model from learning behaviors that are specific to iteration number and instead pushes it to learn behaviors that can be repeated indefinitely. Together, these design choices encourage models to converge to a steady state solution rather than deteriorate when many iterations are used. These innovations help to tackle the overthinking problem and boost deep thinking behavior on each of the benchmark tasks proposed by Schwarzschild et al. (2021a).
Reject
This is an interesting work, and I urge the authors to keep pushing this direction of research. Unfortunately, I feel the manuscript, in its current format, is not ready for acceptance. The research direction is definitely under-explored, which makes the evaluation of the work a bit tricky. Still, I think that some of the points raised by the reviewers hold, e.g., the need for additional baselines (to provide a bit of context for what is going on). I understand that the authors view their work as an improvement over the previously proposed DT network; however, that is a recent architecture, not sufficiently established to make additional baselines for comparison unnecessary. This, combined with the novelty of the dataset, makes it really hard to judge the work. The write-up might also require a bit of attention. In particular, it seems a lot of important details of the work (or clarifications regarding the method) ended up in the appendix. A lot of the smaller things the reviewers pointed out the authors rightfully acknowledged in the rebuttal and proposed to fix; however, I feel this might end up requiring a bit of reorganization of the manuscript rather than adding things at the end of the appendix. I also highlight (and agree with) the point that the word "thinking" is overloaded in this scenario. Ablation studies (some done as part of the rebuttal) might also be a key component to get this work over the finish line, e.g., the discussion around the progressive loss. I acknowledge that the authors did run some of those experiments, though I feel a more in-depth look at the results and their interpretation (e.g., looking not just at final performance, but at the behaviour of the system), and integrating them into the main manuscript, could also provide considerable additional insight into the proposed architecture.
My main worry is that, in its current format, the paper might not end up having the impact it deserves; the changes above would greatly improve the quality of the work and the attention it will get in the community.
train
[ "DXRCNBw_i0f", "REju5-0RXAs", "vgRJZtD7RsY", "Q-M2-30xPvO", "RK7zM36h-E2", "4EAu7poqtKy", "Wfsthfa37-", "sn758_aiAe6", "hMItCS5OxPn", "jaNPAZxWYr6", "G2cv2cBP9R", "VvuKKW-mWTh", "HXVg_BzVbf", "cxt6KnbZRO6", "-D3FBEmipBO", "rxcSl-nCEL5" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We appreciate the reviewers' time and constructive feedback. In particular, we appreciate that the reviewers acknowledge how interesting the problem we address is, as well as how compelling and thorough our results are. Multiple reviewers also commented on how well written the paper is -- thank you again! \n\nThe...
[ -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 5, 5 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "iclr_2022_kDF4Owotj5j", "iclr_2022_kDF4Owotj5j", "RK7zM36h-E2", "4EAu7poqtKy", "Wfsthfa37-", "sn758_aiAe6", "jaNPAZxWYr6", "rxcSl-nCEL5", "-D3FBEmipBO", "G2cv2cBP9R", "VvuKKW-mWTh", "REju5-0RXAs", "cxt6KnbZRO6", "iclr_2022_kDF4Owotj5j", "iclr_2022_kDF4Owotj5j", "iclr_2022_kDF4Owotj5j"...
iclr_2022_xs-tJn58XKv
Learning Stable Classifiers by Transferring Unstable Features
While unbiased machine learning models are essential for many applications, bias is a human-defined concept that can vary across tasks. Given only input-label pairs, algorithms may lack sufficient information to distinguish stable (causal) features from unstable (spurious) features. However, related tasks often share similar biases -- an observation we may leverage to develop stable classifiers in the transfer setting. In this work, we explicitly inform the target classifier about unstable features in the source tasks. Specifically, we derive a representation that encodes the unstable features by contrasting different data environments in the source task. We achieve robustness by clustering data of the target task according to this representation and minimizing the worst-case risk across these clusters. We evaluate our method on both text and image classifications. Empirical results demonstrate that our algorithm is able to maintain robustness on the target task, outperforming the best baseline by 22.9% in absolute accuracy across 12 transfer settings. Our code and data will be publicly available.
Reject
The idea of learning unstable features from source tasks to help learn stable features for a target task is interesting and well-motivated. As the proposed method and its theoretical analysis of learning unstable features from tasks are an incremental extension of existing work [Bao et al. 2021], the technical contributions lie in applying the idea of stable and unstable feature learning to the setting of transfer learning. Therefore, the evaluation of this work is focused on the effectiveness of the proposed method in the transfer learning setting. In transfer learning, one major goal is to make use of knowledge extracted from source tasks to help learn a precise target classifier even with few or no labeled examples of the target task. It would be more convincing if experiments were conducted to show how the performance of the proposed method changes as the size of the labeled data of the target task changes. This would verify whether the exploitation of unstable features can help to learn a stable classifier for the target tasks more efficiently (i.e., with fewer labeled examples). In addition, as some baseline methods used for comparison do not need to access any labeled data of the target task (like unsupervised domain adaptation or domain generalization approaches), it is not fair to conduct comparison experiments in a setting where there are sufficient labeled examples of the target task, since the original designs of such baselines may fail to fully exploit label information in the target task. Another concern is whether the proposed method is realistic for real-world transfer learning problems. Though in the rebuttal the authors provided experimental results on a natural environment (CelebA), the constructed transfer learning problem is more like a toy problem. Indeed, there are many transfer learning benchmark datasets that contain multiple domains/tasks. It would be more convincing if experiments were conducted on those datasets.
By considering the above two concerns, this paper is on the borderline. My recommendation is a weak rejection based on the current form of this paper. Note that as some references listed by reviewers RJhJ and J8M5 are not really related to the proposed research here, the novelty of the proposed method compared with those references is NOT taken into consideration to make this recommendation.
val
[ "wI6-sOF43mH", "BLa6jOX9Zj", "ajpX0cd9MrX", "4N1rPEHvSfQ", "ZWzZ25tcTJ", "1xCPqsN2PuQ", "xPf11zDbJZ6", "IZgP6cWWFp_", "HsPdPYZnbG9", "lnSSQ6jYA2Z", "GU8TviZsIq3", "X9RRlI-lwmY", "Fy9Yq_Muxwq", "mXjikmwefN", "11ulhdH8wBZ", "zM39NdmTIPJ", "hzXoeyr_CJH", "Uuw81769svx", "a-rmsUIXJ9t"...
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "...
[ " Gaut, Andrew, et al. \"Towards Understanding Gender Bias in Relation Extraction.\" 2020.\n\nJia, Shengyu, et al. \"Mitigating Gender Bias Amplification in Distribution by Posterior Regularization.\" 2020.\n\nPark, Ji Ho, Jamin Shin, and Pascale Fung. \"Reducing Gender Bias in Abusive Language Detection.\" 2018.\n...
[ -1, -1, 6, -1, -1, -1, 3, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 6, 3 ]
[ -1, -1, 4, -1, -1, -1, 4, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "BLa6jOX9Zj", "ZWzZ25tcTJ", "iclr_2022_xs-tJn58XKv", "mXjikmwefN", "11ulhdH8wBZ", "zM39NdmTIPJ", "iclr_2022_xs-tJn58XKv", "HsPdPYZnbG9", "lnSSQ6jYA2Z", "X9RRlI-lwmY", "iclr_2022_xs-tJn58XKv", "GU8TviZsIq3", "a-rmsUIXJ9t", "ajpX0cd9MrX", "xPf11zDbJZ6", "Uuw81769svx", "iclr_2022_xs-tJn...
iclr_2022_tm9-r3-O2lt
CONTROLLING THE MEMORABILITY OF REAL AND UNREAL FACE IMAGES
Every day, we are bombarded with many face photographs, whether on social media, television, or smartphones. From an evolutionary perspective, faces are intended to be remembered, mainly due to survival and personal relevance. However, not all of these faces have an equal opportunity to stick in our minds. It has been shown that memorability is an intrinsic feature of an image, yet it is still largely unknown which attributes make images more memorable. In this work, we aim to address this question by proposing a fast approach to modify and control the memorability of face images. In our proposed method, we first find a hyperplane in the latent space of StyleGAN that separates high- and low-memorability images. We then modify the image memorability (while keeping the identity and other facial features such as age, emotion, etc.) by moving in the positive or negative direction of this hyperplane's normal vector. We further analyze how different layers of the StyleGAN augmented latent space contribute to face memorability. These analyses show how each individual face attribute makes images more or less memorable. Most importantly, we evaluate our proposed method on both real and unreal (generated) face images. The proposed method successfully modifies and controls the memorability of real human faces as well as unreal (generated) faces. Our proposed method can be employed in photograph-editing applications for social media, learning aids, or advertising purposes.
Reject
The reviewers raised a number of major concerns including the limited novelty of the proposed, inadequate motivation of the design choices and, most importantly, insufficient and unconvincing experimental evaluation presented. The authors’ rebuttal addressed some of the reviewers’ questions but failed to alleviate all reviewers’ concerns. Hence, I cannot suggest this paper for presentation at ICLR.
train
[ "KDC-c-A-FaD", "kdx3u4GtT99", "fv9uzmqNYqc", "I0bfMvoFSdn", "AdWJwH0eOcR", "4UNjwaeK0N", "rbCFiksNDeL", "5cSWViHJ9f", "p_L7tccpw6", "aBd2EQPxWb9", "PF49jcq6ket", "x0AhdfsglFC", "4auUANgZ50a", "IVRMsu64FG8", "SFxfaPHokE5", "8lD0Rv6zf4" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your comment.\n\nThe memorability models have been tested on the face database and we have brought their results in A.3. The results show that these models are promising for predicting the memorability score of the faces. Moreover, the qualitative results are similar to [Sidorov., 2019], which sugge...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "AdWJwH0eOcR", "4auUANgZ50a", "4UNjwaeK0N", "rbCFiksNDeL", "5cSWViHJ9f", "p_L7tccpw6", "aBd2EQPxWb9", "8lD0Rv6zf4", "SFxfaPHokE5", "IVRMsu64FG8", "x0AhdfsglFC", "4auUANgZ50a", "iclr_2022_tm9-r3-O2lt", "iclr_2022_tm9-r3-O2lt", "iclr_2022_tm9-r3-O2lt", "iclr_2022_tm9-r3-O2lt" ]
iclr_2022_UGINpaICVOt
Neural networks with trainable matrix activation functions
The training process of neural networks typically optimizes the weights and bias parameters of linear transformations, while nonlinear activation functions are pre-specified and fixed. This work develops a systematic approach to constructing matrix activation functions whose entries generalize ReLU. The activation is based on matrix-vector multiplications using only scalar multiplications and comparisons. The proposed activation functions depend on parameters that are trained along with the weights and bias vectors. Neural networks based on this approach are simple and efficient and are shown to be robust in numerical experiments.
Reject
The paper proposed a new kind of activation function, called a matrix activation function, that can be learnt jointly with the weights and biases. The paper got 2 strong rejects and 3 rejects. The major challenges include unclear motivation, limited novelty, incomplete related work, weak experiments, and poor paper writing. The author rebuttals did not convince the reviewers. The AC also read through the paper and agreed that the paper is below the bar for ICLR. In particular, the authors neglected a large body of literature on learning activation functions in the original version (two more examples: [*] Xiaojie Jin, Chunyan Xu, Jiashi Feng, Yunchao Wei, Junjun Xiong, Shuicheng Yan: Deep Learning with S-Shaped Rectified Linear Activation Units. AAAI 2016: 1737-1743. [#] Yan Yang, Jian Sun, Huibin Li, Zongben Xu: ADMM-Net: A Deep Learning Approach for Compressive Sensing MRI. NIPS 2017.), leaving them unable to compare thoroughly with existing learnable activation functions in the revised version in order to justify the necessity of using matrix activation functions. So the AC recommended rejection.
test
[ "qdFqvDZYrwS", "khoJcGc0biQ", "TRmknD_C5Oj", "s3CWSJD64KA", "247cH127LHV", "aa_jo6cHGlA", "6_hFplOQhlO", "sSglFILMjz", "LQaLvlUs4ok", "YoFXaQJuvP1", "I0I3ZMDQXjm", "6oG1xGdXixO", "zPmaQRcgcVS", "na-ucYH2Fn2", "VmolVyEbaIi", "o5iUN_tTvrW", "Uzgh904Szyw", "SmLQhJMNpwo" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "of...
[ " I appreciate the comments by the authors. However, I still disagree that a comparison to PReLU is enough for this kind of work at this conference. \nI understand the computational challenges, but a stronger evaluation is still needed, including the time taken by the activation function. \n\nFor those reasons, I w...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 1, 3, 3, 1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 5, 4 ]
[ "zPmaQRcgcVS", "LQaLvlUs4ok", "iclr_2022_UGINpaICVOt", "o5iUN_tTvrW", "VmolVyEbaIi", "na-ucYH2Fn2", "YoFXaQJuvP1", "I0I3ZMDQXjm", "na-ucYH2Fn2", "SmLQhJMNpwo", "Uzgh904Szyw", "o5iUN_tTvrW", "VmolVyEbaIi", "iclr_2022_UGINpaICVOt", "iclr_2022_UGINpaICVOt", "iclr_2022_UGINpaICVOt", "icl...
iclr_2022_e0uknAgETh
Adversarial Attacks on Spiking Convolutional Networks for Event-based Vision
Event-based sensing using dynamic vision sensors is gaining traction in low-power vision applications. Spiking neural networks work well with the sparse nature of event-based data and suit deployment on low-power neuromorphic hardware. Being a nascent field, the sensitivity of spiking neural networks to potentially malicious adversarial attacks has received very little attention so far. In this work, we show how white-box adversarial attack algorithms can be adapted to the discrete and sparse nature of event-based visual data, and to the continuous-time setting of spiking neural networks. We test our methods on the N-MNIST and IBM Gestures neuromorphic vision datasets and show adversarial perturbations achieve a high success rate, while injecting a relatively small number of appropriately placed events. We also verify, for the first time, the effectiveness of these perturbations directly on neuromorphic hardware. Finally, we discuss the properties of the resulting perturbations and possible future directions.
Reject
The manuscript investigates common adversarial attacks on event-based data for spiking neural networks. The authors conclude that, in this setup too, adversarial attacks can strongly harm SNN performance. Although the reviewers agree that the paper presents some solid results and is well written, there was also substantial criticism. The main points were:
- It is not very clear how the usual attacks are applied to event-based data, and in general the experimental setups are unclear.
- The methodological contribution of the paper seems limited.
- The novelty is limited; in particular, Marchisio et al. 2021 investigates a very similar question and goes somewhat further. The authors noted that the attacks in that work are not deployed on neuromorphic hardware. A number of other important prior works are not discussed.
- The impact of adversarial defences was not considered.
- A more detailed comparison of event-based attacks to standard ANN attacks would be desired.

After the reviews, the authors invested substantial effort to improve the paper. These efforts were appreciated by the reviewers. In particular, the authors ran additional experiments using the defence method TRADES. The results showed that TRADES is effective, but the attack still has a large success rate. In summary, the reviewers agree that this is a solid manuscript and an interesting direction; however, they ultimately see it slightly below the acceptance threshold for ICLR.
train
[ "avm-jZkZMtH", "mt0-Xgx3TE-", "IMvBGlWIhqm", "6Vd6cZej3jm", "BvY-XI3G0rT", "kdqL8nRflC8", "IGgiGI19YaB", "nZ7kYMWirth", "Vm9Cy42vPDq", "g-9SsoMjAUv", "S-jCJRFS3wL" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper presents proof of the viability of adversarial attacks on CNN processing event-based vision data. The paper investigates adaptations of well-known white box attacks to the event-based and spiking domain, validates the claims on three public benchmarks, and investigates the effect of the adversarial attac...
[ 5, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 4, 2, 5 ]
[ "iclr_2022_e0uknAgETh", "IGgiGI19YaB", "nZ7kYMWirth", "BvY-XI3G0rT", "S-jCJRFS3wL", "g-9SsoMjAUv", "avm-jZkZMtH", "Vm9Cy42vPDq", "iclr_2022_e0uknAgETh", "iclr_2022_e0uknAgETh", "iclr_2022_e0uknAgETh" ]
iclr_2022_1-lFH8oYTI
Calibration Regularized Training of Deep Neural Networks using Kernel Density Estimation
Calibrated probabilistic classifiers are models whose predicted probabilities can directly be interpreted as uncertainty estimates. This property is particularly important in safety-critical applications such as medical diagnosis or autonomous driving. However, it has been shown recently that deep neural networks are poorly calibrated and tend to output overconfident predictions. As a remedy, we propose a trainable calibration error estimator based on Dirichlet kernel density estimates, which asymptotically converges to the true Lp calibration error. This novel estimator enables us to achieve the strongest notion of multiclass calibration, called canonical calibration, while other common calibration methods only allow for top-label and marginal calibration. The empirical results show that our estimator is competitive with the state-of-the-art, consistently yielding tradeoffs between calibration error and accuracy that are (near) Pareto optimal across a range of network architectures. The computational complexity of our estimator is O(n^2), matching that of the kernel maximum mean discrepancy, used in a previously considered trainable calibration estimator. By contrast, the proposed method has a natural choice of kernel, and can be used to generate consistent estimates of other quantities based on conditional expectation, such as the sharpness of an estimator.
Reject
Thank you for your submission to ICLR. The paper proposes a simple method for improving calibration performance using a loss based upon a Dirichlet KDE. The method is appealing in its simplicity, but several reviewers (and I) have concerns about the fact that the method ultimately seemed to give rather marginal improvement over the standard cross-entropy baseline. The authors attempted to address this point in the rebuttal, with their additional example on the Kather domain. And while this is a nice addition, I'm still not fully convinced that the improvement here is _that_ significant, to the point where I think it would be important to consider much broader sweeps of hyperparameters, etc., for all methods (which I believe should be reasonable here given the data set sizes). I believe this has the potential to be a nice contribution, and its simplicity can be a positive, but ultimately I think a bit of additional effort is required to show the full empirical advantages of the method.
train
[ "RLn6bXaBlyQ", "AnJPzA9rhNX", "YJxWX_6yE9z", "Tly9-t6AVrs", "kikiMqliSCM", "RQpBix9gUpm", "uLglZFnxHF7", "_JK_qpA7j5X", "dxUu9yNnPP1", "0h_LPLd8yiM", "0-TMRvqfFY", "rDfuhYRXnz", "Wz33wBraEVq" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I am satisfied by the additions made to the paper. The experiments on the Kather (medical) dataset do illustrate the superior L1 ECE (canonical) of the proposed approach, and the time measurements show that the overhead introduced by the proposed approach is not significant. I am therefore increasing my score to ...
[ -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5 ]
[ -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 5 ]
[ "uLglZFnxHF7", "iclr_2022_1-lFH8oYTI", "0h_LPLd8yiM", "rDfuhYRXnz", "dxUu9yNnPP1", "Tly9-t6AVrs", "AnJPzA9rhNX", "iclr_2022_1-lFH8oYTI", "0-TMRvqfFY", "Wz33wBraEVq", "iclr_2022_1-lFH8oYTI", "iclr_2022_1-lFH8oYTI", "iclr_2022_1-lFH8oYTI" ]
iclr_2022_FASW5Ed837
Bandwidth-based Step-Sizes for Non-Convex Stochastic Optimization
Many popular learning-rate schedules for deep neural networks combine a decaying trend with local perturbations that attempt to escape saddle points and bad local minima. We derive convergence guarantees for bandwidth-based step-sizes, a general class of learning-rates that are allowed to vary in a banded region. This framework includes many popular cyclic and non-monotonic step-sizes for which no theoretical guarantees were previously known. We provide worst-case guarantees for SGD on smooth non-convex problems under several bandwidth-based step sizes, including stagewise $1/\sqrt{t}$ and the popular \emph{step-decay} (``constant and then drop by a constant''), which is also shown to be optimal. Moreover, we show that its momentum variant converges as fast as SGD with the bandwidth-based step-decay step-size. Finally, we propose novel step-size schemes in the bandwidth-based family and verify their efficiency on several deep neural network training tasks.
Reject
The reviewers have the following remaining concerns: 1. The bounded function value assumption is strong. Note that the previous works for SGD and SGD-M for other LR schemes do not necessarily need this assumption, hence it may be unfair to compare with existing results and say that this work has improvements for non-monotonic schemes. The authors also agree that it is not easy to prove and remove this assumption. 2. The novelty is limited, and the contributions are somewhat incremental. The bandwidth step size scheme was already introduced in a previous work with a very similar setting. The convergence rate for the proposed LR scheme is the same as previous works for other schemes (or only better by a logarithmic term), which makes the results incremental. 3. Some of the claims are not well supported. For example, the reviewers comment that it is not clear how the proposed bandwidth step size can help to escape local minima. Although the authors aim to show this empirically, the toy setting is not strong enough to conclude the superior performance of the proposed scheme. We encourage the authors to improve their paper and resubmit to another venue. Here are the related suggestions: 1. The authors might try to investigate and provide a rigorous proof of how the non-monotonic step size can help to escape local minima. It also helps to characterize the effectiveness of each cyclic rule (cosine/triangular or any other) and make clear what property (cosine/linear rules or bandwidth or non-monotonicity) contributes most to the good performance of an LR scheme. 2. It is better if the assumption on the bounded function value can be removed. In addition, a theoretical/empirical analysis on the generalization performance of the proposed scheme might also be helpful.
val
[ "Mpjh_SmzhE8", "SCfrRhBZkmL", "k8w84x05Nv8", "xwKyBZIvWTw" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper presents a general framework for analyzing SGD with a bandwidth-based step size. This step size scheme uses a monotonically decreasing boundary function along with upper and lower bound constants to cover several non-monotonic step sizes strategies, including epoch-wise step decay, cosine annealing and ...
[ 5, 5, 3, 6 ]
[ 4, 3, 5, 3 ]
[ "iclr_2022_FASW5Ed837", "iclr_2022_FASW5Ed837", "iclr_2022_FASW5Ed837", "iclr_2022_FASW5Ed837" ]
iclr_2022_S0NsaRIxvQ
Adversarial Style Transfer for Robust Policy Optimization in Reinforcement Learning
This paper proposes an algorithm that aims to improve generalization for reinforcement learning agents by removing overfitting to confounding features. Our approach consists of a max-min game theoretic objective. A generator transfers the style of observation during reinforcement learning. An additional goal of the generator is to perturb the observation, which maximizes the agent's probability of taking a different action. In contrast, a policy network updates its parameters to minimize the effect of such perturbations, thus staying robust while maximizing the expected future reward. Based on this setup, we propose a practical deep reinforcement learning algorithm, Adversarial Robust Policy Optimization (ARPO), to find an optimal policy that generalizes to unseen environments. We evaluate our approach on visually enriched and diverse Procgen benchmarks. Empirically, we observed that our agent ARPO performs better in generalization and sample efficiency than a few state-of-the-art algorithms.
Reject
I thank the authors for their submission and active participation in the discussions. This paper is borderline, with reviewers WXXr and eK4b leaning towards acceptance and reviewers f6jT and FV5x leaning towards rejection. On the positive side, reviewers remarked that the paper is interesting [FV5x] and novel [FV5x,f6jT,eK4b,WXXr]. However, all reviewers found some flaws with respect to the execution and empirical validation [FV5x], specifically around lacking baselines [FV5x,WXXr] and some ablations [f6jT,WXXr]. I side with the comment made by reviewers FV5x as well as WXXr that a comparison to stronger baselines (UCB-DrAC) is warranted. Therefore, I recommend that this paper is not ready for publication at this point and that it will benefit greatly from another iteration with stronger empirical results. I want to very strongly encourage the authors to further improve their paper based on the reviewer feedback.
train
[ "y_F4LYLSXuS", "Sw1mT0jhcey", "VlN79LieCB", "QYez-4vnmNY", "14k63Ma4ndB", "vyq3IYlKkK", "LjAE_db4bZ", "60_vPcO6BwR", "zINF0HGJjQf", "QMOBVpmJMaT", "im_dSeArPqn", "XirY6GUzgn8", "6jX0Vxy8qY7", "_fSxTb671VS", "6v5-iRTlMKk", "bT72SLl3gvU", "ehqOo4IwChG" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a method for improving generalisation and robustness across visual factors of variation in reinforcement learning, using a style transfer network to adversarially perturb the input to the policy, while the policy is trained to be invariant to this visual perturbation. The style transfer network i...
[ 5, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6 ]
[ 5, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2022_S0NsaRIxvQ", "VlN79LieCB", "LjAE_db4bZ", "6jX0Vxy8qY7", "iclr_2022_S0NsaRIxvQ", "60_vPcO6BwR", "im_dSeArPqn", "6v5-iRTlMKk", "14k63Ma4ndB", "iclr_2022_S0NsaRIxvQ", "XirY6GUzgn8", "y_F4LYLSXuS", "zINF0HGJjQf", "ehqOo4IwChG", "bT72SLl3gvU", "iclr_2022_S0NsaRIxvQ", "iclr_2022...
iclr_2022_qpcG27kYK6z
Concentric Spherical GNN for 3D Representation Learning
Learning 3D representations of point clouds that generalize well to arbitrary orientations is a challenge of practical importance in problems ranging from computer vision to molecular modeling. The proposed approach is based on a concentric spherical representation of 3D space, formed by nesting spatially-sampled spheres resulting from the highly regular icosahedral discretization. We propose separate intra-sphere and inter-sphere convolutions over the resulting concentric spherical grid, which are combined into a convolutional framework for learning volumetric and rotationally equivariant representations over point clouds. We demonstrate the effectiveness of our approach for 3D object classification, and towards resolving the electronic structure of atomistic systems.
Reject
This paper addresses the problem of learning representations of 3D point clouds and introduces an interesting approach of a concentric spherical GNN with the property of rotational equivariance. It shows some promising results on point cloud classification under SO(3) transformations and on predicting electronic state density of graphene allotropes. The reviews suggest that, while it does not suffer from any major flaws, the paper has a fairly large number of minor issues that add up to make it subpar for publication. The proposed approach has several hyperparameters, but the authors do not seem to be up front about how the parameters are selected except for stating that they use "standard tuning techniques" --- this is not a satisfactory answer and appears to be dodging the question. Many technical details and specific choices could use more thorough explanation and analysis. The distinction of the proposed approach in relation to the large body of existing literature could be more clearly spelled out. Collectively, these issues made the contribution of this paper less clear.
train
[ "2flwJtdYJGX", "Rv5xb77_E8Q", "vfIBXylWb0z", "xQiw_54maoT", "GzHq7n4JyQj", "l_ALzvwVeed", "kg_yox0KVVQ", "2cVUDMZ4mY", "cnullFa9l13", "_suxaLWJK2", "h4PcqDJqhAC" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewers for their time and for their feedback, suggestions, and questions on our work.\nBeyond responses to individual reviews, here we note some of the more substantial revisions we've made to the paper.\nWe have also uploaded a revised version of the paper, and marked significant changes in re...
[ -1, -1, -1, -1, -1, -1, -1, 6, 5, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 3, 5, 4 ]
[ "iclr_2022_qpcG27kYK6z", "h4PcqDJqhAC", "2cVUDMZ4mY", "kg_yox0KVVQ", "l_ALzvwVeed", "cnullFa9l13", "_suxaLWJK2", "iclr_2022_qpcG27kYK6z", "iclr_2022_qpcG27kYK6z", "iclr_2022_qpcG27kYK6z", "iclr_2022_qpcG27kYK6z" ]
iclr_2022_I-nQMZfQz7F
Learning Neural Implicit Functions as Object Representations for Robotic Manipulation
Robotic manipulation planning is the problem of finding a sequence of robot configurations that involves interactions with objects in the scene, e.g., grasp, placement, tool-use, etc. To achieve such interactions, traditional approaches require hand-designed features and object representations, and it still remains an open question how to describe such interactions with arbitrary objects in a flexible and efficient way. Inspired by neural implicit representations in 3D modeling, e.g. NeRF, we propose a method to represent objects as neural implicit functions upon which we can define and jointly train interaction features. The proposed pixel-aligned representation is directly inferred from camera images with known camera geometry, naturally acting as a perception component in the whole manipulation pipeline, while at the same time enabling sequential robot manipulation planning.
Reject
The paper initially received negative reviews; the authors did a good job during the response period: two reviewers have updated their scores to 6. The AC has carefully read the reviews, responses, and discussions, and agreed that the authors have also mostly addressed the concerns of reviewer gsUt as well. It is unprofessional for reviewer gsUt to not engage in discussions after multiple requests. The AC however also agrees with reviewer seqp that the new changes are major, and submissions are supposed to be evaluated in their initial form. Further, neither of the positive reviewers would like to champion the paper. The final recommendation is to reject the paper. The authors are encouraged to further improve and flesh out the paper based on the reviews for the next venue.
train
[ "XIW6anmlqR9", "0YwImhjqqed", "UaAb12dcwF", "hyaFnDgnAhR", "uytembzi8RK", "SZnqRp6Uh17", "0qa2qR5ngDb", "DIKH1BH2M38", "Y9ni4ECqO2j", "85UKbXEq_3", "jrJHtHynGLF", "2Qi_wgitAM", "YlGYZ5rz4ZL", "JVlkl4gHxqp", "yuHFYbXRTU", "RkoCRAOC5Zr", "8292dAmgP3z", "50TajNlA_oi", "Qycr5QxGEY4",...
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", ...
[ "This paper builds on recent progress in implicit object representations, where the representations are trained (or fine-tuned) along with a feature head on downstream manipulation tasks. Experiments are conducted by using this trained representation within an LGP formulation solved with a Gauss-Newton optimizer to...
[ 6, 5, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1 ]
[ 3, 3, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2022_I-nQMZfQz7F", "iclr_2022_I-nQMZfQz7F", "hyaFnDgnAhR", "8292dAmgP3z", "DIKH1BH2M38", "0YwImhjqqed", "Tr_sHl7aOJp", "50TajNlA_oi", "Qycr5QxGEY4", "iclr_2022_I-nQMZfQz7F", "2Qi_wgitAM", "YlGYZ5rz4ZL", "JVlkl4gHxqp", "RkoCRAOC5Zr", "85UKbXEq_3", "yuHFYbXRTU", "SZnqRp6Uh17", ...
iclr_2022_7pZiaojaVGU
An Equivalence Between Data Poisoning and Byzantine Gradient Attacks
To address the resilience of distributed learning, the ``Byzantine'' literature considers a strong threat model where workers can report arbitrary gradients to the parameter server. While this model helped generate several fundamental results, it has however sometimes been considered unrealistic, when the workers are mostly trustworthy machines. In this paper, we show a surprising equivalence between this model and data poisoning, a threat considered much more realistic. More specifically, we prove that any gradient attack can be reduced to data poisoning in a personalized federated learning system that provides PAC guarantees (which we show are both desirable and realistic in various personalized federated learning contexts such as linear regression and classification). Maybe most importantly, we derive a simple and practical attack that may be constructed against classical personalized federated learning models, and we show both theoretically and empirically the effectiveness of this attack.
Reject
This paper presents an analysis showing the equivalence between gradient and data poisoning attacks in personalized federated learning settings. The paper contains an analysis of an attack that requires only a single corrupt learning agent, providing results in the setting of PAC learnable models. The reviewers had several criticisms of the paper, some of which were addressed in the rebuttal. The first is that the presentation of the paper was at times confusing, and the theoretical results were hard to interpret. This has been addressed by several changes to the paper writing, including major changes to the layout. The reviewers feel that other criticisms were not entirely addressed. This includes the criticism that the experiments are in a fairly simplistic setting (GD on MNIST and Fashion MNIST), and that the theoretical results require strong assumptions and focus mostly on classical models that are learnable in convex frameworks. While the reviewers agree there are interesting questions posed in this paper, the consensus seems to be that the experimental and theoretical results in this paper should be further revised, and that a future version of this paper will be a great candidate for publication.
train
[ "TcZYMaH1B0l", "KMt4qcnbtzy", "tXZPJMxGYze", "I-pS2INOwT8", "MKOf3DonAhV", "J___aG3iIf4", "lGLL93CmUm", "8eD4k1Gs-WV", "pMXUUhiiMz", "UwLwU-o7lJ5", "-DsJnhb0DAu", "XKy9ag5YRoz", "opAMC-NUlTU", "4qMB1a75t07", "CCoVDxfj4e", "FZ6ODhA2LE1", "-NFE-uvw76G", "_VZLR2BtiG", "UudDZlYz36m",...
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_r...
[ " We thank the reviewer for their feedback on our rebuttal. \nOn the remaining concern about single vs multiple attackers: Note that in the paper we prove a single attacker can bias the model to any desired point. Now if we consider $f$ attackers, they can still bias the model to any desired point and our results h...
[ -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5, 3 ]
[ -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 4 ]
[ "KMt4qcnbtzy", "CCoVDxfj4e", "UwLwU-o7lJ5", "MKOf3DonAhV", "opAMC-NUlTU", "8eD4k1Gs-WV", "iclr_2022_7pZiaojaVGU", "pMXUUhiiMz", "FZ6ODhA2LE1", "iclr_2022_7pZiaojaVGU", "OD8QKqa4XXf", "uYFeJQBt9AT", "4qMB1a75t07", "UudDZlYz36m", "_VZLR2BtiG", "-NFE-uvw76G", "lGLL93CmUm", "iclr_2022_...
iclr_2022_GOr80bgf52v
Factored World Models for Zero-Shot Generalization in Robotic Manipulation
World models for environments with many objects face a combinatorial explosion of states: as the number of objects increases, the number of possible arrangements grows exponentially. In this paper, we learn to generalize over robotic pick-and-place tasks using object-factored world models, which combat the combinatorial explosion by ensuring that predictions are equivariant to permutations of objects. We build on one such model, C-SWM, which we extend to overcome the assumption that each action is associated with one object. To do so, we introduce an action attention module to determine which objects are likely to be affected by an action. The attention module is used in conjunction with a residual graph neural network block that receives action information at multiple levels. Based on RGB images and parameterized motion primitives, our model can accurately predict the dynamics of a robot building structures from blocks of various shapes. Our model generalizes over training structures built in different positions. More crucially, the learned model can make predictions about tasks not represented in training data. That is, we demonstrate successful zero-shot generalization to novel tasks. For example, we measure only 2.4% absolute decrease in our action ranking metric in the case of a block assembly task.
Reject
This paper presents a GNN-based attention mechanism and tests it on a robotic stacking task. While all the reviewers agree that this work is novel and interesting, they also are unanimous (even after the rebuttal) in pointing to the insufficient experimental evaluation of the proposed method. I encourage the authors to incorporate the feedback of all the reviewers.
train
[ "hWdS--8flt4", "PuqptDAvti2", "BQvQJYbVMMA", "2J2mrqqs7fB", "WKgZWIiIDN", "IJdQHjnOwZc", "wiustPPycr", "m9qraNDj7k_", "mWzG6wWV4U", "Jq3hvZBbGcm", "lbTGix6l8Eq", "9r-clhErPyE" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The presented work addresses the task of robotic manipulation tailored to environments containing many objects. Task planning in such environments entails a large number of possible combinations of actions given the number of objects. To address this challenge, the authors propose to use an attention module in combi...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "iclr_2022_GOr80bgf52v", "mWzG6wWV4U", "wiustPPycr", "iclr_2022_GOr80bgf52v", "iclr_2022_GOr80bgf52v", "Jq3hvZBbGcm", "9r-clhErPyE", "lbTGix6l8Eq", "hWdS--8flt4", "iclr_2022_GOr80bgf52v", "iclr_2022_GOr80bgf52v", "iclr_2022_GOr80bgf52v" ]
iclr_2022_YpBHDlalKDG
Complex Locomotion Skill Learning via Differentiable Physics
Differentiable physics enables efficient gradient-based optimizations of neural network (NN) controllers. However, existing work typically only delivers NN controllers with limited capability and generalizability. We present a practical learning framework that outputs unified NN controllers capable of tasks with significantly improved complexity and diversity. To systematically improve training robustness and efficiency, we investigated a suite of improvements over the baseline approach, including periodic activation functions, and tailored loss functions. In addition, we find our adoption of batching and a modified Adam optimizer effective in training complex locomotion tasks. We evaluate our framework on differentiable mass-spring and material point method (MPM) simulations, with challenging locomotion tasks and multiple robot designs. Experiments show that our learning framework, based on differentiable physics, delivers better results than reinforcement learning and converges much faster. We demonstrate that users can interactively control soft robot locomotion and switch among multiple goals with specified velocity, height, and direction instructions using a unified NN controller trained in our system.
Reject
A nice paper and very close to being good. But the focus on hyperparameter tuning of the optimisation method is really not novel, and the experimental validation is not strong enough. With both theory and experiments offering only marginal improvements, the paper is not considered quite ready yet. Strong suggestion to improve on the weaknesses of the paper and resubmit – next time you'll have a clear acceptance.
train
[ "BiuL5uskVBY", "B9dcrY62QEO", "VhDQTdsYTUk", "W9uAbJKoLgh", "T1fp36YNzaF" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This work proposes to learn locomotion skills on soft robots made of springs. It proposes to leverage differentiable physics along with an NN controller based on SIREN, which allows policies to be learned directly by minimizing loss functions defined on trajectories. The method allows learning locomotion and jump behav...
[ 6, -1, 6, 6, 5 ]
[ 4, -1, 3, 4, 4 ]
[ "iclr_2022_YpBHDlalKDG", "VhDQTdsYTUk", "iclr_2022_YpBHDlalKDG", "iclr_2022_YpBHDlalKDG", "iclr_2022_YpBHDlalKDG" ]
iclr_2022_GMYWzWztDx5
NormFormer: Improved Transformer Pretraining with Extra Normalization
During pretraining, the Pre-LayerNorm transformer suffers from a gradient magnitude mismatch: gradients at early layers are much larger than at later layers, while the optimal weighting of residuals is larger at earlier than at later layers. These issues can be alleviated by the addition of two normalization and two new scaling operations inside each layer. The extra operations incur negligible compute cost (+0.5\% parameter increase), but improve pretraining perplexity and downstream task performance for both causal and masked language models of multiple sizes. Adding NormFormer on top of the GPT3-Medium architecture can reach the SOTA perplexity 22\% faster, or converge 0.33 perplexity better in the same compute budget. This results in significantly stronger zero shot performance. For masked language modeling, NormFormer improves fine-tuned GLUE performance by 1.9\% on average.
Reject
This submission proposes a few small changes to the (PreLN) Transformer architecture that enable training with higher learning rates (and therefore can result in faster convergence). The changes include the addition of two layer norm operations as well as a learnable head scaling operation in multi-headed attention. The proposed operations add only a small computational overhead and should be simple to implement. Experiments are conducted on language modeling and masked language modeling, with improved results demonstrated at various scales and according to various evaluation procedures. The paper also includes a good amount of ablation study as well as some analysis. Reviews on the paper were mixed, and a great deal of changes were made to the paper during the rebuttal period. To summarize the concerns and recommendations, reviewers requested - better connection between the proposed changes and the purported issue (gradient scale mismatch between early/late layers) - better analysis of why gradient scale mismatch is a major issue and investigation of where it comes from - better comparison to existing techniques that allow for higher learning rate training of Transformers - additional experiments on different model types and ideally different codebases/implementations I think overall this is a solid submission, since it proposes a simple change that is reasonably likely to be helpful (or at least not harmful). However, I think that there are enough concerns with the current draft and there were enough changes made during rebuttal that this paper should be resubmitted to a future conference. I would suggest the authors take the final updated form from this round, add additional motivation/analysis/experiments, and resubmit, and I suspect a positive outcome.
val
[ "zMxxguTKqCd", "omBw41JglWX", "IkvnHZtE3xF", "QIaUFbxYPon", "8ury4tWoH_L", "_MqNSwXZdA", "F2AKCypWyK9", "wgr5nqJVU6d", "SJzg51N48ua", "LgGlF2iEroC", "1sSOeq3Kg1z", "fM8boGMg9gu", "mPHngICXWbs", "lHgESZhwyNT", "zIDNSxfTdlj" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper aims to improve pretraining Pre-LayerNorm transformers by alleviating two issues: early layers have much larger gradients than later ones, and naive residual learning can't provide optimal weighting. To this end, it proposes to add two LayerNorms after the multi-head attention and the GELU non-linear ac...
[ 5, -1, -1, -1, -1, -1, -1, -1, 8, -1, -1, -1, -1, 8, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2022_GMYWzWztDx5", "LgGlF2iEroC", "QIaUFbxYPon", "_MqNSwXZdA", "mPHngICXWbs", "1sSOeq3Kg1z", "wgr5nqJVU6d", "fM8boGMg9gu", "iclr_2022_GMYWzWztDx5", "zMxxguTKqCd", "zIDNSxfTdlj", "SJzg51N48ua", "lHgESZhwyNT", "iclr_2022_GMYWzWztDx5", "iclr_2022_GMYWzWztDx5" ]
iclr_2022_1Z3h4rCLvo-
Improving Long-Horizon Imitation Through Language Prediction
Complex, long-horizon planning and its combinatorial nature pose steep challenges for learning-based agents. Difficulties in such settings are exacerbated in low data regimes where over-fitting stifles generalization and compounding errors hurt accuracy. In this work, we explore the use of an often unused source of auxiliary supervision: language. Inspired by recent advances in transformer-based models, we train agents with an instruction prediction loss that encourages learning temporally extended representations that operate at a high level of abstraction. Concretely, we demonstrate that instruction modeling significantly improves performance in planning environments when training with a limited number of demonstrations on the BabyAI and Crafter benchmarks. In further analysis we find that instruction modeling is most important for tasks that require complex reasoning, while understandably offering smaller gains in environments that require simple plans. Our benchmarks and code will be publicly released.
Reject
The reviewers all consider the paper to be below the acceptance bar. While the revision addressed some concerns, several critical ones remain open. This includes empirical concerns with regard to the extremely simple grid-world environments used, and with regard to the vague distinction between instructions and goal specifications. To improve the submission, the authors should seek stronger empirical foundations, and either refine or remove vague distinctions with regard to the phenomena they aim to study. Special thanks to the reviewers for an extremely productive discussion.
train
[ "5IdMKvFXfm4", "0XsNF5ky-W", "kQRVaqYRnfQ", "BdWsypj46Ik", "TkM0B1ECKGD", "DMCLD7Cdqfv", "rSUCdRJts83", "H7MTVsfP_c1", "vo4jrks1Xbh", "gJaPxCR0iLk", "DoCr5_MeWvj", "gN9J8GRGIk0", "0CjqyzftE8", "2HwOu-UbkO", "qwzQrML_32n", "vre2KlYyNRv", "oBO1iKe2F1" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " First of all, I'd like to thank the authors for the detailed clarification RE ALFRED, I think this is a great addition to the manuscript. To make sure, when you say \"goal\" do you mean the NL instruction? and then when you say \"instruction\" you mean the synthetic PDDL-derived language? Do you think for the pre...
[ -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 5 ]
[ -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "DoCr5_MeWvj", "rSUCdRJts83", "5IdMKvFXfm4", "iclr_2022_1Z3h4rCLvo-", "gJaPxCR0iLk", "rSUCdRJts83", "H7MTVsfP_c1", "vre2KlYyNRv", "qwzQrML_32n", "oBO1iKe2F1", "oBO1iKe2F1", "BdWsypj46Ik", "BdWsypj46Ik", "iclr_2022_1Z3h4rCLvo-", "iclr_2022_1Z3h4rCLvo-", "iclr_2022_1Z3h4rCLvo-", "iclr_...
iclr_2022_OxgLa0VEyg-
Loss Function Learning for Domain Generalization by Implicit Gradient
Generalising robustly to distribution shift is a major challenge that is pervasive across most real-world applications of machine learning. A recent study highlighted that many advanced algorithms proposed to tackle such domain generalisation (DG) fail to outperform a properly tuned empirical risk minimisation (ERM) baseline. We take a different approach, and explore the impact of the ERM loss function on out-of-domain generalisation. In particular, we introduce a novel meta-learning approach to loss function search based on implicit gradient. This enables us to discover a general purpose parametric loss function that provides a drop-in replacement for cross-entropy. Our loss can be used in standard training pipelines to efficiently train robust models using any neural architecture on new datasets. The results show that it clearly surpasses cross-entropy, enables simple ERM to outperform significantly more complicated prior DG methods, and provides state-of-the-art performance across a variety of DG benchmarks. Furthermore, unlike most existing DG approaches, our setup applies to the most practical setting of single-source domain generalisation, on which we show significant improvement.
Reject
This paper considers the idea of meta-learning the loss function for domain generalization. It's a simple idea that seems to work reasonably well. Although, as pointed out by the reviewers, the margin is actually quite modest when compared to the strongest baselines (not ERM). On a positive note, many reviewers agree that the idea was simple, novel, and interesting. The insight that cross-entropy can be improved for domain generalization is interesting. On the other hand, many reviewers pointed out that, despite some careful empirical work, it's not clear why this idea works. I read the paper myself, and I agree that the paper could use a bit more work before it is ready for publication. Specifically, I agree with Reviewer eZ71, who asked for a clear justification of the proposed idea. The idea seems sensible, but there is some burden on the paper to provide insight, and not simply present an idea. Here are some specific suggestions that came up during discussion, which could strengthen the paper: - A more comprehensive discussion of the limitations of this approach. - It would be good to understand how critical the specific choice of parametric loss family was. Here are some questions that would be good to address: does the parametric family interact with the type of domain shift in the datasets? Why are Taylor polynomials preferable or beneficial for domain generalization compared to, e.g., a linear combination of standard loss functions? - Is the dataset on which you learn your ITL loss critical? I.e., how critical was the choice of rotated MNIST for learning the ITL loss? Does it generalize to very different and more diverse domain shift tasks, like those in the WILDS benchmark? It would be particularly interesting to see if loss functions meta-trained on distinct datasets learn similar parameters.
- More broadly, evaluation on larger and more diverse domain shift tasks, like those in the WILDS benchmark, would further strengthen the conclusions in the paper.
train
[ "uY0opQs1X1U", "_kfBzmPIAne", "iArSwwsiZbh", "DGzYVuzXJb", "3aWEzmkjysR", "yrSF0YGmJEF", "Voh2rJCR0I", "7B7q6N5cofA", "qIgvp0K8YU8", "iB4FI5mDLux", "Fp7KkCb8WHy", "W7RwxCN3DzN", "b7DnQtB8Uqu", "55R4d5l_qs5", "k7V0PjuewSA", "crXzxqo0VjO", "fjz9Vq1Ixo5", "EP1Y2Wi8Kip", "QPZyl-6fQKP...
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_revi...
[ " Dear authors, \n\nI added my post-rebuttal comments in the main review summary and decided to increase my score to (6). Please let me know if you agree with the proposed changes or have final thoughts on the discussion around the statistics.", "The authors design a scheme for meta-learning loss functions for do...
[ -1, 6, -1, 3, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ -1, 3, -1, 4, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "_kfBzmPIAne", "iclr_2022_OxgLa0VEyg-", "iclr_2022_OxgLa0VEyg-", "iclr_2022_OxgLa0VEyg-", "iB4FI5mDLux", "W7RwxCN3DzN", "W7RwxCN3DzN", "ZMluu6aHG8M", "iclr_2022_OxgLa0VEyg-", "55R4d5l_qs5", "b7DnQtB8Uqu", "DGzYVuzXJb", "crXzxqo0VjO", "fjz9Vq1Ixo5", "ErRFEHIo_GD", "ErRFEHIo_GD", "ErRF...
iclr_2022_cD0O_Sc-wNy
Learn the Time to Learn: Replay Scheduling for Continual Learning
Replay-based continual learning has been shown to be successful in mitigating catastrophic forgetting. Most previous works focus on increasing the sample quality in the commonly small replay memory. However, in many real-world applications, replay memories would be limited by constraints on processing time rather than storage capacity, as most organizations do store all historical data in the cloud. Inspired by human learning, we illustrate that scheduling over which tasks to revisit is critical to the final performance with finite memory resources. To this end, we propose to learn the time to learn for a continual learning system, in which we learn schedules over which tasks to replay at different times using Monte Carlo tree search. We perform extensive evaluation and show that our method can learn replay schedules that significantly improve final performance across all tasks compared to baselines that do not consider scheduling. Furthermore, our method can be combined with any other memory selection method, leading to consistently improved performance. Our results indicate that the learned schedules are also consistent with human learning insights.
Reject
This paper studies the problem of dynamically selecting samples to replay given that all previous data is stored. The paper shows that in this setting, selecting which samples to replay outperforms several baselines over a variety of datasets. I believe that the reviewers understood this work, but their initial opinions were quite mixed. Two of the reviewers did not "accept" this setting (all past data stored and accessible) as a reasonable one for continual learning. The discussion did not lead to a reconciliation. I found truth in both views. On one side, I can believe that the proposed setting has applications (recommender systems where historical data is kept seem like a reasonable one). I also find the approach reasonable since "compute" is often the bottleneck and not memory/storage. On the other, I also see that this is specializing the CL problem a bit and so, while immediately useful, it may or may not help to improve more general continual-learning approaches. This is highly speculative. Another argument against this setting is that it is not absolutely clear that CL approaches are necessarily required in it. This really depends on the specifics of the problems. Several of the questions and weaknesses raised by the reviewers were also discussed and addressed by the authors. Overall, the final score from the reviewers makes this a very borderline paper. Further, even amongst the positive reviewers, one provides an overall recommendation of a 6 (marginally above the acceptance threshold). In the end, the paper was in the category of papers that were examined closely for possible acceptance, but the broad view of the area chair and the reviewers was that the paper could benefit from additional work before publication.
train
[ "GN3_mcx2zxt", "fN7TWrmwQUX", "uYk9vghRbcc", "45SqhCeWbKA", "bGxuVZPr6fn", "44aqycXGWgB", "JcZbDQb2Iy", "qQSZM7DWLb3", "DvozmQXlu4W", "k7bviQWMyUb", "wTSWjKGwSik", "26Isi31Fp7", "Xy-WbQdYvmp", "WBgRqNyAkPy", "-eHriLn5Df0", "me8N-eRjwpM", "99Ggiduaznr" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The key motivation of this work is that the bottleneck of replay in continual learning is the processing time in each training cycle and not storage space for the historical dataset. Hence, this work has been approached from the angle of fixed-sized memory allowance for each experience training cycle. The main res...
[ 8, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 3, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2022_cD0O_Sc-wNy", "45SqhCeWbKA", "bGxuVZPr6fn", "44aqycXGWgB", "Xy-WbQdYvmp", "26Isi31Fp7", "iclr_2022_cD0O_Sc-wNy", "iclr_2022_cD0O_Sc-wNy", "wTSWjKGwSik", "iclr_2022_cD0O_Sc-wNy", "qQSZM7DWLb3", "99Ggiduaznr", "WBgRqNyAkPy", "me8N-eRjwpM", "GN3_mcx2zxt", "iclr_2022_cD0O_Sc-wNy...