paper_id stringlengths 19 21 | paper_title stringlengths 8 170 | paper_abstract stringlengths 8 5.01k | paper_acceptance stringclasses 18 values | meta_review stringlengths 29 10k | label stringclasses 3 values | review_ids list | review_writers list | review_contents list | review_ratings list | review_confidences list | review_reply_tos list |
|---|---|---|---|---|---|---|---|---|---|---|---|
iclr_2022_O_OJoU4_yj | Stabilized Self-training with Negative Sampling on Few-labeled Graph Data | Graph neural networks (GNNs) are designed for semi-supervised node classification on graphs where only a small subset of nodes have class labels. However, under extreme cases when very few labels are available (e.g., 1 labeled node per class), GNNs suffer from severe result quality degradation.
Specifically, we observe that existing GNNs suffer from an unstable training process on few-labeled graph data, resulting in inferior performance on node classification. Therefore, we propose an effective framework, Stabilized self-training with Negative sampling (SN), which is applicable to existing GNNs to stabilize the training process and enhance the training data, and consequently, boost classification accuracy on graphs with few labeled data. In experiments, we apply our SN framework to two existing GNN base models (GCN and DAGNN) to get SNGCN and SNDAGNN, and evaluate the two methods against 13 existing solutions over 4 benchmark datasets. Extensive experiments show that the proposed SN framework is highly effective compared with existing solutions, especially under settings with very few labeled data. In particular, on the benchmark dataset Cora with only 1 labeled node per class, while GCN only has 44.6% accuracy, SNGCN achieves 62.5% accuracy, improving GCN by 17.9%; SNDAGNN has accuracy 66.4%, improving that of the base model DAGNN (59.8%) by 6.6%. | Reject | The paper studies stability issues of GNN training when data are limited. The key contribution of this work is to use reweighted self-training and negative sampling to stabilize GNN. Multiple reviewers raised major concerns on the technical novelty, experimental setup, comparison, and results. No response was provided during discussion. I recommend this submission be rejected. | train | [
"8T3-_eeLVGp",
"slwVVHD-ocl",
"t0WpbOOGMEY",
"GRST13devsi",
"u2YHUL6K_4o"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a self-training method with negative sampling for node classification on few-labeled graph data. The proposed method applies data augmentation (i.e., pseudo label) and negative sampling regularization to augment node classification model. Experiments are conducted to show that the proposed meth... | [
3,
1,
3,
5,
5
] | [
5,
4,
4,
4,
4
] | [
"iclr_2022_O_OJoU4_yj",
"iclr_2022_O_OJoU4_yj",
"iclr_2022_O_OJoU4_yj",
"iclr_2022_O_OJoU4_yj",
"iclr_2022_O_OJoU4_yj"
] |
iclr_2022_c8AvdRAyVkz | Perturbation Deterioration: The Other Side of Catastrophic Overfitting | Our goal is to understand why the robustness accuracy would abruptly drop to zero, after conducting FGSM-style adversarial training for too long. While this phenomenon is commonly explained as overfitting, we observe that it is a twin process: not only does the model catastrophically overfit to one type of perturbation, but the perturbation also deteriorates into random noise. For example, at the same epoch when the FGSM-trained model catastrophically overfits, its generated perturbations deteriorate into random noise. Intuitively, once the generated perturbations become weak and inadequate, models would be misguided to overfit those weak attacks and fail to defend against strong ones. In light of our analyses, we propose APART, an adaptive adversarial training method, which parameterizes perturbation generation and progressively strengthens them. In our experiments, APART successfully prevents perturbation deterioration and catastrophic overfitting. Also, APART significantly improves the model robustness while maintaining the same efficiency as FGSM-style methods, e.g., on the CIFAR-10 dataset, APART achieves 53.89% accuracy under the PGD-20 attack and 49.05% accuracy under the AutoAttack. | Reject | The paper focuses on the Catastrophic Overfitting problem of FGSM adversarial training. One reviewer gave a score of 6 and the other three reviewers gave negative scores. The authors failed to address or clarify (no rebuttal provided) how perturbation distribution and robustness are linked (four reviewers all agree on this). Other issues include unclear motivation, limited experimental validation, and lack of theoretical analysis. Thus, the current version of the paper cannot be accepted to ICLR. | train | [
"V_w0GMX54N",
"L3sVn6pvFgk",
"YeJ8BI3Gts",
"Br5Wv8aD1C"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper made two contributions: 1. it identified the co-occurrence of abrupt robustness drop and perturbation degrading to (potentially) random noise; 2. it proposed a computationally faster adversarial training algorithm while attaining a good accuracy. Point 1. is interesting: it suggests previously identified... | [
6,
3,
5,
3
] | [
2,
3,
5,
5
] | [
"iclr_2022_c8AvdRAyVkz",
"iclr_2022_c8AvdRAyVkz",
"iclr_2022_c8AvdRAyVkz",
"iclr_2022_c8AvdRAyVkz"
] |
iclr_2022_vQmIksuciu2 | EXPLAINABLE AI-BASED DYNAMIC FILTER PRUNING OF CONVOLUTIONAL NEURAL NETWORKS | Filter pruning is one of the most effective ways to accelerate Convolutional Neural Networks (CNNs). Most of the existing works are focused on the static pruning of CNN filters. In dynamic pruning of CNN filters, existing works are based on
the idea of switching between different branches of a CNN or exiting early based on the difficulty of a sample. These approaches can reduce the average latency of inference, but they cannot reduce the longest-path latency of inference. In contrast, we present a novel approach to dynamic filter pruning that utilizes explainable AI along with early coarse prediction in the intermediate layers of a CNN. This coarse prediction is performed using a simple branch that is trained to perform top-k classification. The branch either predicts the output class with high confidence, in which case the rest of the computations are skipped, or it predicts the output class to be within a subset of possible output classes. After this coarse prediction, only those filters that are important for this subset of classes are utilized for further computations. The importances of filters for each output class are obtained using explainable AI. Using this architecture of dynamic pruning, we not only reduce the average latency of inference, but we can also reduce the longest-path latency of inference. Our proposed architecture for dynamic pruning can be deployed on different hardware platforms. We evaluate our approach using commonly used image classification models and datasets on CPU and GPU platforms and demonstrate speedup without significant overhead. | Reject | ### Summary
This paper presents a technique to reduce the worst-case latency of inference. The key idea is to use a combination of early exit and filter selection to achieve its results. The filter selection predicts the top-k classes for the input and, using that indication, uses the filters that are the most relevant (using DeepLIFT) to refine the result.
### Strengths (from Discussion)
- The idea is interesting. Early exit, mixtures of experts (one potential interpretation of the filter selection here), as well as pruning are interesting mechanisms for neural network efficiency.
There may be new opportunities to find synergies in their combination.
### Weaknesses (from Discussion)
- The clarity of writing could be significantly improved, particularly in the description and illustration of the constituent techniques. Figures, such as those in https://arxiv.org/abs/2008.13006, that clearly present the constitution of various layers, in particular, would help.
- There are relevant and applicable baselines against which a comparison would contextualize the strength of the approach (as per Reviewer vKUc's examples)
- ImageNet experiments appear to be within reach of this experimental apparatus (i.e., without extreme cost). Hence, such experiments would validate the applicability of this approach to practice.
- A small point arose that longest-path inference was not motivated. Work on optimizing tail latency (https://research.google/pubs/pub40801/) may be helpful contextualization here.
### Recommendation
My recommendation is Reject. The work here is a very promising start for a new idea. Though requests for additional experimentation and baselines can be ill-defined recommendations, here, scaling of the results to ImageNet as well as comparing against baselines in the literature (as per Reviewer vKUc's examples) would provide much stronger scoping for this work. | train | [
"cDw08tbMpoD",
"0buolkG7MNS",
"YhopupQUyjd",
"sF3SKodhze"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper focusses on a dynamic filter pruning technique that reduces the longest path latency of inference while using explainable AI (XAI) to help with determining pruning criterion. The approach uses an early coarse prediction branch that is used to perform a top-k classification. This branch is added to the m... | [
3,
1,
5,
3
] | [
5,
4,
4,
3
] | [
"iclr_2022_vQmIksuciu2",
"iclr_2022_vQmIksuciu2",
"iclr_2022_vQmIksuciu2",
"iclr_2022_vQmIksuciu2"
] |
iclr_2022_R11xJsRjA-W | The Connection between Out-of-Distribution Generalization and Privacy of ML Models | With the goal of generalizing to out-of-distribution (OOD) data, recent domain generalization methods aim to learn ``stable'' feature representations whose effect on the output remains invariant across domains. Given the theoretical connection between generalization and privacy, we ask whether better OOD generalization leads to better privacy for machine learning models, where privacy is measured through robustness to membership inference (MI) attacks. In general, we find that the relationship does not hold. Through extensive evaluation on a synthetic dataset and image datasets like MNIST, Fashion-MNIST, and Chest X-rays, we show that a lower OOD generalization gap does not imply better robustness to MI attacks. Instead, privacy benefits are based on the extent to which a model captures the stable features. A model that captures stable features is more robust to MI attacks than models that exhibit better OOD generalization but do not learn stable features. Further, for the same provable differential privacy guarantees, a model that learns stable features provides higher utility as compared to others. Our results offer the first extensive empirical study connecting stable features and privacy, and also have a takeaway for the domain generalization community; MI attack can be used as a complementary metric to measure model quality. | Reject | Motivated by the connections between privacy and generalization, this paper studies the correlation between MI attack accuracy and OOD accuracy on synthetic and real-world datasets. It shows that the measurements are not always correlated. I found the connection between the motivation and actual measurements performed in the experiments to be rather tenuous. Therefore it is hard to draw any insightful conclusions from the empirical results. 
It should also be noted that a somewhat related disconnect between the accuracy of MIA and generalization has already been observed in prior work. | train | [
"ijIA0_HYHMG",
"gE4c7D_uExq",
"ptMZhXMoaI",
"sO7ETyR8tNA",
"YUvQvfou1H",
"uQE0juyC5gB",
"ZjTwxfJ_IoL",
"JUVQG6LJeMC",
"-VedvhvmuI7"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank all the reviewers for their helpful feedback and appreciate the suggestions for improving the paper. We provide clarification responses for each reviewer separately in the comments. We clarify our main contribution below:\n\nPrior work has shown theoretical connection between generalization gap, stable ... | [
-1,
-1,
-1,
-1,
-1,
1,
5,
5,
3
] | [
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
4
] | [
"iclr_2022_R11xJsRjA-W",
"-VedvhvmuI7",
"JUVQG6LJeMC",
"ZjTwxfJ_IoL",
"uQE0juyC5gB",
"iclr_2022_R11xJsRjA-W",
"iclr_2022_R11xJsRjA-W",
"iclr_2022_R11xJsRjA-W",
"iclr_2022_R11xJsRjA-W"
] |
iclr_2022_uc8UsmcInvB | Statistically Meaningful Approximation: a Theoretical Analysis for Approximating Turing Machines with Transformers | A common lens to theoretically study neural net architectures is to analyze the functions they can approximate. However, constructions from approximation theory may be unrealistic and therefore less meaningful. For example, a common unrealistic trick is to encode target function values using infinite precision. To address these issues, this work proposes a formal definition of statistically meaningful (SM) approximation which requires the approximating network to exhibit good statistical learnability. We study SM approximation for two function classes: boolean circuits and Turing machines. We show that overparameterized feedforward neural nets can SM approximate boolean circuits with sample complexity depending only polynomially on the circuit size, not the size of the network. In addition, we show that transformers can SM approximate Turing machines with computation time bounded by $T$ with sample complexity polynomial in the alphabet size, state space size, and $log(T)$. We also introduce new tools for analyzing generalization which provide much tighter sample complexities than the typical VC-dimension or norm-based bounds, which may be of independent interest. | Reject | This is an extremely interesting and timely paper regarding the approximation ability, with statistical consequences, of circuits and (computation-bounded) Turing machines by feedforward networks and transformers. The paper has an interesting and valuable setting, and also many unusual ideas, together which can inspire a lot of future work. Unfortunately, the reviewers had significant difficulties with the presentation and setting; the Transformer material in particular lacks clarity. As such, the paper could use more time and polish.
Separately, I will recommend in the future that authors consider making use of the rebuttal and revision phase. While it is not strictly required, it seems that in ICLR, scores shift quite a lot in this phase, and it has (for better or worse) become standard to have a thorough involvement in it. It was difficult to cause score changes after the initial phase due to the lack of review responses. That said, I sincerely hope the authors continue with this valuable line of work. | train | [
"leds70P-tTi",
"zc9EwkUwOyz",
"Qs4t7gAtEEl",
"l_9xRCSmT-o"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a new criterion for learnable representations of Boolean circuits and Turing machines. The criterion \"statistically meaningful\" is the main contribution along with its application to the above two classes. There are already many proofs that functions and programs can be represented by variou... | [
3,
5,
5,
6
] | [
4,
4,
2,
4
] | [
"iclr_2022_uc8UsmcInvB",
"iclr_2022_uc8UsmcInvB",
"iclr_2022_uc8UsmcInvB",
"iclr_2022_uc8UsmcInvB"
] |
iclr_2022_N2nJzgb_ldR | FastRPB: a Scalable Relative Positional Encoding for Long Sequence Tasks | Transformers achieve remarkable performance in various domains, including NLP, CV, audio processing, and graph analysis. However, they do not scale well on long sequence tasks due to their quadratic complexity w.r.t. the input’s length. Linear Transformers were proposed to address this limitation. However, these models have shown weaker performance on long sequence tasks compared to the original model. In this paper, we explore Linear Transformer models, rethinking their two core components. Firstly, we improve the Linear Transformer with a $\textbf{S}$hift-$\textbf{I}$nvariant $\textbf{K}$ernel $\textbf{F}$unction $\textbf{SIKF}$, which achieves higher accuracy without loss in speed. Secondly, we introduce $\textbf{FastRPB}$ which stands for $\textbf{Fast}$ $\textbf{R}$elative $\textbf{P}$ositional $\textbf{B}$ias, which efficiently adds positional information to self-attention using Fast Fourier Transformation. FastRPB is independent of the self-attention mechanism and can be combined with the original self-attention and all its efficient variants. FastRPB has $\mathcal{O}(N\log{N})$ computational complexity, requiring $\mathcal{O}(N)$ memory w.r.t. input sequence length $N$.
We compared the introduced modifications with recent Linear Transformers in different settings: text classification, document retrieval, and image classification. Extensive experiments with FastRPB and SIKF demonstrate that our model significantly outperforms another efficient positional encoding method in accuracy, having up to 1.5x higher speed and requiring up to 10x less memory than the original Transformer. | Reject | All reviewers are in agreement to reject this paper. The main objection is that the tasks chosen are small scale and that the mixed results are not strong enough. The authors did not attempt to raise substantial issues to be discussed. | val | [
"XZFYJz65JQ",
"vLUqGBEjUlj",
"DPy71BYw7SA",
"rmo1WGoHJlZ",
"5cm9emo0AiC",
"wGSnw-RYjsr",
"JGJYE-fME95",
"alA6nwWZeZv"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the detailed review! We will take into account the recommendations left by you, make conclusions, and do not make mistakes in the future",
" Thank you for the detailed review! We will take into account the recommendations left by you, make conclusions, and do not make mistakes in the future",
" ... | [
-1,
-1,
-1,
-1,
5,
3,
5,
5
] | [
-1,
-1,
-1,
-1,
3,
4,
4,
2
] | [
"JGJYE-fME95",
"wGSnw-RYjsr",
"5cm9emo0AiC",
"alA6nwWZeZv",
"iclr_2022_N2nJzgb_ldR",
"iclr_2022_N2nJzgb_ldR",
"iclr_2022_N2nJzgb_ldR",
"iclr_2022_N2nJzgb_ldR"
] |
iclr_2022_057dxuWpfx | Shaped Rewards Bias Emergent Language | One of the primary characteristics of emergent phenomena is that they are determined by the basic properties of the system whence they emerge as opposed to explicitly designed constraints. Reinforcement learning is often used to elicit such phenomena which specifically arise from the pressure to maximize reward. We distinguish two types of rewards. The first is the base reward which is motivated directly by the task being solved. The second is shaped rewards which are designed specifically to make the task easier to learn by introducing biases in the learning process. The inductive bias which reward shaping introduces is problematic for emergent language experimentation because it biases the object of study: the emergent language. The fact that shaped rewards are intentionally designed conflicts with the basic premise of emergent phenomena arising from basic principles. In this paper, we use a simple sender-receiver navigation game to demonstrate how reward shaping can 1) explicitly bias the semantics of the learned language, 2) significantly change the entropy of the learned communication, and 3) mask the potential effects of other environmental variables of interest. | Reject | All reviewers eventually agreed on rejection. The highest-scoring reviewer agreed that their interpretation of the framing of the paper caused their initial high score, whereas the other reviewers took a totally different view on the paper's contribution. The authors agreed that the text of the paper was not clear in this regard, and the high-scoring reviewer downgraded their score and suggested a different pitch.
Much of the review discussion focused on how the paper includes a single handcrafted environment for empirical evaluation, and on missing related work on reward shaping. In the AC's view (and several of the reviewers said this too), the simple observation "non-obvious shaped rewards bias language" indeed begs for a broader study across a variety of environments.
Whether more experiments are needed or if this work can be reshaped such that one existence proof experiment is enough does not need to be resolved here; the paper in its current form needs significant changes. | train | [
"ry_5xUPt69",
"Bg1c1FlR7Nk",
"E5vLulmKUXT",
"MF0TbivVS4s",
"IUb4YoFgPQI",
"H8EkWNH3ehJ",
"IkDWBRed2hv",
"bk0odE1B7up",
"KAkBp7l0XjZ",
"oNzsOWL0uRC",
"1XS4RxG-AuU",
"058N3FWmPkJ",
"AgUm2nQ7W73",
"a_tJyse4qfe",
"_uK12vmtsQ",
"og-qLtnAHhJ",
"q_lHUxg-aKq",
"tm0SRE4r8Zm",
"DYv_dEvFyBm... | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"a... | [
" Thank you for the further response; the perspective you provide on the paper's framing will prove very helpful in future revisions.\nI am largely in agreement that the philosophical aspects of the paper (i.e., how to define \"non-obvious\" and \"first principles\") as well the empirical ones (i.e., breadth of exp... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
1,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
4
] | [
"AgUm2nQ7W73",
"oNzsOWL0uRC",
"IUb4YoFgPQI",
"IUb4YoFgPQI",
"_uK12vmtsQ",
"a_tJyse4qfe",
"a_tJyse4qfe",
"DRtQpbEFCby",
"iclr_2022_057dxuWpfx",
"DRtQpbEFCby",
"tm0SRE4r8Zm",
"1XS4RxG-AuU",
"DRZFzMJ-DZb",
"iclr_2022_057dxuWpfx",
"pzX2ws-zrs3",
"2R1N5cf19Tn",
"2R1N5cf19Tn",
"2r7fw3hZN... |
iclr_2022_M-9bPO0M2K5 | MetaBalance: High-Performance Neural Networks for Class-Imbalanced Data | Class-imbalanced data, in which some classes contain far more samples than others, is ubiquitous in real-world applications. Standard techniques for handling class-imbalance usually work by training on a re-weighted loss or on re-balanced data. Unfortunately, training overparameterized neural networks on such objectives causes rapid memorization of minority class data. To avoid this trap, we harness meta-learning, which uses both an "outer-loop'' and an "inner-loop'' loss, each of which may be balanced using different strategies. We evaluate our method, MetaBalance, on image classification, credit-card fraud detection, loan default prediction, and facial recognition tasks with severely imbalanced data. We find that MetaBalance outperforms a wide array of popular strategies designed to handle class-imbalance, especially in scenarios with very few samples in minority classes. | Reject | This paper proposes a method for class-imbalanced data based on meta-learning. The technical contribution of the proposed method is limited as it is a reasonable but straightforward extension of the existing method. In addition, as commented by the reviewers,
the comparison with existing methods is insufficient; it is unclear why the model is meta-learned with balanced test data; and hyperparameter tuning details are not given. | train | [
"ClqOLM1h_EX",
"M3v-IL7ZKH1",
"1rF-G18BT3",
"mDB_qTNUyfC"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a meta-learning-based method to learn under class imbalance. Class imbalance is ubiquitous in many real applications and neural networks tend to learn biased models toward majority classes. Analogous to a popular meta-learning method called MAML, the proposed method consists of 2 nested training... | [
6,
3,
3,
5
] | [
4,
5,
5,
4
] | [
"iclr_2022_M-9bPO0M2K5",
"iclr_2022_M-9bPO0M2K5",
"iclr_2022_M-9bPO0M2K5",
"iclr_2022_M-9bPO0M2K5"
] |
iclr_2022_zaALYtvbRlH | SpanDrop: Simple and Effective Counterfactual Learning for Long Sequences | Distilling supervision signal from a long sequence to make predictions is a challenging task in machine learning, especially when not all elements in the input sequence contribute equally to the desired output. In this paper, we propose SpanDrop, a simple and effective data augmentation technique that helps models identify the true supervision signal in a long sequence with very few examples. By directly manipulating the input sequence, SpanDrop randomly ablates parts of the sequence at a time and asks the model to perform the same task to emulate counterfactual learning and achieve input attribution. Based on theoretical analysis of its properties, we also propose a variant of SpanDrop based on the beta-Bernoulli distribution, which yields diverse augmented sequences while providing a learning objective that is more consistent with the original dataset. We demonstrate the effectiveness of SpanDrop on a set of carefully designed toy tasks, as well as various natural language processing tasks that require reasoning over long sequences to arrive at the correct answer, and show that it helps models improve performance both when data is scarce and abundant. | Reject | The paper proposes a data augmentation approach called SpanDrop to help distill supervision signals from a long sequence prediction problem. The reviewers generally agree on two major drawbacks of the paper. First, the limited novelty of this approach. Second, the experiment results are not very convincing.
After reading the responses from the authors, I don’t think the authors convinced me of the novelty of the work, especially when comparing it to word dropout. Whether you treat the data or the model as a black box, it’s effectively doing the same thing. Apart from that, the model can only be used in the setting of “underspecification” long sequence tasks, which diminishes its value in real applications.
On the experiment side, there are three issues. First, many tasks considered are not long sequence tasks. Second, the improvement is marginal in many cases. Third, more related methods should be considered as baselines. Besides these three points raised by the reviewers, I also want to raise the point that it is not (and should not be) acceptable to report ALL your language experiment results on dev sets. I understand it is more time-consuming to get the test results on tasks where the test has to be done online, e.g. SQuAD. However, it is not good practice, in general, to reach conclusions merely on dev sets.
Based on the reviewers' comments and the reasons listed above, I recommend rejection of this paper. | train | [
"_l1QZ9izKY2",
"nBjPY8R6YIx",
"qklaQuvhGh_",
"EU8FpkmDo8U",
"YNUGKDH6Qsj",
"1V9AZIdNWvm",
"5NtPffkcg5X",
"mxl6piFMH-E",
"B0SPETCXnxI",
"9kJd9LkP5k",
"5SONjJQCFcx",
"FzzbMZ14-LE",
"XoUHF965-h8"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Your response helped me understand better. Thank you for the detailed explanation.",
" Thank you for the insightful follow-up questions and discussion!\n\nR1': To clarify, Remarks 1 and 2 actually make no assumption about the independence in saliency among spans, but reason about the entire collection of $m$ su... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
8,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
3
] | [
"nBjPY8R6YIx",
"EU8FpkmDo8U",
"YNUGKDH6Qsj",
"mxl6piFMH-E",
"XoUHF965-h8",
"FzzbMZ14-LE",
"9kJd9LkP5k",
"5SONjJQCFcx",
"iclr_2022_zaALYtvbRlH",
"iclr_2022_zaALYtvbRlH",
"iclr_2022_zaALYtvbRlH",
"iclr_2022_zaALYtvbRlH",
"iclr_2022_zaALYtvbRlH"
] |
iclr_2022_bM45i3LQBdl | Combining Differential Privacy and Byzantine Resilience in Distributed SGD | Privacy and Byzantine resilience (BR) are two crucial requirements of modern-day distributed machine learning. The two concepts have been extensively studied individually but the question of how to combine them effectively remains unanswered. This paper contributes to addressing this question by studying the extent to which the distributed SGD algorithm, in the standard parameter-server architecture, can learn an accurate model despite (a) a fraction of the workers being malicious (Byzantine), and (b) the other fraction, whilst being honest, providing noisy information to the server to ensure differential privacy (DP). We first observe that the integration of standard practices in DP and BR is not straightforward. In fact, we show that many existing results on the convergence of distributed SGD under Byzantine faults, especially those relying on $(\alpha,f)$-Byzantine resilience, are rendered invalid when honest workers enforce DP. To circumvent this shortcoming, we revisit the theory of $(\alpha,f)$-BR to obtain an approximate convergence guarantee. Our analysis provides key insights on how to improve this guarantee through hyperparameter optimization. Essentially, our theoretical and empirical results show that (1) an imprudent combination of standard approaches to DP and BR might be fruitless, but (2) by carefully re-tuning the learning algorithm, we can obtain reasonable learning accuracy while simultaneously guaranteeing DP and BR. | Reject | The paper considers the natural class of algorithms, namely aggregators with Gaussian noise for distributed SGD with differential privacy (DP) and Byzantine resilience (BR). Previous results show VN -> BR -> convergence of SGD. The authors first show that aggregators with Gaussian noise satisfy DP but necessarily violate VN, so approximate VN is proposed. Theorem 2 shows approximate VN -> convergence.
Proposition 2 shows the above algorithms satisfy approximate VN with certain parameters. With the combined bound in Corollary 1, the authors observe (and then verify by experiments) that a larger batch size is beneficial, and in particular more beneficial than when DP or BR is enforced alone. In the formulation, an important baseline of robust mean aggregation [Diakonikolas, Kamath, Kane, Li, Moitra, Stewart '2016] and an even more relevant baseline of robust and DP mean aggregation [Liu, Kong, Kakade, Oh '21] are somehow missing. One would assume that directly applying these well-known techniques might give the desired DP and robust SGD. The field at the intersection of differential privacy and robustness has evolved quite a bit recently and tremendous technical innovations are happening. Given the relevance of the proposed problem to this line of work, one should make the connections precise and explain the differences. | test | [
"Pt7iwwX2SEH",
"cCrjuWCuP3p",
"fIfSZjH1ox6",
"ZTkSXgdRjIk",
"varwBvUc8Go",
"fcj4FEc5QmB",
"SluBGzV-RmW",
"wkwQhpMjdlV",
"xwRsD8fIe7K",
"Ecc5OwLhydU",
"-eDiFKApzv3",
"ncowQ208CYo"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The authors clarified some points, but I still think the paper needs work. I can't recommend acceptance in its current form.",
" **Setup:** Thank you for pointing this out, the setup you presented is exactly the one we consider. We will make it more explicit in the next version of the paper. We will also add ap... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
3
] | [
"ZTkSXgdRjIk",
"fIfSZjH1ox6",
"wkwQhpMjdlV",
"varwBvUc8Go",
"ncowQ208CYo",
"-eDiFKApzv3",
"Ecc5OwLhydU",
"xwRsD8fIe7K",
"iclr_2022_bM45i3LQBdl",
"iclr_2022_bM45i3LQBdl",
"iclr_2022_bM45i3LQBdl",
"iclr_2022_bM45i3LQBdl"
] |
iclr_2022_9q3g_5gQbbA | Towards Understanding Data Values: Empirical Results on Synthetic Data | Understanding the influence of data on machine learning models is an emerging research field. Inspired by recent work in data valuation, we perform several experiments to get an intuition for this influence on a multi-layer perceptron. We generate a synthetic two-dimensional data set to visualize how different valuation methods value data points on a mesh grid spanning the relevant feature space. In this setting, individual data values can be derived directly from the impact of the respective data points on the decision boundary. Our results show that the most important data points are the misclassified ones. Furthermore, despite performance differences on real-world data sets, all investigated methods except one qualitatively agree on the data values derived from our experiments. Finally, we place our results into the recent literature and discuss data values and their relationship to other methods. | Reject | The reviews are of adequate quality. The responses by the authors are commendable, but ICLR is selective and reviewers continue to believe that more experiments and more rigorous analysis are needed. | train | [
"dku3LVQA7mO",
"IrW887TWAGe",
"7Zn1yWsu6t6",
"S8NS0UKQkha",
"zQ-OmmPR3Yt",
"fLM0Qz5ho6Q",
"6sYlXEhPsLI",
"fTW6vq7atHn"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I would like to thank the authors for their response. Their suggestion of validating their method with PCA should be integrated into future versions of the paper. At this point, we don't seem to have a disagreement. The decision of acceptance will be determined on the sufficiency of the presented material.",
"T... | [
-1,
3,
-1,
-1,
-1,
3,
3,
6
] | [
-1,
5,
-1,
-1,
-1,
4,
4,
3
] | [
"S8NS0UKQkha",
"iclr_2022_9q3g_5gQbbA",
"fTW6vq7atHn",
"6sYlXEhPsLI",
"IrW887TWAGe",
"iclr_2022_9q3g_5gQbbA",
"iclr_2022_9q3g_5gQbbA",
"iclr_2022_9q3g_5gQbbA"
] |
iclr_2022_0rjx6jy25R4 | Classify and Generate Reciprocally: Simultaneous Positive-Unlabelled Learning and Conditional Generation with Extra Data | The scarcity of class-labeled data is a ubiquitous bottleneck in a wide range of machine learning problems. While abundant unlabeled data normally exist and provide a potential solution, it is extremely challenging to exploit them. In this paper, we address this problem by leveraging Positive-Unlabeled~(PU) classification and the conditional generation with extra unlabeled data \emph{simultaneously}, both of which aim to make full use of agnostic unlabeled data to improve classification and generation performance. In particular, we present a novel training framework to jointly target both PU classification and conditional generation when exposing to extra data, especially out-of-distribution unlabeled data, by exploring the interplay between them: 1) enhancing the performance of PU classifiers with the assistance of a novel Conditional Generative Adversarial Network~(CGAN) that is robust to noisy labels, 2) leveraging extra data with predicted labels from a PU classifier to help the generation. Our key contribution is a Classifier-Noise-Invariant Conditional GAN~(CNI-CGAN) that can learn the clean data distribution from noisy labels predicted by a PU classifier. Theoretically, we proved the optimal condition of CNI-CGAN and experimentally, we conducted extensive evaluations on diverse datasets, verifying the simultaneous improvements on both classification and generation. | Reject | The paper combines discriminative and generative positive-unlabeled learning into a single framework. The reviewers argued the novelty and contributions are not enough for ICLR and unfortunately we cannot accept it for publication. | train | [
"BR2DTkl-7ZX",
"FZd6EtjxEB3",
"lcZM8KWoyv",
"MLgSHHfdJnq",
"UqO2gOfk3Gs",
"w3wh_lQJNfh",
"XPSoEASydg",
"WLOskkD0yXm",
"p8oFvH1Ix8S",
"xpWQOVVCvu",
"8hj0Nm3Na7i",
"K4JtnLVPF5F",
"qdsTlZUMj4Z",
"wCRH7OFchYm",
"B60bePyMGAs"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper targets at relieving the massive labeled data consumption of deep learning through the framework of semi-supervised learning. In particular, it finds out that two training approaches, Positive-Unlabeled classification and the conditional generation, can benefit each other. Jointly conducting these two a... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
1,
5
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"iclr_2022_0rjx6jy25R4",
"wCRH7OFchYm",
"BR2DTkl-7ZX",
"XPSoEASydg",
"w3wh_lQJNfh",
"xpWQOVVCvu",
"p8oFvH1Ix8S",
"iclr_2022_0rjx6jy25R4",
"B60bePyMGAs",
"qdsTlZUMj4Z",
"wCRH7OFchYm",
"BR2DTkl-7ZX",
"iclr_2022_0rjx6jy25R4",
"iclr_2022_0rjx6jy25R4",
"iclr_2022_0rjx6jy25R4"
] |
iclr_2022_BM7RjuhAK7W | Model-Invariant State Abstractions for Model-Based Reinforcement Learning | Accuracy and generalization of dynamics models is key to the success of model-based reinforcement learning (MBRL). As the complexity of tasks increases, learning accurate dynamics models becomes increasingly sample inefficient. However, many complex tasks also exhibit sparsity in dynamics, i.e., actions have only a local effect on the system dynamics. In this paper, we exploit this property with a causal invariance perspective in the single-task setting, introducing a new type of state abstraction called \textit{model-invariance}. Unlike previous forms of state abstractions, a model-invariance state abstraction leverages causal sparsity over state variables. This allows for compositional generalization to unseen states, something that non-factored forms of state abstractions cannot do. We prove that an optimal policy can be learned over this model-invariance state abstraction and show improved generalization in a simple toy domain. Next, we propose a practical method to approximately learn a model-invariant representation for complex domains and validate our approach by showing improved modelling performance over standard maximum likelihood approaches on challenging tasks, such as the MuJoCo-based Humanoid. Finally, within the MBRL setting we show strong performance gains with respect to sample efficiency across a host of continuous control tasks. | Reject | This paper proposes a method to learn representations in MBRL by exploiting sparsity in the model to improve data efficiency. The key idea is to build a representation for which the model is invariant.
The idea is quite interesting, but one weakness of the current draft is that there is a disconnect between the presented theory (linear case) and the relevant experimental setup (non-linear).
The paper is overall well written but would still benefit from a revision to improve clarity as pointed out by the reviewers.
The experimental results are inconclusive due to the choice of weak baselines. | train | [
"L_4_qQn8n77",
"wUpoW0Fyo1",
"VPOSHmqysqO",
"edeR6Mz5GyL"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper defines a novel model-invariant state abstraction for factored MDPs. It’s also shown that using invariant causal prediction significantly reduces transition prediction error in both toy example and several continuous control tasks. Inspired by theoretical results, this paper also proposes a novel method... | [
3,
5,
3,
3
] | [
4,
4,
4,
3
] | [
"iclr_2022_BM7RjuhAK7W",
"iclr_2022_BM7RjuhAK7W",
"iclr_2022_BM7RjuhAK7W",
"iclr_2022_BM7RjuhAK7W"
] |
iclr_2022_QEBHPRodWYE | InstaHide’s Sample Complexity When Mixing Two Private Images | Inspired by InstaHide challenge [Huang, Song, Li and Arora'20], [Chen, Song and Zhuo'20] recently provides one mathematical formulation of InstaHide attack problem under Gaussian images distribution. They show that it suffices to use $O(n_{\mathsf{priv}}^{k_{\mathsf{priv}} - 2/(k_{\mathsf{priv}} + 1)})$ samples to recover one private image in $n_{\mathsf{priv}}^{O(k_{\mathsf{priv}})} + \mathrm{poly}(n_{\mathsf{pub}})$ time for any integer $k_{\mathsf{priv}}$, where $n_{\mathsf{priv}}$ and $n_{\mathsf{pub}}$ denote the number of images used in the private and the public dataset to generate a mixed image sample. Under the current setup for the InstaHide challenge of mixing two private images ($k_{\mathsf{priv}} = 2$), this means $n_{\mathsf{priv}}^{4/3}$ samples are sufficient to recover a private image. In this work, we show that $n_{\mathsf{priv}} \log ( n_{\mathsf{priv}} )$ samples are sufficient (information-theoretically) for recovering all the private images.
| Reject | All reviewers agree that this is a reasonable contribution but that it is also extremely limited in scope. The authors suggest in one of their response that their technique could apply to "any data mixing method with “batched k-sum” structure". Such a larger level of generality might make the paper more interesting, but at the moment it is an extremely niche result. | val | [
"6tEJfBMmmp-",
"FnQdYgeVLKG",
"lUCi8dD03Ir",
"8ti5IaHsy4T",
"e4rJM6JLNYV",
"neG3uoea5RI",
"Wh9GMmm_hD7"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the your explanations. ",
" Thank you for your valuable feedback. We address the questions and comments in the following. We hope given these clarifications you will consider increasing your score.\n\n- \"Is it possible to apply some data reduction techniques (e.g., coresets) of the data with resp... | [
-1,
-1,
-1,
-1,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"8ti5IaHsy4T",
"Wh9GMmm_hD7",
"neG3uoea5RI",
"e4rJM6JLNYV",
"iclr_2022_QEBHPRodWYE",
"iclr_2022_QEBHPRodWYE",
"iclr_2022_QEBHPRodWYE"
] |
iclr_2022_5fmBRf5rrC | Knothe-Rosenblatt transport for Unsupervised Domain Adaptation | Unsupervised domain adaptation (UDA) aims at exploiting related but different data sources in order to tackle a common task in a target domain. UDA remains a central yet challenging problem in machine learning.
In this paper, we present an approach based on the Knothe-Rosenblatt transport: we exploit autoregressive density estimation algorithms to accurately model the different sources by an autoregressive model using a mixture of Gaussians.
Our Knothe-Rosenblatt Domain Adaptation (KRDA) then takes advantage of the triangularity of the autoregressive models to build an explicit mapping of the source samples into the target domain. We show that the transfer map built by KRDA preserves each component quantiles of the observations, hence aligning the representations of the different data sets in the same target domain.
Finally, we show that KRDA has state-of-the-art performance on both synthetic and real world UDA problems. | Reject | This paper proposes to address the problem of domain adaptation using Knothe-Rosenblatt transport, with the method denoted KRDA. The main idea is to perform density estimation of the different distributions with mixtures of Gaussians and then estimate an explicit mapping between the distributions using Knothe-Rosenblatt transport. Experiments show that the proposed method works well on toy and real-life datasets.
The paper had low scores during the reviews (3,3,3,3). While the reviewers appreciated the idea, they felt that the originality of the method was not well justified compared to a number of existing UDA approaches using OT. The reviewers also noted that several important references were missing and should be compared against in the numerical experiments. A discussion of the limits of the method in high dimensions would also be very interesting.
The authors did not provide a reply to the reviewers' comments, so their opinion stayed the same during the discussion. The paper is then rejected, and the AC strongly suggests that the authors take into account the numerous comments from the reviewers before re-submitting to a new venue.
"cf3OKHgscZL",
"UZaQnMUc9pk",
"aEpuwE_oSd",
"j0XkBEE48l_"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes to tackle unsupervised domain adaptation by following a classic trend aiming at finding an alignment between source and target domains. More specifically, the method uses a Knothe-Rosenblatt transport approach which applies a one-dimensional optimal transport to all conditional marginals of one... | [
3,
3,
3,
3
] | [
4,
3,
4,
4
] | [
"iclr_2022_5fmBRf5rrC",
"iclr_2022_5fmBRf5rrC",
"iclr_2022_5fmBRf5rrC",
"iclr_2022_5fmBRf5rrC"
] |
iclr_2022_0lSoIruExF | Incorporating User-Item Similarity in Hybrid Neighborhood-based Recommendation System | Modern hybrid recommendation systems require a sufficient amount of data. However, several internet privacy issues make users skeptical about sharing their personal information with online service providers. This work introduces various novel methods utilizing the baseline estimate to learn user interests from their interactions. Subsequently, extracted user feature vectors are implemented to estimate the user-item correlations, providing an additional fine-tuning factor for neighborhood-based collaborative filtering systems. Comprehensive experiments show that utilizing the user-item similarity can boost the accuracy of hybrid neighborhood-based systems by at least $2.11\%$ while minimizing the need for tracking users' digital footprints. | Reject | This paper proposed to improve hybrid neighborhood-based recommender systems by incorporating learned user-item similarity. Overall, the scores lean negative. The reviewers did acknowledge that the paper proposes a simple-to-implement method and reads well. However, the negatives are plentiful: the lack of a comprehensive literature review, as well as of more relevant state-of-the-art baselines in the experiments, is a common concern among most reviewers. The novelty of the proposed approach is also rather limited, as incorporating user-item similarity from user ratings and item contents is a well-explored topic within the literature. Finally, using rating prediction as the evaluation method ignores the missing-not-at-random nature of a recommender system. The authors didn't provide any response. Therefore, I vote for reject.
"zU6B8omQKkt",
"q_ffWf9wEzY",
"ngo1weGGFqo",
"qGAhlKz1p3s",
"YeTz0qm0fL2"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposed several ideas to improve neighborhood-based recommendation systems by characterizing user preferences using both rating data and item content information. The 2 major proposed ideas are 1) weight item feature by feature score, 2) incorporate user-item similarity score in item score prediction. ... | [
1,
3,
1,
3,
5
] | [
4,
4,
4,
5,
3
] | [
"iclr_2022_0lSoIruExF",
"iclr_2022_0lSoIruExF",
"iclr_2022_0lSoIruExF",
"iclr_2022_0lSoIruExF",
"iclr_2022_0lSoIruExF"
] |
iclr_2022_zHZ1mvMUMW8 | Succinct Compression: Near-Optimal and Lossless Compression of Deep Neural Networks during Inference Runtime | Recent advances in Deep Neural Networks (DNN) compression (e.g. pruning, quantization and etc.) significantly reduces the amount of space consumption for storage, making them easier to deploy in low-cost devices. However, those techniques do not keep the compressed representation during inference runtime, which incurs significant overheads in terms of both performance and space consumption. We introduce ``Succinct Compression”, a three-stage framework to enable DNN inference with near-optimal compression and much better performance during inference runtime. The key insight of our method leverages the concept of \textit{Succinct Data Structures}, which supports fast queries directly on compressed representation without decompression. Our method first transforms DNN models as our proposed formulations in either Element-wise or Block-wise manner, so that \textit{Succinct Data Structures} can take advantage of. Then, our method compresses transformed DNN models using \textit{Succinct Data Structures}. Finally, our method exploits our specialized execution pipelines for different model formulations, to retrieve relevant data for DNN inference. Our experimental results show that, our method keeps near-optimal compression, and achieves at least 8.7X/11.5X speedup on AlexNet/VGG-16 inference, compared with Huffman Coding. We also experimentally show that our method is quite synergistic with Pruning and Quantization.
| Reject | ### Summary
The paper proposes a technique that enables inference directly on a compressed model without decompressing the model.
### Discussion
- Strengths
- An important problem as well as a compelling direction, namely inference without decompression.
- Weaknesses:
- The reviewers provided a number of both broad and specific criticisms of the work.
The most salient point is the lack of comparison to modern baselines. Notably, the primary comparison is to a 2015 technique that, while seminal, has since been followed by significant related work (e.g., that identified by Reviewers eHWE, R8Un, and G6tm). In addition, the evaluation should consider at least one more contemporary network in the domain, such as a ResNet.
### Recommendation
I recommend Reject. At present, this work is a first step in a strong, compelling direction. However, the work needs to be contextualized against more contemporary results. | train | [
"WXb4pbkCIHY",
"VcWtljMg5rL",
"HgGJthBcOGt",
"mvja9CxqTpZ",
"-xI5JSDlcPd",
"clzQxjdJvG",
"cG2_Skc92C3",
"KI-TyYBj3t0",
"0Zeo8Oj0i0u",
"fIF7EWWlO_r",
"avWBm2Mp74J",
"DkygIf4f9Wq"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for responding to the reviews, and the reviewer would like to keep the score. The reviewer also encourages the authors to take into the feedback to improve the paper. ",
" I appreciate the clear responses from the authors to my questions. In this case I have decided to confirm my score. ",
" I thank th... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
4
] | [
"cG2_Skc92C3",
"-xI5JSDlcPd",
"KI-TyYBj3t0",
"clzQxjdJvG",
"DkygIf4f9Wq",
"avWBm2Mp74J",
"0Zeo8Oj0i0u",
"fIF7EWWlO_r",
"iclr_2022_zHZ1mvMUMW8",
"iclr_2022_zHZ1mvMUMW8",
"iclr_2022_zHZ1mvMUMW8",
"iclr_2022_zHZ1mvMUMW8"
] |
iclr_2022_mMiKHj7Pobj | Revealing the Incentive to Cause Distributional Shift | Decisions made by machine learning systems have increasing influence on the world, yet it is common for machine learning algorithms to assume that no such influence exists. An example is the use of the i.i.d. assumption in content recommendation: In fact, the (choice of) content displayed can change users’ perceptions and preferences, or even drive them away, causing a shift in the distribution of users. We introduce the term auto-induced distributional shift (ADS) to describe the phenomenon of an algorithm causing change in the distribution of its own inputs. Leveraging ADS can be a means of increasing performance. But this is not always desirable, since performance metrics often underspecify what type of behaviour is desirable. When real-world conditions violate assumptions (such as i.i.d. data), this underspecification can result in unexpected behaviour. To diagnose such issues, we introduce the approach of unit tests for incentives: simple environments designed to show whether an algorithm will hide or reveal incentives to achieve performance via certain means (in our case, via ADS). We use these unit tests to demonstrate that changes to the learning algorithm (e.g. introducing meta-learning) can cause previously hidden incentives to be revealed, resulting in qualitatively different behaviour despite no change in performance metric. We further introduce a toy environment for modelling real-world issues with ADS in content recommendation, where we demonstrate that strong meta-learners achieve gains in performance via ADS. These experiments confirm that the unit tests work – an algorithm’s failure of the unit test correctly diagnoses its propensity to reveal incentives for ADS.
| Reject | This paper proposes to study Auto-induced Distribution Shift (ADS), the phenomenon that models can create a feedback-loop: the predictions of a model influence user behaviors when it is deployed, which, in turn, affects the accuracy measure of the model. The paper empirically shows that a meta-learning algorithm called PBT causes a distribution shift instead of maximizing accuracy. While the premise of this paper is interesting, the proposed frameworks are very similar to the idea of strategic behavior in machine learning, and of "Performative Prediction" (Juan C. Perdomo, Tijana Zrnic, Celestine Mendler-Dünner, Moritz Hardt). However, this line of work is neither cited nor discussed in this paper. In addition, the paper is hard to read in certain parts. We encourage the authors to compare their work with performative prediction. We hope the authors find the reviews helpful. | train | [
"I1ZG4L1do8",
"byGcCYb2GY0",
"i9r0GZ87cAF",
"yMafck2Ae9_"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper talks about a self-selection phenomenon which the authors call Auto-induced Distributional Shift (ADS). This phenomenon is common in recommender systems, where the promotion of a type of contents (e.g., liberal versus conservative) leads to a change in the active user base. Self-selection bias is a big t... | [
5,
3,
3,
3
] | [
3,
3,
2,
4
] | [
"iclr_2022_mMiKHj7Pobj",
"iclr_2022_mMiKHj7Pobj",
"iclr_2022_mMiKHj7Pobj",
"iclr_2022_mMiKHj7Pobj"
] |
iclr_2022_24N4XH2NaYq | Sparse Hierarchical Table Ensemble | Deep learning for tabular data is drawing increasing attention, with recent work attempting to boost the accuracy of neuron-based networks. However, when computational capacity is low as in Internet of Things (IoT), drone, or Natural User Interface (NUI) applications, such deep learning methods are deserted. We offer to enable deep learning capabilities using ferns (oblivious decision trees) instead of neurons, by constructing a Sparse Hierarchical Table Ensemble (S-HTE). S-HTE inference is dense at the beginning of the training process and becomes gradually sparse using an annealing mechanism, leading to an efficient final predictor. Unlike previous work with ferns, S-HTE learns useful internal representations, and it earns from increasing depth. Using a standard classification and regression benchmark, we show its accuracy is comparable to alternatives while having an order of magnitude lower computational complexity. Our PyTorch implementation is available at https://anonymous.4open.science/r/HTE_CTE-60EB/ | Reject | The submission introduces the sparse hierarchical table ensemble (S-HTE), based on oblivious decision trees for tabular data. The reviewers acknowledged the clarity of the presentation and the importance of the computational complexity analysis. However, they also raised concerns regarding the novelty of the proposed method and the significance of the results compared to competing methods (e.g., CatBoost). Given the consensus that the submission is not ready for publication at ICLR, I recommend rejection at this point. | train | [
"42P7gVfwaFh",
"2tf4OR2baho",
"h510j9dxecc",
"-WHMcfyKslX"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies the prediction problem on tabular datasets and proposed a differentiable multi-layer fern-based architecture. The training and inference algorithms are provided, and the computational complexity has been analyzed. The selling point is that the approach is computationally efficient on CPUs and ca... | [
5,
5,
1,
5
] | [
2,
3,
5,
4
] | [
"iclr_2022_24N4XH2NaYq",
"iclr_2022_24N4XH2NaYq",
"iclr_2022_24N4XH2NaYq",
"iclr_2022_24N4XH2NaYq"
] |
iclr_2022_z3Tf4kdOE5D | FedDiscrete: A Secure Federated Learning Algorithm Against Weight Poisoning | Federated learning (FL) is a privacy-aware collaborative learning paradigm that allows multiple parties to jointly train a machine learning model without sharing their private data. However, recent studies have shown that FL is vulnerable to weight poisoning attacks. In this paper, we propose a probabilistic discretization mechanism on the client side, which transforms the client's model weight into a vector that can only have two different values but still guarantees that the server obtains an unbiased estimation of the client's model weight. We theoretically analyze the utility, robustness, and convergence of our proposed discretization mechanism and empirically verify its superior robustness against various weight-based attacks under the cross-device FL setting. | Reject | This manuscript proposes a quantization approach to improve adversarial robustness. Reviewers agree that the problem studied is timely and the approach is interesting. However, they note concerns about the novelty compared to closely related work, the quality of the presentation, and the strength of the evaluated attacks compared to the state of the art, among other issues. There is no rebuttal. | train | [
"R0eQ6sC0zxE",
"2vRtqporJog",
"ZklvgmBi0En",
"PNwCmkKxFoU"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Federated learning (FL) has been shown to be vulnerable to weight poisoning attacks. An attacker who controls malicious clients can poison the clients’ model weights such that a backdoor to perform availability poisoning attacks, integrity backdoor attacks and inference attacks. In this work, the authors proposed ... | [
5,
3,
5,
3
] | [
4,
5,
3,
4
] | [
"iclr_2022_z3Tf4kdOE5D",
"iclr_2022_z3Tf4kdOE5D",
"iclr_2022_z3Tf4kdOE5D",
"iclr_2022_z3Tf4kdOE5D"
] |
iclr_2022_NPJ5zWk_IQj | Translating Robot Skills: Learning Unsupervised Skill Correspondences Across Robots | In this paper, we explore how we can endow robots with the ability to learn correspondences between their own skills, and those of morphologically different robots in different domains, in an entirely unsupervised manner. We make the insight that different morphological robots use similar task strategies to solve similar tasks. Based on this insight, we frame learning skill correspondences as a problem of matching distributions of sequences of skills across robots. We then present an unsupervised objective that encourages a learnt skill translation model to match these distributions across domains, inspired by recent advances in unsupervised machine translation. Our approach is able to learn semantically meaningful correspondences between skills across 3 robot domain pairs despite being completely unsupervised. Further, the learnt correspondences enable the transfer of task strategies across robots and domains.
We present dynamic visualizations of our results at https://sites.google.com/view/translatingrobotskills/home. | Reject | The paper proposes an algorithm for unsupervised skill transfer between robots with different kinematics. Integral to the approach is the idea that while the robots differ, they may use similar strategies to perform similar tasks. Without access to paired data, the paper formulates the problem of learning correspondences between robots as one of matching skill distributions across robots. Drawing insights from work in machine translation, the paper proposes an unsupervised objective that encourages the model to learn to align the distribution over skill sequences. Experimental results demonstrate the ability to use learned skill correspondences to support transfer across different robots in different domains.
As several reviewers point out, the problem of learning to transfer skills across robots with different kinematic designs from video demonstrations raises a number of interesting challenges that are relevant to the robotics and learning communities. Among them, a fundamental contribution of the paper is the ability to learn skill correspondences in an unsupervised manner based on unlabeled demonstrations. The approach by which this is achieved (i.e., using distribution matching) is sensible and clearly described. While the reviewers agree on the significance of the research problem, they raise a few key concerns regarding the initial submission. Among them are questions about the nature and extent of the domain variations that the method can handle (e.g., between robots with different degrees-of-freedom); the significance of the contributions; and how this work is situated in the context of existing approaches to robot skill learning. Several reviewers question the definition of morphological variation and comment that these variations may violate the assumption that task strategies are similar across designs. The authors provided detailed feedback to each of the reviews, which helps to clarify several of these concerns. Unfortunately, several reviewers did not respond to multiple requests to update their reviews. The one who did decided to maintain their score.
The paper tackles an important problem in robot learning, and the work has the potential to have significant impact on the way in which robots acquire new skills. The original submission together with the author responses suggests that there are solid contributions here. The authors are strongly encouraged to revisit the discussion of the approach to more clearly convey its novelty and to consider experimental evaluations that better support these claims.
"ezPNxsP4ZSO",
"zArE0aD-BiV",
"D8-P3D7VdL",
"D8b0dRNswt",
"0XuFUfqwnyf",
"iGbhxhnatxz",
"-uitIp8Gbom",
"kF7tNt4KHvy",
"PyoLh-FFxpa",
"t16mRjEmWr6",
"X4zTBMl_0L_",
"WyxOEgYPzCH",
"gM1WB2OMv3",
"1Sxq--dppbM",
"8Qd6Ej5Qsa",
"yy0lfJ_JfHE",
"KtUAR4N82bA",
"T1m6RJwPG2v",
"-Sei8gg9Mn"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I would like to thank the authors for taking the extra time and efforts to address my concerns, as well as those of other reviewers.\nThe concern I have is with the 1st conceptual contribution. I understand that using a gantry or parallel robot would produce a gap in the morphologies that cannot be overcome. Howe... | [
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
3
] | [
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"PyoLh-FFxpa",
"iclr_2022_NPJ5zWk_IQj",
"0XuFUfqwnyf",
"0XuFUfqwnyf",
"-uitIp8Gbom",
"iclr_2022_NPJ5zWk_IQj",
"iGbhxhnatxz",
"-Sei8gg9Mn",
"T1m6RJwPG2v",
"KtUAR4N82bA",
"KtUAR4N82bA",
"T1m6RJwPG2v",
"T1m6RJwPG2v",
"-Sei8gg9Mn",
"-Sei8gg9Mn",
"-Sei8gg9Mn",
"iclr_2022_NPJ5zWk_IQj",
"... |
iclr_2022_q9zIvzRaU94 | Causal discovery from conditionally stationary time-series | Causal discovery, i.e., inferring underlying cause-effect relationships from observations of a scene or system, is an inherent mechanism in human cognition, but has been shown to be highly challenging to automate. The majority of approaches in the literature aiming for this task consider constrained scenarios with fully observed variables or data from stationary time-series.
In this work we aim for causal discovery in a more general class of scenarios: scenes with non-stationary behavior over time. For our purposes, we here regard a scene as a composition of objects interacting with each other over time. Non-stationarity is modeled as stationarity conditioned on an underlying variable, a state, which can be of varying dimension, more or less hidden given observations of the scene, and can also depend more or less directly on these observations.
We propose a probabilistic deep learning approach called State-Dependent Causal Inference (SDCI) for causal discovery in such conditionally stationary time-series data. Results in two different synthetic scenarios show that this method is able to recover the underlying causal dependencies with high accuracy even in cases with hidden states. | Reject | This paper extends Lowe et al. 2020 to discover causal relations from nonstationary time series by assuming conditional stationarity of the time series. Based on this assumption, a deep learning method based on VAEs is proposed to learn the causal relations from data.
The paper is well-motivated and well-organized.
However, there are some concerns from the reviewers. 1) The presentation needs significant improvement, e.g., clarification of the notations. 2) The identifiability of the causal graph is not given. It is unclear under what conditions the proposed method can discover the true causal graph. 3) The capacity of the discrete states might not be able to handle complex real situations. 4) The experiments are limited to synthetic and low complexity cases. This further weakens the significance of the proposed method given that there are also no theoretical guarantees of the proposed method. 5) Discussions about some important relevant works are missing.
Overall, the paper studies an interesting problem. However, given the above concerns, the novelty and significance of the paper are diminished. Both the theoretical and empirical analyses of the proposed method need further improvement. Addressing these concerns would require a significant amount of work. Thus, I do not recommend acceptance of this paper. | val | [
"bCdF_ECrIWj",
"87poqHMvILc",
"poEBVeY93Zx",
"u9N5PSGra0B"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a new method for discovering the causal graph from time-series data when the time-series are generated by a non-stationary process. The method relies on previous work from Lowe et al, 2020 and proposes to condition the causal summary graph driving the (causal) edge generation between variables ... | [
5,
3,
5,
3
] | [
4,
3,
3,
4
] | [
"iclr_2022_q9zIvzRaU94",
"iclr_2022_q9zIvzRaU94",
"iclr_2022_q9zIvzRaU94",
"iclr_2022_q9zIvzRaU94"
] |
iclr_2022_Xg47v73CDaj | Non-deep Networks | Depth is the hallmark of deep neural networks. But more depth means more sequential computation and higher latency. This begs the question -- is it possible to build high-performing ``non-deep" neural networks? We show that it is. To do so, we use parallel subnetworks instead of stacking one layer after another. This helps effectively reduce depth while maintaining high performance. By utilizing parallel substructures, we show, for the first time, that a network with a depth of just 12 can achieve top-1 accuracy over 80% on ImageNet, 96% on CIFAR10, and 81% on CIFAR100. We also show that a network with a low-depth (12) backbone can achieve an AP of 48% on MS-COCO. We analyze the scaling rules for our design and show how to increase performance without changing the network's depth. Finally, we provide a proof of concept for how non-deep networks could be used to build low-latency recognition systems. We will open-source our code. | Reject | This paper shows the possibility to design a relatively shallow architecture, ParNet, based on parallel subnetworks, instead of traditionally deeply stacked blocks. During discussions, the reviewers pointed out two important concerns: (1) the current design heavily hinges on the recently proposed RepVGG block, whose comparison was even missed in the original submission (later added in rebuttal); (2) comparing ParNet with RepVGG, there seems no performance advantage. Although RepVGG is 2.5 times deeper than ParNet, it is still faster due to highly optimized layers.
The authors mainly argued that their contribution is to answer the scientific question “is it possible to build high-performing non-deep neural networks?” While this is indeed an interesting question, AC feels: (1) it is perhaps unfair for this paper to claim as the first work proving the feasibility. WideResNet provided similar insight much earlier, among others; (2) the presented results, with tools being not novel, are pre-mature as they display no real appeal of using ParNet, in any aspect. Probing a new question is of course valuable, but presenting an immature and novelty-lacking answer shouldn't automatically grant publication.
In sum, the reviewers were unanimously UN-convinced by this paper's value, nor was the AC. The authors are suggested to very seriously take into account reviewers' suggestions to make improvements, before submitting their work to the next venue. | train | [
"EE3_bBuhKHg",
"LWVk96UK8BI",
"0uYlR-2QWLW",
"JHPd68TUEcE",
"ujvZmfJN1YH",
"2sDbocQo8_y",
"UpGm5fyJKpK",
"YW6Ua1IVn6K",
"4UU6UoEazZs",
"ZIXGSpyZN-m"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you so much feedback and suggestions. Following we have address your concerns:\n\n**Concern:** \"architecture complexity: the authors claim outperforming ResNet in efficiency, but do not mention that the architecture is far more complex. It is also more complex than RepVGG. This needs to be stressed in the ... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5,
5
] | [
"ZIXGSpyZN-m",
"0uYlR-2QWLW",
"YW6Ua1IVn6K",
"4UU6UoEazZs",
"UpGm5fyJKpK",
"iclr_2022_Xg47v73CDaj",
"iclr_2022_Xg47v73CDaj",
"iclr_2022_Xg47v73CDaj",
"iclr_2022_Xg47v73CDaj",
"iclr_2022_Xg47v73CDaj"
] |
iclr_2022_-AW3SFO63GO | Dissecting Local Properties of Adversarial Examples | Adversarial examples have attracted significant attention over the years, yet a sufficient understanding is in lack, especially when analyzing their performances in combination with adversarial training. In this paper, we revisit some properties of adversarial examples from both frequency and spatial perspectives: 1) the special high-frequency components of adversarial examples tend to mislead naturally-trained models while have little impact on adversarially-trained ones, and 2) adversarial examples show disorderly perturbations on naturally-trained models and locally-consistent (image shape related) perturbations on adversarially-trained ones. Motivated by these, we analyze the fragile tendency of models with the generated adversarial perturbations, and propose a connection with model vulnerability and local intermediate response. That is, a smaller local intermediate response comes along with better model adversarial robustness. To be specific, we demonstrate that: 1) DNNs are naturally fragile at least for large enough local response differences between adversarial/natural examples, 2) and smoother adversarially-trained models can alleviate local response differences with enhanced robustness. | Reject | This paper explores geometric properties of image perturbations (e.g. frequency content and local consistency) and their impact on the adversarial response of networks. The reviewers feel that the paper is at times unclear about the meaning of terminology (e.g. “local consistency”) that is not clearly defined. Also, while the reviewers acknowledge that the paper contains a number of interesting ideas, it is not always clear how the paper’s discussions and contributions differ from existing papers (e.g. Dong 2019, Yin et al., 2019, Wang et al., 2020a, Tsuzuku and Sato 2019) that also discuss the frequency content and smoothness properties of adversarial perturbations. | train | [
"lwt_BFvpy-S",
"ATSKbVn_yDX",
"S3O_Qgzaklb"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies the properties of adversarial examples from a spatial and frequency perspective and shows that naturally trained models are more vulnerable to high-frequency components in adversarial examples. Perturbations for naturally trained models are disordered, but perturbations for adv-trained models ar... | [
3,
3,
1
] | [
3,
4,
4
] | [
"iclr_2022_-AW3SFO63GO",
"iclr_2022_-AW3SFO63GO",
"iclr_2022_-AW3SFO63GO"
] |
iclr_2022_FqKolXKrQGA | Learning to Infer the Structure of Network Games | Strategic interactions between a group of individuals or organisations can be modelled as games played on networks, where a player's payoff depends not only on their actions but also on those of their neighbors.
Inferring the network structure from observed game outcomes (equilibrium actions) is an important problem with numerous potential applications in economics and social sciences.
Currently available methods require the knowledge of the utility function associated with the game, which is often unrealistic to obtain in real-world scenarios. To address this limitation, we propose a novel transformer-like architecture which correctly accounts for the symmetries of the problem and learns a mapping from the equilibrium actions to the network structure of the game without explicit knowledge of the utility function. We test our method on three different types of network games using both synthetic and real-world data, and demonstrate its effectiveness in network structure inference and superior performance over existing methods. | Reject | The paper introduces a transformer-like architecture to perform network inference in network games. While the reviewers acknowledge that the research direction is interesting, they raise concerns regarding the significance of the contribution in terms of methodology, particularly in light of the state of the art, and the experimental evaluation, which in their view did not support the promise of the work. The authors did not reply/follow up on the reviews during the rebuttal period. I would encourage the authors to use the reviewers' comments to revise their paper and resubmit to another conference. | train | [
"9TAf2vvG4oo",
"zVCh2YLem29x",
"xNLCz3XfKC",
"_cVYyfklJkG",
"B4nQXn4uvbi"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors introduce a purely data-driven deep learning approach for network structure discovery using just action signals from players of network games with knowing the underlying utility function which can be mostly arbitrary.\n\nThe proposed approach uses an encoder that ingests a list of action sets of all us... | [
3,
3,
6,
6,
5
] | [
3,
3,
2,
3,
3
] | [
"iclr_2022_FqKolXKrQGA",
"iclr_2022_FqKolXKrQGA",
"iclr_2022_FqKolXKrQGA",
"iclr_2022_FqKolXKrQGA",
"iclr_2022_FqKolXKrQGA"
] |
iclr_2022_Nn4BjABPRPN | Encoding Event-Based Gesture Data With a Hybrid SNN Guided Variational Auto-encoder | Commercial mid-air gesture recognition systems have existed for at least a decade, but they have not become a widespread method of interacting with machines. These systems require rigid, dramatic gestures to be performed for accurate recognition that can be fatiguing and unnatural. To address this limitation, we propose a neuromorphic gesture analysis system which encodes event-based gesture data at high temporal resolution. Our novel approach consists of an event-based guided Variational Autoencoder (VAE) which encodes event-based data sensed by a Dynamic Vision Sensor (DVS) into a latent space representation suitable to compute the similarity of mid-air gesture data. We show that the Hybrid Guided-VAE achieves 87% classification accuracy on the DVSGesture dataset and it can encode the sparse, noisy inputs into an interpretable latent space representation, visualized through T-SNE plots. We also implement the encoder component of the model on neuromorphic hardware and discuss the potential for our algorithm to enable real-time, self-supervised learning of natural mid-air gestures. | Reject | This paper received 3 rejections and 1 marginal accept. Reviewers were unanimous in that empirical evaluation is lacking. No rebuttal was submitted and I have no reason to overturn the reviewers' decisions. I recommendation this paper be rejected. | train | [
"YI5XW0QPyGi",
"FSwxAX84VQK",
"-mrVz1kjb6D",
"56jbX0M7MI"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a hybrid spiking neural network (SNN) guided variational auto-encoder (VAE) in order to overcome the limitation of rigid and dramatic gestures that are required in commercial mid-air gesture recognition systems. The motivation is that this limitation causes unnatural movements and thus fatigue.... | [
1,
3,
6,
3
] | [
4,
3,
4,
3
] | [
"iclr_2022_Nn4BjABPRPN",
"iclr_2022_Nn4BjABPRPN",
"iclr_2022_Nn4BjABPRPN",
"iclr_2022_Nn4BjABPRPN"
] |
iclr_2022_f7cWROZYSU | Detecting Worst-case Corruptions via Loss Landscape Curvature in Deep Reinforcement Learning | The non-robustness of neural network policies to adversarial examples poses a challenge for deep reinforcement learning. One natural approach to mitigate the impact of adversarial examples is to develop methods to detect when a given input is adversarial. In this work we introduce a novel approach for detecting adversarial examples that is computationally efficient, is agnostic to the method used to generate adversarial examples, and theoretically well-motivated. Our method is based on a measure of the local curvature of the neural network policy, which we show differs between adversarial and clean examples. We empirically demonstrate the effectiveness of our method in the Atari environment against a large set of state-of-the-art algorithms for generating adversarial examples. Furthermore, we exhibit the effectiveness of our detection algorithm with the presence of multiple strong detection-aware adversaries. | Reject | This paper proposes a computationally-efficient method to detect adversarial examples in reinforcement learning models. The detection method is based on the curvature of the loss landscape around the inputs, which is shown to have larger negative value for clean examples compared to adversarial ones. The experiments on Atari environment models show the effectiveness of the method.
The paper is well-written and backs up the experimental results with mathematical intuition and analysis.
However, the baseline of Roth et al. and all attack methods used have been designed for image classifiers.
If the authors decide to focus on RL, the attack methods should be tailored to RL. The word “worst-case” in the title is misleading, since the attacks used in the paper are not optimal for RL algorithms. This reduces the credibility of the claimed successful detection.
If the authors decide to frame this work as introducing a new property of adversarial examples which can be applied to other tasks, the authors should test this method on other tasks such as benchmark image classification datasets (for example CIFAR10).
With the current experiment section, it is unclear whether this method works in RL applications since the authors use attack methods designed for image classifiers rather than RL algorithms (Please refer to the following papers for some existing RL attack methods). It is also unclear whether this paper introduces a new property of adversarial examples that is general.
- Sandy Huang, Nicolas Papernot, Ian Goodfellow, Yan Duan, and Pieter Abbeel. Adversarial attacks on neural network policies
- Ezgi Korkmaz. Nesterov momentum adversarial perturbations in the deep reinforcement learning domain.
- Anay Pattanaik, Zhenyi Tang, Shuijing Liu, Gautham Bommannan, and Girish Chowdhary. Robust deep reinforcement learning with adversarial attacks.
- Huan Zhang, Hongge Chen, Chaowei Xiao, Bo Li, Mingyan Liu, Duane Boning, and Cho-Jui Hsieh. Robust deep reinforcement learning against adversarial perturbations on state observations.
- Huan Zhang, Hongge Chen, Duane S Boning, and Cho-Jui Hsieh. Robust reinforcement learning on state observations with learned optimal adversary. | train | [
"rDkUknaOks5",
"qbfJ_wHfMgU",
"LaRKTu5qXHq",
"Dz0oVVdzOjM",
"n5cx6_Cs_Qf",
"07qwjTkV8m",
"FntGtNvfpKd",
"IotKJV47DfK",
"2gYnJUFA4ZA",
"Rw80b2rlbqU",
"FL9mlo7hRm1",
"X87QwdbngKt",
"M2qa-tGXh1S",
"l0bHK-eumdK",
"BpRrf85aI4c"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I agree with your summary, though I see this paper more as an adversarial example paper than an RL one, especially that, as you mentioned, the proposed method doesn't seem to be tied to RL. ",
" Hi Reviewer x4K3,\n\nThank you for the reply! I understand your concerns and believe they are great research question... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
8,
8,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4,
3
] | [
"qbfJ_wHfMgU",
"LaRKTu5qXHq",
"n5cx6_Cs_Qf",
"FntGtNvfpKd",
"2gYnJUFA4ZA",
"BpRrf85aI4c",
"M2qa-tGXh1S",
"l0bHK-eumdK",
"FL9mlo7hRm1",
"X87QwdbngKt",
"X87QwdbngKt",
"iclr_2022_f7cWROZYSU",
"iclr_2022_f7cWROZYSU",
"iclr_2022_f7cWROZYSU",
"iclr_2022_f7cWROZYSU"
] |
iclr_2022_pgKE5Q-CF2 | Neuron-Enhanced Autoencoder based Collaborative filtering: Theory and Practice | This paper presents a novel recommendation method called neuron-enhanced autoencoder based collaborative filtering (NE-AECF). The method uses an additional neural network to enhance the reconstruction capability of autoencoder. Different from the main neural network implemented in a layer-wise manner, the additional neural network is implemented in an element-wise manner. They are trained simultaneously to construct an enhanced autoencoder of which the activation function in the output layer is learned adaptively to approximate possibly complicated response functions in real data. We provide theoretical analysis for NE-AECF to investigate the generalization ability of autoencoder and deep learning in collaborative filtering. We prove that the element-wise neural network is able to reduce the upper bound of the prediction error for the unknown ratings, the data sparsity is not problematic but useful, and the prediction performance is closely related to the difference between the number of users and the number of items.
Numerical results show that our NE-AECF has promising performance on a few benchmark datasets. | Reject | This paper proposed an enhanced autoencoder for collaborative filtering by adding another element-wise neural network for rating predictions. Overall the scores are negative, where reviewers pointed out concerns around the motivation, time complexities, and most importantly, using rating prediction as the evaluation setting which ignores the missing-not-at-random nature of the recommender systems. The authors didn't provide any response. Therefore, I vote for rejection. | train | [
"ZP-FNFEhPWv",
"K-1pdNUDppV",
"uH3qzkmICG",
"F3lPlTIMHri"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"In this paper, the authors study a traditional collaborative filtering problem with users' ratings, where the goal is to predict ratings that are unobserved. Specifically, the authors propose an enhanced autoencoder-based method called NE-AECF. The main idea of NE-AECF as shown in Eq(10) and Figure 1 is that it co... | [
5,
5,
3,
1
] | [
4,
4,
3,
5
] | [
"iclr_2022_pgKE5Q-CF2",
"iclr_2022_pgKE5Q-CF2",
"iclr_2022_pgKE5Q-CF2",
"iclr_2022_pgKE5Q-CF2"
] |
iclr_2022_UkgBSwjxwe | Neuro-Symbolic Forward Reasoning | Reasoning is an essential part of human intelligence and thus has been a long-standing goal in artificial intelligence research. With the recent success of deep learning, incorporating reasoning with deep learning systems i.e. neuro-symbolic AI has become a major field of interest. We propose Neuro-Symbolic Forward Reasoner (NS-FR), a new approach for reasoning tasks taking advantage of differentiable forward-chaining using first-order logic. The key idea is to combine differentiable forward-chaining reasoning with object-centric learning. Differentiable forward-chaining reasoning computes logical entailments smoothly, i.e., it deduces new facts from given facts and rules in a differentiable manner. The object-centric learning approach factorizes raw inputs into representations in terms of objects. This allows us to provide a consistent framework to perform the forward-chaining inference from raw inputs. NS-FR factorizes the raw inputs into the object-centric representations, then converts them into probabilistic ground atoms and finally performs differentiable forward-chaining inference using weighted rules for inference. Our comprehensive experimental evaluations on object-centric reasoning data sets, 2D Kandinsky patterns and 3D CLEVR-Hans, and variety of tasks show the effectiveness and advantage of our approach. | Reject | Unfortunately, the reviewers have unanimously voted to reject this paper.
There was some discussion of whether the paper was out-of-scope for ICLR;
I don't think that it is, necessarily, but I think that we can kind of screen off that topic because the reviewers had plenty of non-scope-related concerns that seem disqualifying to me, including both issues of novelty and issues related to the experimental validation.
Therefore, I am also recommending rejection in this case. | train | [
"G2nVeiFU6a5",
"4Hc2JMhdKD4",
"ItJubqdydPA",
"LecY-yd2FVm",
"C_wxWkZO1iu",
"nkw3UDHgRLw",
"h6Omxhko9Jj",
"zL55xMelcLE",
"FXNlhaIKFtq",
"aTIcvGn4fT_",
"dUAxobRd1bS",
"aBr437VqVWq",
"EYUBJO-CQOm",
"EFd0hGDO53n",
"8ZEXdYe9p73",
"u3VKO131hfg",
"4aBUjm0gkF5",
"pMq6kEwiLg-",
"VEBGd1Ivm... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
... | [
" I am more than happy to defer to the AC and / or other reviewers on this issue if the consensus is that ICLR can take papers that are not about learning. However, I note that all four of the papers you cite certainly are about learning (as well as other things, including reasoning). I mean no disrespect to your w... | [
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"4Hc2JMhdKD4",
"C_wxWkZO1iu",
"nkw3UDHgRLw",
"iclr_2022_UkgBSwjxwe",
"aTIcvGn4fT_",
"dUAxobRd1bS",
"zL55xMelcLE",
"EFd0hGDO53n",
"8ZEXdYe9p73",
"iclr_2022_UkgBSwjxwe",
"u3VKO131hfg",
"LecY-yd2FVm",
"pMq6kEwiLg-",
"EYUBJO-CQOm",
"eQbbR7uO_wn",
"C-sXoTmi_lN",
"aBr437VqVWq",
"VEBGd1Iv... |
iclr_2022_tJhIY38d2TS | Local Reweighting for Adversarial Training | Instances-reweighted adversarial training (IRAT) can significantly boost the robustness of trained models, where data being less/more vulnerable to the given attack are assigned smaller/larger weights during training. However, when tested on attacks different from the given attack simulated in training, the robustness may drop significantly (e.g., even worse than no reweighting). In this paper, we study this problem and propose our solution--locally reweighted adversarial training (LRAT). The rationale behind IRAT is that we do not need to pay much attention to an instance that is already safe under the attack. We argue that the safeness should be attack-dependent, so that for the same instance, its weight can change given different attacks based on the same model. Thus, if the attack simulated in training is mis-specified, the weights of IRAT are misleading. To this end, LRAT pairs each instance with its adversarial variants and performs local reweighting inside each pair, while performing no global reweighting--the rationale is to fit the instance itself if it is immune to the attack, but not to skip the pair, in order to passively defend different attacks in future. Experiments show that LRAT works better than both IRAT (i.e., global reweighting) and the standard AT (i.e., no reweighting) when trained with an attack and tested on different attacks. | Reject | Reviewers raised various concerns and authors sent in no rebuttal. In view of the negative consensus, this paper then made a clear rejection case. | train | [
"w3jy8pa9Lx",
"ex7sU5AGKNG",
"wnVvke1ef6k",
"VwnSixd0ViY"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents a new method named LART that assigns weights to adversarial examples during adversarial training for better robustness. Compared to the previous baseline work, i.e., IART, the proposed method has overcame several disadvantages.\nQuantitative results demonstrate the effectiveness of the proposed... | [
5,
3,
5,
3
] | [
4,
4,
5,
4
] | [
"iclr_2022_tJhIY38d2TS",
"iclr_2022_tJhIY38d2TS",
"iclr_2022_tJhIY38d2TS",
"iclr_2022_tJhIY38d2TS"
] |
iclr_2022_7MLeqJrHNa | Continual Learning of Neural Networks for Realtime Wireline Cable Position Inference | In the oil fields, Wireline cable is spooled onto a drum where computer vision techniques based on convolutional neural networks (CNNs) are applied to estimate the cable position in real time for automated spooling control. However, as new training data keeps arriving to continuously improve the network, the re-training procedure faces challenges. Online learning fashion with no memory to historical data leads to catastrophic forgetting. Meanwhile, saving all data will cause the disk space and training time to increase without bounds. In this paper, we proposed a method called the modified-REMIND (mREMIND) network. It is a replay-based continual learning method with longer memory to historical data and no memory overflow issues. Information of old data are kept for multiple iterations using a new dictionary update rule. Additionally, by dynamically partitioning the dataset, the method can be applied on devices with limited memory. In our experiments, we compared the proposed method with multiple state-of-the-art continual learning methods and the mREMIND network outperformed others both in accuracy and in disk space usage. | Reject | This submission receives four negative reviews. The raised issues include paper organizations, presentation clarity, more experimental evaluations, the trade-off between technical contribution and application configuration, and the potential impact on more general visual recognition scenarios. In the rebuttal and discussion phases, the authors do not make any response to these reviews. Overall, the AC agrees with four reviewers that the current submission does not reach the publication bar. The authors are suggested to improve the current submission based on the reviews to make further improvements. | train | [
"MZ5iW0qB52N",
"dkhgg0iD32O",
"WRsXV0jf-Oy",
"IcaFnPsX56n"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"In this paper, the authors proposed a method called the modified-REMIND (MREMIND) network. It is a replay-based continual learning method with a longer memory to historical data and no memory overflow issues. Information of old data is kept for multiple iterations using a new dictionary update rule. The modified-R... | [
5,
1,
3,
3
] | [
4,
5,
5,
4
] | [
"iclr_2022_7MLeqJrHNa",
"iclr_2022_7MLeqJrHNa",
"iclr_2022_7MLeqJrHNa",
"iclr_2022_7MLeqJrHNa"
] |
iclr_2022_nD9Pf-PjTbT | Convergence of Generalized Belief Propagation Algorithm on Graphs with Motifs | Belief propagation is a fundamental message-passing algorithm for numerous applications in machine learning. It is known that belief propagation algorithm is exact on tree graphs. However, belief propagation is run on loopy graphs in most applications. So, understanding the behavior of belief propagation on loopy graphs has been a major topic for researchers in different areas. In this paper, we study the convergence behavior of generalized belief propagation algorithm on graphs with motifs (triangles, loops, etc.) We show under a certain initialization, generalized belief propagation converges to the global optimum of the Bethe free energy for ferromagnetic Ising models on graphs with motifs. | Reject | All reviewers are very critical about the submitted paper regarding novelty of results, insufficient placement with respect to existing results, and clarity of presentation. The authors also did not submit a rebuttal. Hence I am recommending rejection of the paper. | train | [
"cbIAIKH0s4A",
"1A6TqfJ09h1",
"kb5ybk76qG2",
"QDqF_Wouup"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors extend a result for BP on ferromagnetic Ising models to BP on higher order Ising models. This is a purely theoretical result. Clarity: The paper contains a number of typos, but they do not affect understanding. I found the presentation of the results to be counterintuitive and frustrating, with a nu... | [
3,
3,
1,
1
] | [
4,
3,
4,
5
] | [
"iclr_2022_nD9Pf-PjTbT",
"iclr_2022_nD9Pf-PjTbT",
"iclr_2022_nD9Pf-PjTbT",
"iclr_2022_nD9Pf-PjTbT"
] |
iclr_2022_WcZUevpX3H3 | Personalized Neural Architecture Search for Federated Learning | Federated Learning (FL) is a recently proposed learning paradigm for decentralized devices to collaboratively train a predictive model without exchanging private data. Existing FL frameworks, however, assume a one-size-fit-all model architecture to be collectively trained by local devices, which is determined prior to observing their data. Even with good engineering acumen, this often falls apart when local tasks are different and require diverging choices of architecture modelling to learn effectively. This motivates us to develop a novel personalized neural architecture search (NAS) algorithm for FL. Our algorithm, FedPNAS, learns a base architecture that can be structurally personalized for quick adaptation to each local task. We empirically show that FedPNAS significantly outperforms other NAS and FL benchmarks on several real-world datasets. | Reject | The reviewers had remarkably consistent feedback about this paper. They appreciated the formulation of the federated learning problem with architectures having both shared and private (personalized) components. On the other hand, they felt the experiments were insufficient to prove the effectiveness of the method, and had several suggestions in terms of tasks and datasets. They also felt that it's hard to assess whether the existence of private/personalized components is warranted without visualizing the difference between architectures. Overall, the reviewers had good feedback that could strengthen the paper. | train | [
"_SRytiqKrS7",
"rWEYi6zMkFu",
"9rDidbb-xtu",
"R_DLI1um2L6",
"jcYa96s-Yz_",
"00KwoHi99-U",
"5WdrGIohYE",
"JnkTZOpo4gT",
"2q2DRKhb2Sp"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks the authors for the response. I have read the author's response as well as the comments from other reviewers. Here are my suggestions: It would make the work more informative and convincing to have those visualization includes. The size of the search space is a big concern to me. It would make the paper st... | [
-1,
-1,
-1,
-1,
-1,
3,
5,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
4
] | [
"jcYa96s-Yz_",
"00KwoHi99-U",
"5WdrGIohYE",
"2q2DRKhb2Sp",
"JnkTZOpo4gT",
"iclr_2022_WcZUevpX3H3",
"iclr_2022_WcZUevpX3H3",
"iclr_2022_WcZUevpX3H3",
"iclr_2022_WcZUevpX3H3"
] |
iclr_2022_jxTRL-VOoQo | Evaluating Deep Graph Neural Networks | Graph Neural Networks (GNNs) have already been widely applied in various graph mining tasks. However, most GNNs only have shallow architectures, which limits performance improvement. In this paper, we conduct a systematic experimental evaluation on the fundamental limitations of current architecture designs. Based on the experimental results, we answer the following two essential questions: (1) what actually leads to the compromised performance of deep GNNs; (2) how to build deep GNNs. The answers to the above questions provide empirical insights and guidelines for researchers to design deep GNNs. Further, we present Deep Graph Multi-Layer Perceptron (DGMLP), a powerful approach implementing our proposed guidelines. Experimental results demonstrate three advantages of DGMLP: 1) high accuracy -- it achieves state-of-the-art node classification performance on various datasets; 2) high flexibility -- it can flexibly choose different propagation and transformation depths according to certain graph properties; 3) high scalability and efficiency -- it supports fast training on large-scale graphs. | Reject | The paper studies why existing deep GCNs suffer from poor performance and propose DGMLP to improve over existing GCNs. However, the reviewers think there are still many unjustified claims and the paper. Further, several reviewers question about the novelty of the proposed method, which seems to be a combination of existing approaches.
I suggest the authors revise the paper by defining terms like model degradation and smoothness mathematically, and justify each claim (e.g., the effect of disentangling) with solid experiments. This will significantly improve the analysis part and make the conclusions stronger. | train | [
"EaZR4eDbIDl",
"pVZpKOBlLQM",
"mZXTr5FqM-Z",
"J8Ahuh_ese",
"wtNCstHCDSl",
"1ypt2QkkTI",
"AOLNO9W_m-",
"bJ5U9zPHZTn",
"EtoQvllqaKB",
"8q5Qac7c6tt",
"WTJaVrtdSt-",
"5ZQa2VZWTKt",
"qLkMc8YaZ1N",
"ZZkHkhXAYDs"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your continuous comments! We address your concerns as follows.\n### 1.Definition of Model degradation.\nThe phenomenon of model degradation is the same for both CNNs (e.g., ResNet) and GNNs, and we will define it formally in the revised manuscript. \n### 2. How does model degradation influence GNNs?\n... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5,
4
] | [
"pVZpKOBlLQM",
"bJ5U9zPHZTn",
"J8Ahuh_ese",
"EtoQvllqaKB",
"ZZkHkhXAYDs",
"AOLNO9W_m-",
"qLkMc8YaZ1N",
"8q5Qac7c6tt",
"5ZQa2VZWTKt",
"WTJaVrtdSt-",
"iclr_2022_jxTRL-VOoQo",
"iclr_2022_jxTRL-VOoQo",
"iclr_2022_jxTRL-VOoQo",
"iclr_2022_jxTRL-VOoQo"
] |
iclr_2022_gD0KBsQcGKg | Distribution-Driven Disjoint Prediction Intervals for Deep Learning | This paper redefines prediction intervals (PIs) as a union of disjoint intervals. PIs represent predictive uncertainty in the regression problem. Since previous PI methods assumed a single continuous PI (one lower and upper bound), they suffer from performance degradation in uncertainty estimation when the conditional density function has multiple modes. This paper demonstrates that multimodality should be considered in regression uncertainty estimation. To address the issue, we propose a novel method that generates a union of disjoint PIs. Across UCI benchmark experiments, our method improves over current state-of-the-art uncertainty quantification methods, reducing the average PI width by over 27$\%$. Through qualitative experiments, we visualize that multiple modes often exist in real-world datasets and show why our method produces high-quality PIs compared to previous PI methods. | Reject | This paper presents a method for producing a mixture of (disjoint) predictive distributions for deep learning models rather than a single predictive distribution. The reviewers in general found that the idea had strong potential, was well motivated and addresses an important and under-appreciated problem in deep learning. They seemed to find the proposed approach of using mixture density networks to be sensible. However, the reviewers seemed to find that the paper was unclear in presentation and grammatically, as if hastily written. One reviewer noted that they would not be able to reproduce the method given the confusing presentation. The reviewers also found that the experiments didn't adequately evaluate their method empirically. Unfortunately, the reviewers all agreed that the paper is not quite ready for publication (5, 3, 5). 
Careful rewriting of the paper and its technical contributions, and strengthening the experiments, would go a long way towards improving this paper for a future submission. | train | [
"p4KjHzgBIPk",
"rhHQrV1sr4",
"aqkO12pquDK",
"u8fJHxFt2a",
"cqDnO6Yc2VS",
"7k4YIBQ49AF",
"Zhez8FEz34M",
"txWDLqe9SeS",
"P4FoNVqmnLZ",
"a94qmZQZvta"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" **Regarding evaluation and comparison to existing methods:**\n\nI understand that generating disjoint PI's is a novel concept. However, my concern regarding evaluation still stands. There exist many methods that can produce multimodal predictive distributions. However, the proposed evaluation framework for disjoi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
3
] | [
"aqkO12pquDK",
"u8fJHxFt2a",
"cqDnO6Yc2VS",
"7k4YIBQ49AF",
"a94qmZQZvta",
"txWDLqe9SeS",
"P4FoNVqmnLZ",
"iclr_2022_gD0KBsQcGKg",
"iclr_2022_gD0KBsQcGKg",
"iclr_2022_gD0KBsQcGKg"
] |
iclr_2022_C81udlH5yMv | Invariant Causal Mechanisms through Distribution Matching | Learning representations that capture the underlying data generating process is akey problem for data efficient and robust use of neural networks. One key property for robustness which the learned representation should capture and which recently received a lot of attention is described by the notion of invariance. In this work we provide a causal perspective and new algorithm for learning invariant representations. Empirically we show that this algorithm works well on a diverse set of tasks and in particular we observe state-of-the-art performance on domain generalization, where we are able to significantly boost the score of existing models. | Reject | The reviewers are in consensus. I recommend that the authors take their recommendations into consideration in revising their manuscript. | train | [
"hvsmQQErja5",
"nRuRDXg3jZB",
"50L0o49xct0",
"XP6PMfN-JaD",
"kdW1ebK2K0B",
"S1ftz2dFo2H"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank reviewer sMkT for their feedback. \n\nWe agree that factor models are different from causal models, however we disagree that figure 1 is not a causal model. It is even a precise causal model e.g. for the datasets of [1, 2]. Similarly to the motivation of our work, others likewise use the underlying causa... | [
-1,
-1,
-1,
1,
5,
3
] | [
-1,
-1,
-1,
5,
4,
3
] | [
"XP6PMfN-JaD",
"S1ftz2dFo2H",
"kdW1ebK2K0B",
"iclr_2022_C81udlH5yMv",
"iclr_2022_C81udlH5yMv",
"iclr_2022_C81udlH5yMv"
] |
iclr_2022_VAmkgdMztWs | Network robustness as a mathematical property: training, evaluation and attack | Neural networks are widely used in AI for their ability to detect general patterns in noisy data. Paradoxically, by default they are also known to not be particularly robust, i.e. moving a small distance in the input space can result in the network's output changing significantly.
Many methods for improving neural network robustness have been proposed recently. This growing body of research gave rise to numerous explicit or implicit notions of robustness. Connections between these notions are often subtle, and a systematic comparison of these different definitions was lacking in the literature.
In this paper we attempt to address this gap by performing an in-depth comparison of the different definitions of robustness, by analysing their relationships, assumptions, interpretability and verifiability.
By abstracting robustness as a stand-alone mathematical property, we are able to show that, having a choice of several definitions of robustness, one can combine them in a modular way when defining training modes, evaluation metrics, and attacks on neural networks.
We also perform experiments to compare the applicability and efficacy of different training methods for ensuring the network obeys these different definitions. | Reject | The paper studies and compares different notions of robustness. However, reviewers found that there are many unjustified claims in the analysis, and that the paper does not provide novel findings or useful approaches. | train | [
"RMAU9cU8x3",
"3ogz6I12EYo",
"42_Jb7uZekL",
"ve2Q7K7b_5L",
"SwoTexPgF0k",
"ZRDqv7zxn-u",
"_U_f7mmzNb",
"RUwu1kFmEC",
"I8gffd74uwR"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper attempts to compare different robustness definitions. The paper discusses the relations between various definitions and conducts experiments to show the relations. Strengths:\n* This paper attempts to systematically compare different robustness definitions.\n \nWeaknesses:\n* Many arguments in the paper... | [
1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
3
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2022_VAmkgdMztWs",
"RMAU9cU8x3",
"SwoTexPgF0k",
"I8gffd74uwR",
"RUwu1kFmEC",
"RMAU9cU8x3",
"iclr_2022_VAmkgdMztWs",
"iclr_2022_VAmkgdMztWs",
"iclr_2022_VAmkgdMztWs"
] |
iclr_2022_u6ybkty-bL | When Complexity Is Good: Do We Need Recurrent Deep Learning For Time Series Outlier Detection? | Outlier detection is a critical part of understanding a dataset and extracting results. Outlier detection is used in different domains for various reasons; including detecting stolen credit cards, spikes of energy usage, web attacks, or in-home activity monitoring. Within this paper, we look at when it is appropriate to apply recurrent deep learning methods for time series outlier detection versus non-recurrent methods. Recurrent deep learning methods have a larger capacity for learning complex representations in time series data. We apply these methods to various synthetic and real-world datasets, including a dataset containing information about the in-home movement of people living with dementia in a clinical study cross-referenced with their recorded unplanned hospital admissions and infection episodes. We also introduce two new outlier detection methods, that can be useful in detecting contextual outliers in time series data where complex temporal relationships and local variations in the time series are important. | Reject | This paper has been reviewed by four experts. Their independent evaluations were consistent, all recommended rejection. I agree with that assessment as this paper is not ready for publication at ICLR in its current form. The reviewers have provided the authors with ample constructive feedback and the authors have been encouraged to consider this feedback if they choose to continue the work on this topic. | train | [
"1AVhRKz1uLP",
"YhcG7bASOZ0",
"ULyUWUZSM7q",
"Rfoh-aCxxyt",
"kXwi4MlXryr",
"liSYJyoblvq",
"seApix77Uk_",
"HVdolXcvl3M"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for taking the time to review our paper and for your constructive comments.\n\nThe aim of this paper was to understand where RNN based methods are useful compared to non-RNN based methods for time series outlier detection. We wanted to show the applicability of methods rather than perform model optimisa... | [
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
-1,
-1,
-1,
-1,
4,
5,
5,
4
] | [
"HVdolXcvl3M",
"seApix77Uk_",
"liSYJyoblvq",
"kXwi4MlXryr",
"iclr_2022_u6ybkty-bL",
"iclr_2022_u6ybkty-bL",
"iclr_2022_u6ybkty-bL",
"iclr_2022_u6ybkty-bL"
] |
iclr_2022_3Skn65dgAr4 | Differentiable Self-Adaptive Learning Rate | Adaptive learning rate has been studied for a long time. During neural network training, the learning rate controls the update stride and direction in a multi-dimensional space. A large learning rate may cause failure to converge, while a small learning rate will make convergence too slow.
Even though some optimizers make the learning rate adaptive during training, e.g., using first-order and second-order momentum to adapt the learning rate, their networks' parameters are still unstable during training and converge too slowly on many occasions.
To solve this problem, we propose a novel optimizer which makes the learning rate differentiable with the goal of minimizing the loss function, thereby realizing an optimizer with a truly self-adaptive learning rate. We conducted extensive experiments on multiple network models, comparing with various benchmark optimizers. Our optimizer achieves fast, high-quality convergence within extremely few epochs, far faster than state-of-the-art optimizers. | Reject | The paper deals with the problem of adjusting the learning rate during gradient descent optimisation. Unfortunately the proposed approach is very similar to methods already presented in the literature and no significant contribution can be recognised. During the rebuttal, the author(s) have acknowledged their ignorance about the relevant literature and provided some further clarifications that did not lead to a revision of the reviewers’ initial assessment of the work. | train | [
"-UAwYtgEg46",
"v8QhYdJodyA",
"T28pDKRHcY3",
"Amp0XFqe41",
"LiyibWftBbv",
"3ViGJayF_Mm",
"tmqRcBuJuli",
"mXEgE-c2-l2",
"Dy8eoSa81m8",
"BMi_UpafGS7",
"cpisJBGTT77"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response and explanation! After carefully reviewing your rebuttal and other reviewers' comments, I decided to keep my score. Your response was really helpful for me to understand the idea of the paper, but my initial thoughts still hold. I'd recommend authors make revisions to the paper. As I s... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
8,
1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5,
5
] | [
"v8QhYdJodyA",
"T28pDKRHcY3",
"Amp0XFqe41",
"Dy8eoSa81m8",
"cpisJBGTT77",
"BMi_UpafGS7",
"mXEgE-c2-l2",
"iclr_2022_3Skn65dgAr4",
"iclr_2022_3Skn65dgAr4",
"iclr_2022_3Skn65dgAr4",
"iclr_2022_3Skn65dgAr4"
] |
iclr_2022_zBhwgP7kt4 | Dynamic Least-Squares Regression | In large-scale supervised learning, after a model is trained with an initial dataset, a common challenge is how to exploit new incremental data without re-training the model from scratch. Motivated by this problem, we revisit the canonical problem of dynamic least-squares regression (LSR), where the goal is to learn a linear model over incremental training data. In this setup, data and labels $(\mathbf{A}^{(t)}, \mathbf{b}^{(t)}) \in \mathbb{R}^{t \times d}\times \mathbb{R}^t$ evolve in an online fashion ($t\gg d$), and the goal is to efficiently maintain an (approximate) solution of $\min_{\mathbf{x}^{(t)}} \| \mathbf{A}^{(t)} \mathbf{x}^{(t)} - \mathbf{b}^{(t)} \|_2$ for all $t\in [T]$. Our main result is a dynamic data structure which maintains an arbitrarily small constant approximate solution to dynamic LSR with amortized update time $O(d^{1+o(1)})$, almost matching the running time of the static (sketching-based) solution. By contrast, for exact (or $1/\mathrm{poly}(n)$-accuracy) solutions, we show a separation between the models, namely, that dynamic LSR requires $\Omega(d^{2-o(1)})$ amortized update time under the OMv Conjecture (Henzinger et al., STOC'15). Our data structure is fast, conceptually simple, easy to implement, and our experiments demonstrate their practicality on both synthetic and real-world datasets. | Reject | There wasn't enough enthusiasm to push this paper over the bar, based on no reviewer championing the paper (the one score above 6 was consulted and thought this was a fair assessment). The reviewers appreciated the contributions of the paper but felt that in terms of technical depth, there was a lot of overlap with prior work, and the statements of the results themselves were good but not exciting enough to convince the reviewers. 
Some suggestions for further improvement that came up were to try to extend this to update time for low-rank approximation, an application pursued by other work that built off Cohen et al., see, e.g., https://arxiv.org/abs/1805.03765 . Regarding presentation, it would be great if, in a re-submission, the authors addressed the presentation concerns of some of the reviewers regarding the experiments.
"KDmgbX5P6xA",
"fMUd6_J8XdU",
"MKahZtJVo3",
"NdRnFxqLeyu",
"muX9FfxJpF",
"Rbf-o2OK_1R",
"rccFR7beCMk",
"Zswv7Xt7H03",
"n1_W9tfdmf",
"Z1TvwdC77nd",
"oMe_Sz11xGg"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper studies incremental least-squares regression, where the goal is to maintain an $(1+\\epsilon)$-approximate solution to $\\min_x \\left\\Vert Ax-b\\right\\Vert_2^2$ for some $A\\in\\mathbb{R}^{n\\times d}$, under row insertions to $\\begin{pmatrix}A & b\\end{pmatrix}$, while keeping the total runtime as ... | [
6,
-1,
-1,
6,
8,
-1,
-1,
-1,
-1,
-1,
6
] | [
3,
-1,
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2022_zBhwgP7kt4",
"Zswv7Xt7H03",
"rccFR7beCMk",
"iclr_2022_zBhwgP7kt4",
"iclr_2022_zBhwgP7kt4",
"n1_W9tfdmf",
"NdRnFxqLeyu",
"oMe_Sz11xGg",
"muX9FfxJpF",
"KDmgbX5P6xA",
"iclr_2022_zBhwgP7kt4"
] |
iclr_2022_MGIg_Q4QtW2 | RAR: Region-Aware Point Cloud Registration | This paper concerns the research problem of point cloud registration to find the rigid transformation to optimally align the source point set with the target one. Learning robust point cloud registration models with deep neural networks has emerged as a powerful paradigm, offering promising performance in predicting the global geometric transformation for a pair of point sets. Existing methods first leverage an encoder to regress a latent shape embedding, which is then decoded into a shape-conditioned transformation via concatenation-based conditioning. However, different regions of a 3D shape vary in their geometric structures, which makes it more sensible to use a region-conditioned transformation instead of a shape-conditioned one. With this observation, in this paper we present a \underline{R}egion-\underline{A}ware point cloud \underline{R}egistration, denoted as RAR, to predict the transformation for pairwise point sets in a self-supervised learning fashion. More specifically, we develop a novel region-aware decoder (RAD) module that is formed with an implicit neural region representation parameterized by neural networks. The implicit neural region representation is learned with a self-supervised 3D shape reconstruction loss without the need for region labels. Consequently, the region-aware decoder (RAD) module guides the training of the region-aware transformation (RAT) module and region-aware weight (RAW) module, which predict the transforms and weights for different regions respectively. The global geometric transformation from the source point set to the target one is then formed by the weighted fusion of region-aware transforms. Compared to state-of-the-art approaches, our experiments show that our RAR achieves superior registration performance over various benchmark datasets (e.g. ModelNet40). 
| Reject | This paper proposes a learning-based method for shape registration that conditions on regions of the shape rather than learning from the entire point cloud in one shot. The reviewers point out several questions about the method, owing to expository issues as well as missing comparisons/ablation studies. As the authors have chosen not to submit a rebuttal, I refer them to the original reviews for additional points of improvement. | train | [
"rvdD4Ik79nk",
"PMkkbihhGLQ",
"vY_A22sSm3Ba",
"ZbfA43fFPQ",
"3TZCw6MinrW"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper aims to solve the rigid registration of 3D point clouds using a deep neural network. The key difference from previous methods is that this paper proposes a region-conditioned transformation. Specifically, this method first estimates k transformation matrices and then adopts a region segmentation module ... | [
3,
3,
5,
3,
3
] | [
4,
4,
3,
4,
2
] | [
"iclr_2022_MGIg_Q4QtW2",
"iclr_2022_MGIg_Q4QtW2",
"iclr_2022_MGIg_Q4QtW2",
"iclr_2022_MGIg_Q4QtW2",
"iclr_2022_MGIg_Q4QtW2"
] |
iclr_2022_KeBPcg5E3X | Representation Disentanglement in Generative Models with Contrastive Learning | Contrastive learning has shown its effectiveness in image classification and generation. Recent works apply contrastive learning to the discriminator of Generative Adversarial Networks, and little work has explored whether contrastive learning can be applied to encoders to learn disentangled representations. In this work, we propose a simple yet effective method by incorporating contrastive learning into latent optimization, which we name $\textbf{\texttt{ContraLORD}}$. Specifically, we first use a generator to learn discriminative and disentangled embeddings via latent optimization. Then an encoder and two momentum encoders are applied to dynamically learn disentangled information across a large number of samples with content-level and residual-level contrastive losses. Meanwhile, we tune the encoder with the learned embeddings in an amortized manner. We evaluate our approach on ten benchmarks in terms of representation disentanglement and linear classification. Extensive experiments demonstrate the effectiveness of our ContraLORD on learning both discriminative and generative representations. | Reject | The paper presents a modification of latent optimization for representation disentanglement using contrastive learning, resulting in improved performance on disentanglement benchmarks. Despite the empirical success, the proposed algorithm has many moving parts and loss functions. Most reviewers agree that given the incremental and complex nature of the proposed technique, the empirical results are not sufficient for acceptance at ICLR, especially since the results do not present additional insights into the inner workings of the method. I encourage the authors to try to simplify the technique, or provide convincing evidence that such complexity is necessary.
PS:
I didn't find much discussion of how the hyper-parameters are chosen (temperature, lambda terms, etc.).
A discussion of recent self-supervised disentanglement methods (e.g., https://arxiv.org/abs/2102.08850 and https://arxiv.org/abs/2007.00810) can be helpful. | val | [
"87pwCLJWnWt",
"O8ofFQP7Ti0",
"uhoPRZhVUmT",
"lCKOKiw5l-",
"yMsez9AZyzf",
"lGArf68uKr",
"LDZqJGmUuR4",
"yPZ9MIDv25",
"nMa7zP1XapX",
"aorcXU4Ihiy",
"0wZEfDRU6dh"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for the comments, however, still remain not convinced on the lack of flaws I have mentioned previously in the review.\n\nThe authors claim to have achieved similar or better scores compared to the OverLORD method. However, if we calculate the difference between the metrics that ContraLORD and ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
3,
3
] | [
"lGArf68uKr",
"lCKOKiw5l-",
"yMsez9AZyzf",
"0wZEfDRU6dh",
"aorcXU4Ihiy",
"nMa7zP1XapX",
"yPZ9MIDv25",
"iclr_2022_KeBPcg5E3X",
"iclr_2022_KeBPcg5E3X",
"iclr_2022_KeBPcg5E3X",
"iclr_2022_KeBPcg5E3X"
] |
iclr_2022_t3BFUDHwEJU | Delayed Geometric Discounts: An alternative criterion for Reinforcement Learning | The endeavor of artificial intelligence (AI) is to design autonomous agents capable of achieving complex tasks. In particular, reinforcement learning (RL) provides a theoretical framework to learn optimal behaviors. In practice, RL algorithms rely on geometric discounts to evaluate this optimality. Unfortunately, this does not cover decision processes where future returns are not exponentially less valuable.
Depending on the problem, this limitation induces sample-inefficiency (as feedback is exponentially decayed) and requires additional curricula/exploration mechanisms (to deal with sparse, deceptive or adversarial rewards).
In this paper, we tackle these issues by generalizing the discounted problem formulation with a family of delayed objective functions. We investigate the underlying RL problem to derive: 1) the optimal stationary solution and 2) an approximation of the optimal non-stationary control. The devised algorithms solved hard exploration problems on tabular environments and improved sample-efficiency on classic simulated robotics benchmarks. | Reject | Overall, the reviewers were insufficiently enthused by this paper. There was no rebuttal, and the authors did not engage or answer questions raised. I concur with the reviewers, and encourage the authors to carefully consider the provided feedback. | val | [
"_Ucn1dzoJAV",
"6FX5CPu_9x_",
"sd_LEVr-u6X",
"MNmApYudw72"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper generalizes the discounted objective in RL with \"delayed\" discounted objectives. They analyze the proposed objective and how it may be used to approximate optimal control, and evaluate the proposed method empirically. I'm recommending rejection at this time, due to the following concerns:\n\n1. The ar... | [
3,
3,
6,
5
] | [
4,
4,
3,
3
] | [
"iclr_2022_t3BFUDHwEJU",
"iclr_2022_t3BFUDHwEJU",
"iclr_2022_t3BFUDHwEJU",
"iclr_2022_t3BFUDHwEJU"
] |
iclr_2022_NblYkw2U2Yg | A Generalised Inverse Reinforcement Learning Framework | The global objective of inverse Reinforcement Learning (IRL) is to estimate the unknown cost function of some MDP based on observed trajectories generated by (approximate) optimal policies. The classical approach consists in tuning this cost function so that associated optimal trajectories (that minimise the cumulative discounted cost, i.e. the classical RL loss) are “similar” to the observed ones. Prior contributions focused on penalising degenerate solutions and improving algorithmic scalability. Quite orthogonally to them, we question the pertinence of characterising optimality with respect to the cumulative discounted cost as it induces an implicit bias against policies with longer mixing times. State of the art value based RL algorithms circumvent this issue by solving for the fixed point of the Bellman optimality operator, a stronger criterion that is not well defined for the inverse problem.
To alleviate this bias in IRL, we introduce an alternative training loss that puts more weight on future states, which yields a reformulation of the (maximum entropy) IRL problem. The algorithms we devised exhibit enhanced performance (with similar tractability) compared to off-the-shelf ones in multiple OpenAI gym environments. | Reject | The paper extends the maximum entropy inverse reinforcement learning (IRL) framework by changing the optimality criterion used in reinforcement learning (RL). This novel criterion is an expectation of the Q-values over a weighted distribution over states and actions induced by a policy, which is in contrast to the standard criterion that is an expectation over the initial state distribution.
All the reviewers agree that the topic addressed in this paper is interesting and novel. On the other hand, there are some concerns about the technical novelty and relevance of the paper. Since the authors have not provided any feedback, the reviewers' concerns remained unresolved, and they reached a consensus on rejecting this paper. | train | [
"NgmuaaaAAD5",
"42mktx3ppmu",
"OH-d2Z_0MP"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper points out that the classical IRL approach has a tendency to match those occupancy measures that favor short-term behavior. To address this issue, a reformulation is proposed based on GAIL in order to put more emphasis on matching longer-term behavior. Specifically, the main difference is to replace the... | [
5,
5,
6
] | [
4,
3,
3
] | [
"iclr_2022_NblYkw2U2Yg",
"iclr_2022_NblYkw2U2Yg",
"iclr_2022_NblYkw2U2Yg"
] |
iclr_2022_DSCsslei9r | Multi-modal Self-supervised Pre-training for Regulatory Genome Across Cell Types | In genome biology research, regulatory genome modeling is an important topic for many regulatory downstream tasks, such as promoter classification and transcription factor binding site prediction. The core problem is to model how regulatory elements interact with each other and how this varies across different cell types. However, current deep learning methods often focus on modeling genome sequences of a fixed set of cell types and do not account for the interaction between multiple regulatory elements, making them only perform well on the cell types in the training set and lack the generalizability required in biological applications. In this work, we propose a simple yet effective approach for pre-training genome data in a multi-modal and self-supervised manner, which we call $\textbf{\texttt{GeneBERT}}$. Specifically, we simultaneously take the 1d sequence of genome data and a 2d matrix of (transcription factors × regions) as the input, where three pre-training tasks are proposed to improve the robustness and generalizability of our model. We pre-train our model on the ATAC-seq dataset with 17 million genome sequences. We evaluate our GeneBERT on regulatory downstream tasks across different cell types, including promoter classification, transcription factor binding site prediction, disease risk estimation, and splicing site prediction. Extensive experiments demonstrate the effectiveness of multi-modal and self-supervised pre-training for large-scale regulatory genomics data. | Reject | While several reviewers acknowledge that the paper contains potentially useful ideas related to multi-modal self-training applied to genomic data, they also point out a number of weaknesses and room for improvement that the discussion with authors did not fully address. 
This includes in particular the need to better explain the details of what is done in the paper; the choice of experiments, which is not relevant (e.g., predicting promoter regions) or complete (e.g., showing results on only one transcription factor); the lack of comparison with existing methods, etc. We therefore consider that the paper is not ready for publication in its current form, but hope that the reviews will help the authors work on a revision addressing the issues. | train | [
"jkiYmC8d7xe",
"wXPXWlrYuCk",
"cvj8a9sw0l",
"_MoA943ziPQ",
"TldP6I3StAy",
"77b1myKVfKR",
"GBnavfUXMqE",
"4OBcMd9ZLba",
"nb_lf13U2iw",
"-O3t3kLw4X-",
"xErcZDaZB6t",
"uLDo1nhfDjL",
"fR62tYeGN8J"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you to the authors for responding to the comments. Based on the other reviews and responses, I intend to maintain my score. ",
" Thank you to the authors for their response to my comments. I still believe the authors would need to do more to explain, demonstrate, and interpret their model before it would ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
6,
3,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5,
4
] | [
"GBnavfUXMqE",
"TldP6I3StAy",
"_MoA943ziPQ",
"fR62tYeGN8J",
"77b1myKVfKR",
"uLDo1nhfDjL",
"xErcZDaZB6t",
"nb_lf13U2iw",
"-O3t3kLw4X-",
"iclr_2022_DSCsslei9r",
"iclr_2022_DSCsslei9r",
"iclr_2022_DSCsslei9r",
"iclr_2022_DSCsslei9r"
] |
iclr_2022_0GhVG1de-Iv | Stability and Generalisation in Batch Reinforcement Learning | Overfitting has been recently acknowledged as a key limiting factor in the capabilities of reinforcement learning algorithms, despite little theoretical characterisation. We provide a theoretical examination of overfitting in the context of batch reinforcement learning, through the fundamental relationship between algorithmic stability (Bousquet & Elisseeff, 2002)–which characterises the effect of a change at a single data point–and the generalisation gap–which quantifies overfitting. Examining a popular fitted policy evaluation method with linear value function approximation, we characterise the dynamics of overfitting in the RL context. We provide finite sample, finite time, polynomial bounds on the generalisation gap in RL. In addition, our approach applies to a class of algorithms which only partially fit to temporal difference errors, as is common in deep RL, rather than perfectly optimising at each step. As such, our results characterise an unexplored bias-variance trade-off in the frequency of target network updates. To do so, our work extends the stochastic gradient-based approach of Hardt et al. (2016) to the iterative methods more common in RL. We find that under regimes where learning requires few iterations, the expected temporal difference error over the dataset is representative of the true performance on the MDP, indicating that, as is the case in supervised learning, good generalisation in RL can be ensured through the use of algorithms that learn quickly.
| Reject | In this paper, the authors studied algorithmic stability of batch reinforcement learning algorithms, as well as its connection to certain generalization bounds (motivated by the prior work Hardt et al developed for SGD on nonconvex optimization problems). While understanding the stability and generalization of batch RL is certainly an interesting and important direction, the paper in its current form is not yet ready to be published. As the reviewers pointed out, both the analyses and the claims need to be polished (in fact, important details and definitions are missing); and the theoretical contributions are only made in a limited setting. | test | [
"z6D-rS32pAI",
"4PhXFyWI8Pd",
"IO5Lw8VxA_9",
"6eQB372kWvi",
"C26Y9cboI3u",
"cAJPZ47x2Z8",
"TFJbEd1xQ6M"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the exceptionally detailed review. We largely agree with the editorial feedback that you've provided, and much of it will be included in subsequent versions of the paper. In particular, we're very grateful for the reference to the work by Bousquet, Klochkov, Zhivotovskiy, (2020), the suggestion about d... | [
-1,
-1,
-1,
-1,
3,
3,
3
] | [
-1,
-1,
-1,
-1,
2,
4,
4
] | [
"TFJbEd1xQ6M",
"cAJPZ47x2Z8",
"C26Y9cboI3u",
"iclr_2022_0GhVG1de-Iv",
"iclr_2022_0GhVG1de-Iv",
"iclr_2022_0GhVG1de-Iv",
"iclr_2022_0GhVG1de-Iv"
] |
iclr_2022_4YOOO4ZNKM | Self-supervised Learning for Sequential Recommendation with Model Augmentation | Sequential recommendation aims at predicting the next items in user behaviors, which can be solved by characterizing item relationships in sequences. Due to data sparsity and noise issues in sequences, a new self-supervised learning (SSL) paradigm is proposed to improve performance, which employs contrastive learning between positive and negative views of sequences.
However, existing methods all construct views by adopting augmentation from data perspectives, while we argue that 1) optimal data augmentation methods are hard to devise, 2) data augmentation methods destroy sequential correlations, and 3) data augmentation fails to incorporate comprehensive self-supervised signals.
Therefore, we investigate the possibility of model augmentation to construct view pairs. We propose three levels of model augmentation methods: neuron masking, layer dropping, and encoder complementing.
This work opens up a novel direction in constructing views for contrastive SSL. Experiments verify the efficacy of model augmentation for the SSL in the sequential recommendation.
| Reject | This paper proposed a self-supervised learning view for sequential recommendation with different forms of model augmentation: neuron masking, layer dropping, and encoder complementing. Overall the scores are negative. The reviewers raised concerns mostly around the motivation of the proposed approach (which wasn't fully supported by the experimental results) as well as the limited contribution (especially considering some of the augmentation strategies have been proposed in the past). One reviewer also brought out an interesting connection between model augmentation and model regularization. The authors responded that they will keep improving the paper and hopefully we will see a much improved version in the next submission. | train | [
"isolzHNVR1f",
"7H_GYOO7FqF",
"CXJi6t2wkeZ",
"f-i1mkCicL",
"fy8hsm3ems1",
"KrA1BbjLvNc"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" SSE-PT is a good paper for sequential recommendation but may not related to our contrastive learning framework. But definitely, considering the model augmentation as a way of regularization is a very interesting point. Thanks for your suggestion! We will further extend our claims and conduct more experiments to s... | [
-1,
-1,
-1,
3,
3,
5
] | [
-1,
-1,
-1,
4,
4,
4
] | [
"KrA1BbjLvNc",
"f-i1mkCicL",
"fy8hsm3ems1",
"iclr_2022_4YOOO4ZNKM",
"iclr_2022_4YOOO4ZNKM",
"iclr_2022_4YOOO4ZNKM"
] |
iclr_2022_alaQzRbCY9w | Bolstering Stochastic Gradient Descent with Model Building | The stochastic gradient descent method and its variants constitute the core optimization algorithms that achieve good convergence rates for solving machine learning problems. These rates are obtained especially when these algorithms are fine-tuned for the application at hand. Although this tuning process can require large computational costs, recent work has shown that these costs can be reduced by line search methods that iteratively adjust the stepsize. We propose an alternative approach to stochastic line search by using a new algorithm based on forward step model building. This model building step incorporates second-order information that allows adjusting not only the stepsize but also the search direction. Noting that deep learning model parameters come in groups (layers of tensors), our method builds its model and calculates a new step for each parameter group. This novel diagonalization approach makes the selected step lengths adaptive. We provide convergence rate analysis, and experimentally show that the proposed algorithm achieves faster convergence and better generalization in most problems. Moreover, our experiments show that the proposed method is quite robust as it converges for a wide range of initial stepsizes. | Reject | There was a consensus among the reviewers to reject the paper. While they noted that the paper proposed a new, interesting stochastic algorithm for deep learning, they think the paper needs to be substantially improved in both theory and empirical study. The paper was judged quite incremental in comparison to the work of Öztoprak et al. (2018) (where most of the theory was developed), while not showing improved empirical performance on the benchmarks.
"F8BhbflCng5",
"FSLx3UgSr0",
"0aFWRn_dGTD",
"nb_KvczqpCg",
"2cinxn3g7pr",
"jBN0St3PyAx",
"aJS9na2o5rR",
"c8DXxlwh7ZD"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the valuable feedback and suggestions. We would like to clarify the following points:\n\nThe trial step is given as $s_k^t = \\alpha_k g_k$ where $\\alpha_k$ is the step size and $g_k$ is the stochastic gradient. Therefore, the model step $s_k$ involves the step size $\\alpha_k$ implicit... | [
-1,
-1,
-1,
-1,
5,
3,
5,
5
] | [
-1,
-1,
-1,
-1,
5,
3,
4,
4
] | [
"c8DXxlwh7ZD",
"aJS9na2o5rR",
"jBN0St3PyAx",
"2cinxn3g7pr",
"iclr_2022_alaQzRbCY9w",
"iclr_2022_alaQzRbCY9w",
"iclr_2022_alaQzRbCY9w",
"iclr_2022_alaQzRbCY9w"
] |
iclr_2022__Ko4kT3ckWy | Increase and Conquer: Training Graph Neural Networks on Growing Graphs | Graph neural networks (GNNs) use graph convolutions to exploit network invariances and learn meaningful features from network data. However, on large-scale graphs, convolutions incur high computational cost, leading to scalability limitations. Leveraging the graphon --- the limit object of a graph --- in this paper we consider the problem of learning a graphon neural network (WNN) --- the limit object of a GNN --- by training GNNs on graphs sampled Bernoulli from the graphon. Under smoothness conditions, we show that: (i) the expected distance between the learning steps on the GNN and on the WNN decreases asymptotically with the size of the graph, and (ii) when training on a sequence of growing graphs, gradient descent follows the learning direction of the WNN. Inspired by these results, we propose a novel algorithm to learn GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training. This algorithm is benchmarked on both a recommendation system and a decentralized control problem, where it is shown to retain performance comparable to its large-scale counterpart at a reduced computational cost. | Reject | This paper proposes a scalable learning method for GNNs that gradually increases the training data size by randomly adding vertices generated from a graphon. A theoretical justification for the proposed method is given, bounding the difference between the gradients on the sampled network and on the graphon. A numerical experiment was conducted to support the validity of the proposed method.
Unfortunately, this paper contains several issues as listed below:
1. Novelty: There is already existing work addressing the scalability of training graph neural network models; however, the relation to it is not appropriately discussed.
2. Experiments: Although the main purpose of this paper is to resolve the scalability of GNNs, the numerical experiments are conducted only on a small-scale dataset ($\sim$1k).
3. Practicality: There are several hyperparameters; however, the theory and methodology do not give practical guidelines for determining them (e.g., how many vertices should be added at each epoch).
4. Correctness: The proofs of the theorems appear to contain some flaws, which should be resolved by the authors. However, there was no response from the authors.
For these reasons, this paper would not be appropriate to appear in ICLR. | train | [
"KKBd6DJJ4g",
"8SHBLp99we",
"ALYEuMCM-uf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposed an effective and scalable algorithm to train the graph neural network. Their method leverages the framework of graphon neural networks to enlarge the training size of GNN during the training. Specifically, a growing size subgraph is sampled from graphon by a Bernoulli distribution and fed to a ... | [
3,
6,
3
] | [
4,
4,
4
] | [
"iclr_2022__Ko4kT3ckWy",
"iclr_2022__Ko4kT3ckWy",
"iclr_2022__Ko4kT3ckWy"
] |
iclr_2022_8kVP8m93VqN | Task-oriented Dialogue System for Automatic Disease Diagnosis via Hierarchical Reinforcement Learning | In this paper, we focus on automatic disease diagnosis with reinforcement learning (RL) methods in a task-oriented dialogue setting. Different from conventional RL tasks, the action space for disease diagnosis (i.e., symptoms) is inevitably large, especially when the number of diseases increases. However, existing approaches to this problem typically work well in simple tasks but face significant challenges in complex scenarios. Inspired by the offline consultation process, we propose to integrate a two-level hierarchical policy into dialogue policy learning. The high-level policy consists of a master model that is responsible for triggering a low-level model; the low-level policy consists of several symptom checkers and a disease classifier. Experimental results on both self-constructed real-world and synthetic datasets demonstrate that our hierarchical framework achieves higher accuracy and symptom recall in disease diagnosis compared with existing systems.
| Reject | The paper applies a reinforcement learning (RL) approach to a medical diagnosis dialog task. Motivated by a large action space, the authors utilize a hierarchical model where the higher level model triggers a lower level model comprising of symptom checkers and disease classifiers. They evaluate their approach on real-world and synthetic data sets.
Pros
+ The application (societal relevance) and the hierarchical approach (large action space) are motivated well
+ The paper is presented relatively clearly (with caveats: see reviewer comments) and improves performance over reasonable baselines (with caveats over one metric: why longer dialog is better?)
Cons
- The novelty of the work was not entirely clear, other than the application to a new task
- Lack of examples makes it difficult to gauge the complexity of the task
- Ablation studies would also have provided better insight into task and the proposed model
The reviewers have several concerns about the work described in the paper. But the authors did not provide any response unfortunately. | train | [
"nvAUqJrH1SM",
"gN5kC6hklxG",
"DlLu9Gn9N0",
"E4nHjyoDO6Y"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper introduces hierarchical reinforcement learning (HRL) into automatic disease diagnosis, which reduces the action space and improves training efficiency. Besides, the authors also expand an existing public dataset and build a synthetic dataset for evaluation. The Experimental results show that their propo... | [
3,
3,
5,
3
] | [
4,
4,
3,
3
] | [
"iclr_2022_8kVP8m93VqN",
"iclr_2022_8kVP8m93VqN",
"iclr_2022_8kVP8m93VqN",
"iclr_2022_8kVP8m93VqN"
] |
iclr_2022_OD_dnx57ksK | Momentum Conserving Lagrangian Neural Networks | Realistic models of physical world rely on differentiable symmetries that, in turn, correspond to conservation laws. Recent works on Lagrangian and Hamiltonian neural networks show that the underlying symmetries of a system can be easily learned by a neural network when provided with an appropriate inductive bias. However, these models still suffer from issues such as inability to generalize to arbitrary system sizes, poor interpretability, and most importantly, inability to learn translational and rotational symmetries, which lead to the conservation laws of linear and angular momentum, respectively. Here, we present a momentum conserving Lagrangian neural network (MCLNN) that learns the Lagrangian of a system, while also preserving the translational and rotational symmetries. We test our approach on linear and non-linear spring systems, and a gravitational system, demonstrating the energy and momentum conservation. We also show that the model developed can generalize to systems of any arbitrary size. Finally, we discuss the interpretability of the MCLNN, which directly provides physical insights into the interactions of multi-particle systems. | Reject | This paper enhances Lagrangian neural networks by adding conservation of the angular and linear momenta. According to the reviewers, the technical contribution of the paper is marginal, it is a incremental change of an existing model, and it seems that there is some over claim on the generalization of the model to unseen systems. The theoretical contributions in the paper are not significant, and the experiments have not demonstrate the practical potential of the proposed model yet. After the reviewers provided their comments, the authors did not submit their rebuttals. Therefore, as a result, we do not think the paper is ready for publication at ICLR. | train | [
"erbTyVwI0wT",
"FzG7xmCbSTC",
"hshLPSQ8i0c",
"bI4w9_j89DB"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper enhances Lagrangian neural networks by adding conservation of the angular and linear momenta. It does so by enforcing symmetry with respect to translation and rotation of the system in the Lagrangian. Experiments performed on a set of physical systems shows angular and linear momenta conservation reduce... | [
5,
3,
3,
3
] | [
3,
5,
4,
4
] | [
"iclr_2022_OD_dnx57ksK",
"iclr_2022_OD_dnx57ksK",
"iclr_2022_OD_dnx57ksK",
"iclr_2022_OD_dnx57ksK"
] |
iclr_2022_36SHWj0Gp1 | GenTAL: Generative Denoising Skip-gram Transformer for Unsupervised Binary Code Similarity Detection | Binary code similarity detection serves a critical role in cybersecurity. It alleviates the huge manual effort required in the reverse engineering process for malware analysis and vulnerability detection, where often the original source code is not available for analysis. Most of the existing solutions focus on a manual feature engineering process and customized code matching algorithms that are inefficient and inaccurate. Recent deep-learning-based solutions embed the semantics of binary code into a latent space through supervised contrastive learning. However, one cannot cover all its possible forms in the training set to learn the variance of the same semantic. In this paper, we propose an unsupervised model aiming to learn the intrinsic representation of assembly code semantics. Specifically, we propose a Transformer-based auto-encoder-like language model for the low-level assembly code grammar to capture the abstract semantic representation. By coupling a Transformer encoder and a skip-gram-style loss design, it can learn a compact representation that is robust against different compilation options. We conduct experiments on four different block-level code similarity tasks. It shows that our method is more robust compared to the state-of-the-art. | Reject | This paper presents a transformer model for learning representations of assembly code blocks, trained using a variant of the masked language modeling objective that encodes the full code block token sequence into a single bottleneck vector and then uses that vector to decode all the masked out tokens. Overall reviewer assessment for this paper is on the rejection side, mostly due to the not so novel model architecture and training objective. 
Experiments show that this variant of the MLM performs significantly better than the standard MLM objective without the bottleneck, which surprisingly is even worse than simple TF-IDF in many tasks. This raises questions, and it is unclear from the paper why the variant with a sequence-level bottleneck should perform better than the standard MLM. Binary code similarity detection has many implications in security, so this is a good domain to explore further, and I encourage the authors to continue to improve this work and submit it to the next venue.
One related work also published in the ML community comes to mind that the authors might not be aware of: Graph matching networks for learning the similarity of graph structured objects by Li et al., ICML 2019, which also looked at binary code similarity detection, but works at the function level. It would also be good to look for other potentially missing related work.
"j8zbp5D9JlX",
"7MgkRFCNG63",
"oJp-DSIdvx"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes an ML-based binary similarity detection technique that uses a skip-gram to encode the instruction sequences and then uses a transformer to encode the whole code fragment. The paper evaluated in cross-compilation and obfuscation settings. + The paper is well-written and described very well. \n+ T... | [
1,
3,
6
] | [
5,
5,
2
] | [
"iclr_2022_36SHWj0Gp1",
"iclr_2022_36SHWj0Gp1",
"iclr_2022_36SHWj0Gp1"
] |
iclr_2022_KVhvw16pvi | TAG: Task-based Accumulated Gradients for Lifelong learning | When an agent encounters a continual stream of new tasks in the lifelong learning setting, it leverages the knowledge it gained from the earlier tasks to help learn the new tasks better. In such a scenario, identifying an efficient knowledge representation becomes a challenging problem. Most research works propose to either store a subset of examples from the past tasks in a replay buffer, dedicate a separate set of parameters to each task or penalize excessive updates over parameters by introducing a regularization term. While existing methods employ the general task-agnostic stochastic gradient descent update rule, we propose a task-aware optimizer that adapts the learning rate based on the relatedness among tasks. We utilize the directions taken by the parameters during the updates by additively accumulating the gradients specific to each task. These task-based accumulated gradients act as a knowledge base that is maintained and updated throughout the stream. We empirically show that our proposed adaptive learning rate not only accounts for catastrophic forgetting but also exhibits knowledge transfer. We also show that our method performs better than several state-of-the-art methods in lifelong learning on complex datasets. Moreover, our method can also be combined with the existing methods and achieve substantial improvement in performance. | Reject | The authors develop a memory-based method for continual learning that stores gradient information from past tasks. This memory is then used by a proposed task-aware optimizer that, based on the task relatedness, aims at preserving knowledge learned in previous tasks.
The initial reviews were reasonable but indicated that this paper was not yet ready to be published. In particular, the reviewers seemed to agree on the somewhat limited methodological novelty of the paper given prior work (such as LA-MAML and OGD in terms of method and GEM in terms of task similarity comparison).
In their response, the authors do seem to agree to a certain extent with some of the criticisms, but also point to clear differences with respect to previous work (and other distinguishing aspects such as a smaller memory footprint than OGD). The authors also carefully responded to reviewer comments and provided additional results when possible.
In the end, the main criticism from the reviewers remained (Reviewer 95tf also suggests that the authors should compare their method to others in terms of memory consumption (which the authors partly did) and compare to replay-based methods) and this paper was a borderline one. Three, out of the four, reviewers suggest that it is not ready to be published. One reviewer did give it a high score (8) but also understood the limitations raised by the other reviewers. As a result, my recommendation is that this paper falls below the acceptance threshold.
I am sorry for this recommendation, and I strongly suggest the authors consider the reviewers' suggestions in preparing the next version of this work. In particular, it seems like providing a full study of the memory usage of your approach vs. others, as well as providing more insights about the "trajectory" (see the comment from ZR5n), might go a long way toward improving the paper.
"nY19xWdtbM",
"WjOTKDcKht-",
"xGZbfQI6jN1",
"1aUkx2XpJD6",
"0anDxMCi_OO",
"tIWi40Syd0b",
"OIjMrDGv2b",
"OP1PVxp-0U",
"VVTfi1YHBEP",
"Bpf6DqimTCo",
"aeE0YTrxg4"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the author's feedback and additional experiments. I appreciate the author's efforts. I think the main concerns are still the limited technical novelty, so I have to keep my score. ",
"In this paper, the authors propose a new optimization method for continual learning. The authors propose a task-a... | [
-1,
5,
5,
-1,
-1,
-1,
-1,
-1,
-1,
8,
5
] | [
-1,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"OIjMrDGv2b",
"iclr_2022_KVhvw16pvi",
"iclr_2022_KVhvw16pvi",
"tIWi40Syd0b",
"OP1PVxp-0U",
"xGZbfQI6jN1",
"aeE0YTrxg4",
"Bpf6DqimTCo",
"WjOTKDcKht-",
"iclr_2022_KVhvw16pvi",
"iclr_2022_KVhvw16pvi"
] |
iclr_2022_NOApNZTiTNU | Aggressive Q-Learning with Ensembles: Achieving Both High Sample Efficiency and High Asymptotic Performance | Recently, Truncated Quantile Critics (TQC), using distributional representation of critics, was shown to provide state-of-the-art asymptotic training performance on all environments from the MuJoCo continuous control benchmark suite. Also recently, Randomized Ensemble Double Q-Learning (REDQ), using a high update-to-data ratio and target randomization, was shown to achieve high sample efficiency that is competitive with state-of-the-art model-based methods. In this paper, we propose a novel model-free algorithm, Aggressive Q-Learning with Ensembles (AQE), which improves the sample-efficiency performance of REDQ and the asymptotic performance of TQC, thereby providing overall state-of-the-art performance during all stages of training. Moreover, AQE is very simple, requiring neither distributional representation of critics nor target randomization. | Reject | This paper introduces a model-free RL algorithm claiming SOTA performance. All but one reviewer agreed on rejection.
#1 The empirical results are based on only 5 seeds (too low) and the plots across 5 domains show no clear evidence of improved performance due to overlapping error bars. The paper's poor empirical practice does not support the main contribution.
#2 The proposed method builds on REDQ, but the authors maintained in the response that their method performed better than REDQ (failing to articulate significant algorithmic novelty). Even the most positive reviewer (iNq8) did not agree when the authors claimed "our performance improvements are achieved by the innovations we introduced in our algorithm"; iNq8 responded "it is unclear whether this performance improvement is really meaningful". The authors never responded to iNq8's follow-up questions about overlapping error bars and differences in the behaviours produced by the new method.
Points #1 and #2 combine to form the clear conclusion that this work is not ready in its current form for publication.
"Ecrx5N_3-Zc",
"aQXiHheTdo7",
"pDqepZuTjK",
"gpsu-91fZmE",
"mkpBZ4817FE",
"COfX_ecukUY",
"od-iL8Ioi4G",
"Vtsg7FpTnfB",
"vKcPDbDXkyu",
"oaWnuGRtPxe",
"hOis2eW8ZK0"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"629\nThis paper studies data efficiency and asymptotic convergence rate of deep reinforcement learning and proposed a new algorithm called AQE and claims the algorithm are both fast in both senses. The paper is mostly empirical with results on Mujoco, and compared with TQC and SAC and showed competitive performanc... | [
5,
6,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
3
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"iclr_2022_NOApNZTiTNU",
"iclr_2022_NOApNZTiTNU",
"mkpBZ4817FE",
"hOis2eW8ZK0",
"aQXiHheTdo7",
"oaWnuGRtPxe",
"vKcPDbDXkyu",
"Ecrx5N_3-Zc",
"iclr_2022_NOApNZTiTNU",
"iclr_2022_NOApNZTiTNU",
"iclr_2022_NOApNZTiTNU"
] |
iclr_2022_1gEb_H1DEqZ | Logic Pre-Training of Language Models | Pre-trained language models (PrLMs) have been shown useful for enhancing a broad range of natural language understanding (NLU) tasks. However, the capacity for capturing logic relations in challenging NLU still remains a bottleneck even for state-of-the-art PrLM enhancement, which greatly stalled their reasoning abilities. Thus we propose logic pre-training of language models, leading to the logic reasoning ability equipped PrLM, \textsc{Prophet}. To let logic pre-training perform on a clear, accurate, and generalized knowledge basis, we introduce \textit{fact} instead of the plain language unit in previous PrLMs. The \textit{fact} is extracted through syntactic parsing in avoidance of unnecessary complex knowledge injection. Meanwhile, it enables training logic-aware models to be conducted on a more general language text. To explicitly guide the PrLM to capture logic relations, three pre-training objectives are introduced: 1) logical connectives masking to capture sentence-level logics, 2) logical structure completion to accurately capture facts from the original context, 3) logical path prediction on a logical graph to uncover global logic relationships among facts. We evaluate our model on a broad range of NLP and NLU tasks, including natural language inference, relation extraction, and machine reading comprehension with logical reasoning. Results show that the extracted fact and the newly introduced pre-training tasks can help \textsc{Prophet} achieve significant performance in all the downstream tasks, especially in logic reasoning related tasks. | Reject | This paper proposes a pre-training technique for improving the logical abilities of pre-trained language models.
Reviewers point to many issues with clarity and experimental evaluation. No response was given by the authors.
"0IA3D9P1rgp",
"dIvlQkOdExm",
"uqC7N-fcEzG",
"sNyCe5_HzEt"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The goal of the paper is to incorporate logical relations into pre-training of language models to solve the reliance of existing reasoning-enabled language models on external knowledge bases. This is done in a self-supervised way - facts (tuple of 2 arguments and a predicate) are parsed using dependency parsing, a... | [
5,
3,
5,
3
] | [
4,
5,
4,
5
] | [
"iclr_2022_1gEb_H1DEqZ",
"iclr_2022_1gEb_H1DEqZ",
"iclr_2022_1gEb_H1DEqZ",
"iclr_2022_1gEb_H1DEqZ"
] |
iclr_2022__cz2R6QnpQJ | Noise Reconstruction and Removal Network: A New Way to Denoise FIB-SEM Images | Recent advances in Focused Ion Beam-Scanning Electron Microscopy (FIB-SEM) allow the imaging and analysis of cellular ultrastructure at nanoscale resolution, but the collection of labels and/or noise-free data sets has several challenges, often immutable. Reasons range from time consuming manual annotations, requiring highly trained specialists, to introducing imaging artifacts from the prolonged scanning during acquisition. We propose a fully unsupervised Noise Reconstruction and Removal Network for denoising scanning electron microscopy images. The architecture, inspired by gated recurrent units, reconstructs and removes the noise by synthesizing the sequential data. At the same time, the fully unsupervised training guides the network in distinguishing true signal from noise and gives comparable/even better results than supervised approaches on 3D electron microscopy data sets. We provide detailed performance analysis using numerical as well as empirical metrics. | Reject | This paper addresses the challenging application of denoising FIB-SEM images. State-of-the-art results are reported on a real and a noisy-simulated dataset. Unfortunately, this paper failed to convince the reviewers and received 4 negative ratings. The paper misses critical comparisons against baselines and appears rather limited in scope. The authors failed to provide adequate answers to some of the reviewers' points. | train | [
"Ar4b0atKIJT",
"nfVUqLVAyRr",
"_FTmcUSyF62",
"KD-TUiX04xs",
"TSVhdep1DlZ",
"rkydarJv673",
"ukizC8DyNwV",
"WS6-ihPHwBF",
"Ys5wECW6lde"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes an unsupervised denoising convolutional neural network model for Focused Ion Beam-Scanning Electron Microscopy (FIB-SEM) images. The whole framework is adapted from the Noise2Noise approach. The proposed architecture Noise Reconstruction and Removal Network (NRRN) contains stacked layers of bui... | [
5,
-1,
-1,
-1,
-1,
3,
5,
5,
3
] | [
4,
-1,
-1,
-1,
-1,
3,
4,
3,
5
] | [
"iclr_2022__cz2R6QnpQJ",
"_FTmcUSyF62",
"Ar4b0atKIJT",
"TSVhdep1DlZ",
"ukizC8DyNwV",
"iclr_2022__cz2R6QnpQJ",
"iclr_2022__cz2R6QnpQJ",
"iclr_2022__cz2R6QnpQJ",
"iclr_2022__cz2R6QnpQJ"
] |
iclr_2022__HFPHFbJrP- | Certified Adversarial Robustness Under the Bounded Support Set | Deep neural networks (DNNs) have revealed severe vulnerability to adversarial perturbations; besides empirical adversarial training for robustness, the design of provably robust classifiers attracts more and more attention. The randomized smoothing method provides certified robustness in an architecture-agnostic way, which is further extended to a provable robustness framework using $f$-divergence. However, these methods cannot be applied to smoothing measures with a bounded support set, such as the uniform probability measure, due to the use of likelihood ratios in their certification methods. In this paper, we introduce a framework that is able to deal with robustness properties of arbitrary smoothing measures, including those with a bounded support set, by using the Wasserstein distance as well as the total variation distance. By applying our methodology to uniform probability measures with support set $B_{2}(O,r)$, we obtain certified robustness properties with respect to $l_{p}$-perturbations, and by applying it to uniform probability measures with support set $B_{\infty}(O,r)$, we obtain certified robustness properties with respect to $l_{1},l_{2},l_{\infty}$-perturbations. We present experimental results on the CIFAR-10 dataset with ResNet to validate our theory. It is worth mentioning that our certification procedure only costs constant computation time, which is an improvement upon the state-of-the-art methods. | Reject | The authors study robustness properties of arbitrary smoothing measures with bounded support using the Wasserstein distance and the total variation distance. Reviewers pointed out several weaknesses in this work. In particular, they mentioned that the paper is not well organized, the comparison with prior work is lacking, the conclusion of the theoretical analysis is not novel, and the experiments are not comprehensive. 
I suggest the authors take these comments into account in improving their work. | train | [
"wiQye0ZVt94",
"mikn39lcRtE",
"2ZneIbm4jfS",
"7BSe7WxCP3R",
"jp6sPWIK9n5",
"H7VcitC-Jrp"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would first like to thank the reviewer for the time reviewing our work and valuable comments. In the following, we will provide a step-by-step response to the comments of the reviewer.\n\n$\\bullet$ As for the motivation, the initial purpose of our work is to generalize Dvijotham's f-divergence-based framework... | [
-1,
-1,
-1,
3,
5,
3
] | [
-1,
-1,
-1,
4,
3,
3
] | [
"H7VcitC-Jrp",
"jp6sPWIK9n5",
"7BSe7WxCP3R",
"iclr_2022__HFPHFbJrP-",
"iclr_2022__HFPHFbJrP-",
"iclr_2022__HFPHFbJrP-"
] |
iclr_2022_bl9zYxOVwa | Understanding the robustness-accuracy tradeoff by rethinking robust fairness | Although current adversarial training (AT) methods can effectively improve the robustness on adversarial examples,
they usually lead to a decrease in accuracy, called the robustness-accuracy trade-off. In addition, researchers have recently discovered a robust fairness phenomenon in the AT model; that is, not all categories of the dataset have experienced a serious decline in accuracy with the introduction of AT methods. In this paper, we explore the relationship between the robustness-accuracy tradeoff and robust fairness for the first time. Empirically, we have found that AT will cause a substantial increase in the inter-class similarity, which could be the root cause of these two phenomena. We argue that the label smoothing (LS) is more than a trick in AT. The smoothness learned from LS can help reduce the excessive inter-class similarity caused by AT, and also reduce the intra-class variance, thereby significantly improving accuracy. Then, we explored the effect of another classic smoothing regularizer, namely, the maximum entropy (ME), and we have found ME can also help reduce both inter-class similarity and intra-class variance. Additionally, we revealed that TRADES actually implies the function of ME,
which can explain why TRADES usually performs better than PGD-AT on robustness. Finally, we proposed the maximum entropy PGD-AT (ME-AT) and the maximum entropy TRADES (ME-TRADES), and experimental results show that our methods can significantly mitigate both tradeoff and robust fairness. | Reject | The paper argues that adversarial training increases inter-class similarities, thereby increasing the misclassification of some classes and lowering accuracy parity across classes. It proposes to combine existing adversarial training methodologies, PGD-AT and TRADES, with a maximum entropy term to improve the classification fairness while remaining robust.
While they agree that the problem is timely and important, the reviewers identify the following issues that place the current iteration of the paper below the bar of acceptance: the comparison to other works on fair robust training and accuracy parity is incomplete; experimental evaluation is conducted only on CIFAR10, making the generalizability of the paper's claims about performance unclear; and the proposed methodology has low technical novelty. | train | [
"a76mAzAMbiA",
"2b9c0E_O5ro",
"cIw-I_mLx4",
"Q0O48osAz5e"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper investigates why adversarial training can sometimes present a worse trade-off between robustness and accuracy, and found one root cause could be that AT causes a substantial increase in the inter-class similarity. The paper then proposes combining both AT and TRADES with two smoothing techniques, label-... | [
6,
3,
3,
6
] | [
3,
4,
4,
4
] | [
"iclr_2022_bl9zYxOVwa",
"iclr_2022_bl9zYxOVwa",
"iclr_2022_bl9zYxOVwa",
"iclr_2022_bl9zYxOVwa"
] |
iclr_2022_bB6YLDJewoK | Simpler Calibration for Survival Analysis | Survival analysis, also known as time-to-event analysis, is the problem to predict the distribution of the time of the occurrence of an event. This problem has applications in various fields such as healthcare, security, and finance. While there have been many neural network models proposed for survival analysis, none of them are calibrated. This means that the average of the predicted distribution is different from the actual distribution in the dataset. Therefore, X-CAL has recently been proposed for the calibration, which is supposed to be used as a regularization term in the loss function of a neural network. X-CAL is formulated on the basis of the widely used definition of calibration for distribution regression. In this work, we propose new calibration definitions for distribution regression and survival analysis, and demonstrate a simpler alternative to X-CAL based on the new calibration definition for survival analysis.
 | Reject | A number of suggestions have been given about the manuscript. The evaluation raised questions about clarity, placement with respect to other approaches, choices for the design, etc. There are no immediate replies from the authors, so I hope the suggestions are useful for future work. | train | [
"dJH8WYepI7B",
"ZxTiLKVcWMj",
"4QsV_WmMQOt",
"5S5VG5x78C5"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper first defines a new definition of calibration which, in contrast to the one used in prior work, is confined to the maximum observed time in the data. They then propose a KM regularizer for making sure their survival curves are calibrated: they are closer to KM curves. They also claim that their regulari... | [
3,
3,
5,
5
] | [
4,
4,
3,
5
] | [
"iclr_2022_bB6YLDJewoK",
"iclr_2022_bB6YLDJewoK",
"iclr_2022_bB6YLDJewoK",
"iclr_2022_bB6YLDJewoK"
] |
iclr_2022_PHugX0j2xcE | Predictive Maintenance for Optical Networks in Robust Collaborative Learning | Machine learning (ML) has recently emerged as a powerful tool to enhance the proactive optical network maintenance and thereby, improve network reliability and operational efficiency, and reduce unplanned downtime and maintenance costs. However, it is challenging to develop an accurate and reliable ML based prognostic models due mainly to the unavailability of sufficient amount of training data since the device failure does not occur often in optical networks. Federated learning (FL) is a promising candidate to tackle the aforementioned challenge by enabling the development of a global ML model using datasets owned by many vendors without revealing their business-confidential data. While FL greatly enhances the data privacy, a global model can be strongly affected by a malicious local model. We propose a robust collaborative learning framework for predictive maintenance on cross-vendor in a dishonest setting. Our experiments confirm that a global ML model can be accurately built with sensitive datasets in federated learning even when a subset of vendors behave dishonestly.
 | Reject | The paper presents an optimization technique for optical networks based on federated learning. The motivation for using federated learning stems from the privacy of datasets arising from different operators. The performance of the method is compared to the one based on centralized learning. Despite demonstrating an interesting and promising application of federated learning, the paper is rather weak in its methodological contribution. Its experimental evaluation is rather artificial, with an FL problem generated by splitting the dataset for a centralized problem into parts. No response to the reviewers' comments was provided. | train | [
"7ADIGC3vmpx",
"lBDZ95_iW59",
"EHONubA5AwN",
"6Bg9EygTPY"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper uses federated machine learning for predictive maintenance on optical networks. Federated learning provides a number of advantages, including security, privacy, and accuracy. The accuracy claims are motivated by having a broader set of failure examples to draw from, addressing a known issue in predictiv... | [
5,
3,
3,
3
] | [
3,
4,
4,
4
] | [
"iclr_2022_PHugX0j2xcE",
"iclr_2022_PHugX0j2xcE",
"iclr_2022_PHugX0j2xcE",
"iclr_2022_PHugX0j2xcE"
] |
iclr_2022_3Qh8ezpsca | Towards simple time-to-event modeling: optimizing neural networks via rank regression | Time-to-event analysis, also known as survival analysis, aims to predict the first occurred event time, conditional on a set of features.
However, the presence of censorship brings much complexity in learning algorithms due to data incompleteness.
Hazard-based models (e.g. Cox's proportional hazards) and accelerated failure time (AFT) models are two popular tools in time-to-event modeling, requiring the proportional hazards and linearity assumptions, respectively.
In addition, AFT models require pre-specified parametric distributional assumptions in most cases.
To alleviate such strict assumptions and improve predictive performance, there have been many deep learning approaches for hazard-based models in recent years.
However, compared to hazard-based methods, AFT-based representation learning has received limited attention in neural network literature, despite its model simplicity and interpretability.
In this work, we introduce a Deep AFT Rank-regression for Time-to-event prediction model (DART), which is a deep learning-based semiparametric AFT model, and propose a $l_1$-type rank loss function that is more suitable for optimizing neural networks.
Unlike existing neural network-based AFT models, the proposed model is semiparametric in that any distributional assumption is not imposed for the survival time distribution without requiring further hyperparameters or complicated model architectures.
We verify the usefulness of DART via quantitative analysis upon various benchmark datasets.
The results show that our method has considerable potential to model high-throughput censored time-to-event data. | Reject | Most reviewers came to the conclusion that this work lacks novelty and theoretical depth. Further severe concerns about the validity of some statements and about the experimental setup have been raised. The rebuttal was not perceived as fully convincing, and nobody wanted to champion this paper.
I share most of these points of criticism. Although there is certainly some potential in this work, I think it is not ready for publication and would (at least) need a major revision. | val | [
"Uo2XgxHJM-Z",
"iP6fWvxbzW2",
"ac7xAEnuKBw",
"fk5lneJU1gk",
"N1gsanu1P9f",
"nCFsoIoVbW",
"1yib_RsK-1U",
"34n-sn_gul",
"GBIKSjMwKWV",
"12W7IIJlpaC",
"5FO0cy4Ywh",
"-xuYZ6bJMU",
"HQ0eU9uYqir",
"XbGtzk1C6uA"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for responding to my review. I think the exposition changes made are a bit substantial and warrants another round of careful review. \n\nRegarding the experiments (Q3 in your response): using similar or even much fewer number of nodes and layers for the KKbox dataset for DeepHit (compared to Kvamme et al) ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
5,
1,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
2,
5,
4
] | [
"N1gsanu1P9f",
"iclr_2022_3Qh8ezpsca",
"fk5lneJU1gk",
"XbGtzk1C6uA",
"HQ0eU9uYqir",
"N1gsanu1P9f",
"-xuYZ6bJMU",
"5FO0cy4Ywh",
"12W7IIJlpaC",
"iclr_2022_3Qh8ezpsca",
"iclr_2022_3Qh8ezpsca",
"iclr_2022_3Qh8ezpsca",
"iclr_2022_3Qh8ezpsca",
"iclr_2022_3Qh8ezpsca"
] |
iclr_2022__67HnXYixmN | Nested Policy Reinforcement Learning for Clinical Decision Support | Off-policy reinforcement learning (RL) has proven to be a powerful framework for guiding agents' actions in environments with stochastic rewards and unknown or noisy state dynamics. In many real-world settings, these agents must operate in multiple environments, each with slightly different dynamics. For example, we may be interested in developing policies to guide medical treatment for patients with and without a given disease, or policies to navigate curriculum design for students with and without a learning disability. Here, we introduce nested policy fitted Q-iteration (NFQI), an RL framework that finds optimal policies in environments that exhibit such a structure. Our approach develops a nested $Q$-value function that takes advantage of the shared structure between two groups of observations from two separate environments while allowing their policies to be distinct from one another. We find that NFQI yields policies that rely on relevant features and perform at least as well as a policy that does not consider group structure. We demonstrate NFQI's performance using an OpenAI Gym environment and a clinical decision making RL task. Our results suggest that NFQI can develop policies that are better suited to many real-world clinical environments. | Reject | This paper provides a method for offline RL in settings where the environment may exhibit significant similar structure, such as one part having nearly the same dynamics as other parts. The work is motivated in part by healthcare settings. The reviewers appreciated the potential applications to areas like healthcare but also thought there is a strong body of related work (e.g. transfer learning, meta-RL and other related papers) and it was unclear how novel the approach was within that related work, or how it would compare. The authors did not respond to the reviewers’ reviews. 
We hope their input is useful to the authors in revising their work for the future. | train | [
"aG8WWlSY5bv",
"uXXdFgHHLA9",
"u_Pe0LRu7XI",
"CUbJDC3RLIp"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes an off-policy RL called NFQI (Nested Fitted Q-Iteration) as an extension of Fitted Q-iteration to estimate group-specific policies and account for group structure in the data having two pre-defined groups of observations (background and foreground). NFQI imposes a structure on the family of func... | [
5,
3,
3,
3
] | [
2,
5,
4,
4
] | [
"iclr_2022__67HnXYixmN",
"iclr_2022__67HnXYixmN",
"iclr_2022__67HnXYixmN",
"iclr_2022__67HnXYixmN"
] |
iclr_2022_Rivn22SJjg9 | Contrastive Embeddings for Neural Architectures | The performance of algorithms for neural architecture search strongly depends on the parametrization of the search space. We use contrastive learning to identify networks across different initializations based on their data Jacobians and their number of parameters, and automatically produce the first architecture embeddings independent from the parametrization of the search space. Using our contrastive embeddings, we show that traditional black-box optimization algorithms, without modification, can reach state-of-the-art performance in Neural Architecture Search. As our method provides a unified embedding space, we successfully perform transfer learning between search spaces. Finally, we show the evolution of embeddings during training, motivating future studies into using embeddings at different training stages to gain a deeper understanding of the networks in a search space. | Reject | All reviewers unanimously recommend rejecting this submission, and I concur with that recommendation. However, many reviewers were quite pleased with the premise and basic concept of the submission and would have liked to see a clearer version with a bit more in terms of experiments.
I agree with the submission that the most interesting architecture search research is about the search space, not the search algorithm.
The submission uses measurements of the data Jacobian matrix at different points to construct an extended data Jacobian matrix that then is projected and serves as input to a contrastive embedding learning algorithm. The resulting architecture embeddings can be used for many different things, including architecture search.
Ultimately, I am recommending rejecting this submission not because of one single overriding weakness, but because the totality of issues the reviewers raised make it clear the submission is not strong enough to publish in its current form. I encourage the authors to continue this line of work and produce a stronger submission in the future to ICLR or another venue. | train | [
"sewXvNOhSuk",
"6eWdWyarD18",
"BSq4bMCeUt",
"1kspQDAhPcs",
"dSHkw18wqW6",
"Jx4Z5fmvO68",
"vGtFDqjBmZe",
"2owNyiPQX0t",
"g4o4VuqGBlB",
"LoQ3AGdxO7t"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your response. It indeed cleared some of my confusing. For example, the decoding process is finding the nearest architecture. \n\nUnfortunately, I believe the paper still is not ready for publication, for a few reasons:\n1. Some (non-trivial) components of the proposed procedure is not well motivated. ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"1kspQDAhPcs",
"Jx4Z5fmvO68",
"dSHkw18wqW6",
"LoQ3AGdxO7t",
"g4o4VuqGBlB",
"2owNyiPQX0t",
"iclr_2022_Rivn22SJjg9",
"iclr_2022_Rivn22SJjg9",
"iclr_2022_Rivn22SJjg9",
"iclr_2022_Rivn22SJjg9"
] |
iclr_2022_A4-dkBuXbA | Deep convolutional recurrent neural network for short-interval EEG motor imagery classification | In this paper, a high-performance short-interval motor imagery classifier is presented that has good potential for use in real-time EEG-based brain-computer interfaces (BCIs). A hybrid deep Convolutional Recurrent Neural Network with Temporal Attention (CRNN-TA) is described that achieves state-of-art performance in four-class classification (73% accuracy, 60% kappa, 3% higher than the winner of the BCI IV 2A competition). An adaptation of the guided grad-CAM method is proposed for decision visualization. A novel EEG data augmentation technique, shuffled-crossover, is introduced that leads to a 3% increase in classification accuracy (relative to a comparable baseline). Classification accuracies for different windows sizes and time intervals are evaluated. An attention mechanism is also proposed that could serve as a feedback loop during data capture for the rejection of bad trials (e.g., those in which participants were inattentive). | Reject | This paper develops a deep convolutional network with RNN layers and
a new data augmentation method for EEG motor imagery classification. Reviewers agreed that the paper was not very clearly written, and that without comparisons to other related methods, or at least a demonstration of the importance of each of the components of the model (through, for example, ablation analyses), it was hard to understand the generality of the approach. The authors did not respond to the reviews, so I am recommending not accepting this paper. | train | [
"k3H6sem3WIJ",
"1mqthJIA6Ei",
"ypzh9j0ByJ0"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a model combining convolutional operation and recurrent operation along with temporal attention for EEG classification. Strengths:\n1. The topic of EEG classification is meaningful. \n\n2. The modified class activation mapping (Grad-CAM) is interesting to me.\n\nWeaknesses:\n1. The representat... | [
1,
1,
3
] | [
5,
5,
2
] | [
"iclr_2022_A4-dkBuXbA",
"iclr_2022_A4-dkBuXbA",
"iclr_2022_A4-dkBuXbA"
] |
iclr_2022_KUmMSZ_r28W | Particle Based Stochastic Policy Optimization | Stochastic policies have been widely applied for their good properties in exploration and uncertainty quantification. Modeling the policy distribution by a joint state-action distribution within the exponential family has enabled flexibility in exploration and learning multi-modal policies, and has also involved the probabilistic perspective of deep reinforcement learning (RL). The connection between probabilistic inference and RL makes it possible to leverage the advancements of probabilistic optimization tools. However, recent efforts are limited to the minimization of the reverse KL divergence, which is confidence-seeking and may fade the merit of a stochastic policy. To leverage the full potential of stochastic policies and provide more flexible properties, there is a strong motivation to consider different update rules during policy optimization. In this paper, we propose a particle-based probabilistic policy optimization framework, ParPI, which enables the usage of a broad family of divergences or distances, such as f-divergences and the Wasserstein distance, which could better serve the probabilistic behavior of the learned stochastic policy. Experiments in both online and offline settings demonstrate the effectiveness of the proposed algorithm as well as the characteristics of different discrepancy measures for policy optimization. | Reject | The manuscript extends the popular "RL as inference" framework with a generalized divergence minimization perspective. The authors observe that most policy optimization can be thought of as minimizing a reverse KL divergence, which has potentially undesirable mode-seeking properties. The authors propose a particle-based scheme wherein samples generated via Langevin dynamics are used for learning.
Several reviewers found the ideas presented interesting, citing potential novelty and high potential for tackling an important problem. Unfortunately, all reviewers also found major shortcomings, starting with presentation ("messy" presentation, undefined and inconsistently used notation, issues around motivation and logical flow, vague and imprecise use of language, etc.). Several reviewers also had more fundamental criticisms, notably Uu6f, who helpfully provided quite actionable feedback on the presentation. Unfortunately, discussion ended with the reviews: the authors offered no rebuttal or updates. The AC considers this a missed opportunity.
The AC concurs with, first and foremost, the concerns around presentation. The current state of the manuscript makes it difficult to parse apart the contribution being made, and in light of all 4 reviewers recommending rejection either strongly or weakly and with no rebuttals or responses put forth, I have no basis to recommend anything other than rejection. | train | [
"TvBD2OkinTd",
"yjK5vJ63KkY",
"ej1UECVNFs8",
"V2O6yzT0yTA"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a new method for policy optimization in reinforcement learning. Building off of prior connections to probabilistic inference, the authors propose a framework for optimizing policies using general divergences/distances by way of particle-based methods for sampling. Some theory is presented. Expe... | [
3,
5,
5,
3
] | [
3,
3,
4,
4
] | [
"iclr_2022_KUmMSZ_r28W",
"iclr_2022_KUmMSZ_r28W",
"iclr_2022_KUmMSZ_r28W",
"iclr_2022_KUmMSZ_r28W"
] |
iclr_2022_uwnOHjgUrTa | DNN Quantization with Attention | Low-bit quantization of network weights and activations can drastically reduce the memory footprint, complexity, energy consumption and latency of Deep Neural Networks (DNNs). Many different quantization methods like min-max quantization, Statistics-Aware Weight Binning (SAWB) or Binary Weight Network (BWN) have been proposed in the past. However, they still cause a considerable accuracy drop, in particular when applied to complex learning tasks or lightweight DNN architectures. In this paper, we propose a novel training procedure that can be used to improve the performance of existing quantization methods. We call this procedure \textit{DNN Quantization with Attention} (DQA). It relaxes the training problem, using a learnable linear combination of high, medium and low-bit quantization at the beginning, while converging to a single low-bit quantization at the end of the training. We show empirically that this relaxation effectively smooths the loss function and therefore helps convergence. Moreover, we conduct experiments and show that our procedure improves the performance of many state-of-the-art quantization methods on various object recognition tasks. In particular, we apply DQA with min-max, SAWB and BWN to train $2$bit quantized DNNs on the CIFAR10, CIFAR100 and ImageNet ILSVRC 2012 datasets, achieving very good accuracy compared to other counterparts. | Reject | This paper proposes a new learning procedure for quantizing neural networks. Basically, the DQA method proposed in this paper uses attention to obtain a linear combination of existing network quantization techniques and uses it to pursue more efficient quantization.
Overall, it seems the submission was written in haste, so there are many typos and errors. Above all, the claimed motivation that it can be applied to various existing techniques was not demonstrated experimentally, since it only covers one somewhat obsolete work. In addition, as in [1], it seems necessary to quantize not only weights but also activations, or to verify the method on lightweight networks such as MobileNetV2 rather than ResNet.
[1] Cluster-Promoting Quantization with Bit-Drop for Minimizing Network Quantization Loss, ICCV 2021 | train | [
"qat_K3Zj7RE",
"S6sxTQh2JBs",
"c_N-QtsDjZ1",
"AXAn2zvcCyv",
"-Tq_B4aYgz4",
"WD_nn2A1zz5",
"IP7lhqGn1b1",
"LPSt7RwLppj"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper attempt to address a challenging quantization problem, i.e., low-bit quantization. This work utilizes a learnable linear combination of high, medium, and low-bit quantization at the beginning while converging to a single low-bit quantization at the end of the training. In the quantization procedure, mul... | [
3,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
2,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"iclr_2022_uwnOHjgUrTa",
"LPSt7RwLppj",
"qat_K3Zj7RE",
"IP7lhqGn1b1",
"WD_nn2A1zz5",
"iclr_2022_uwnOHjgUrTa",
"iclr_2022_uwnOHjgUrTa",
"iclr_2022_uwnOHjgUrTa"
] |
iclr_2022_Eot1M5o2Zy | AestheticNet: Reducing bias in facial data sets under ethical considerations | Facial Beauty Prediction (FBP) aims to develop a machine that can automatically evaluate facial attractiveness. Usually, these results were highly correlated with human ratings, and therefore also reflected human bias in annotations. Everyone will have biases that are usually subconscious and not easy to notice. Unconscious bias deserves more attention than explicit discrimination. It affects moral judgement and can evade moral responsibility, and we cannot eliminate it completely. A new challenge for scientists is to provide training data and AI algorithms that can withstand distorted information. Our experiments prove that human aesthetic judgements are usually biased. In this work, we introduce AestheticNet, the most advanced attractiveness prediction network, with a Pearson correlation coefficient of 0.9601, which is significantly better than the competition. This network is then used to enrich the training data with synthetic images in order to overwrite the ground truth values with fair assessments.
We propose a new method to generate an unbiased CNN to improve the fairness of machine learning. Prediction and recommender systems based on Artificial Intelligence (AI) technology are widely used in various sectors of industry, such as intelligent recruitment, security, etc. Therefore, their fairness is very important. Our research provides a practical example of how to build a fair and trustable AI. | Reject | The paper proposes a new neural network, the aestheticNet, for a bias-free facial beauty prediction.
All the reviewers agree that the work is not suitable for publication as it raises some serious ethical concerns:
* Prediction of beauty (aesthetic scores) is a potential harmful application. Well-intended as it may be, a research along these lines might be harmful.
* Non-anonymity issue: the writing reveals/implies the authors' identity through references to previous work
* Research integrity issues (e.g., plagiarism, dual submission), a figure is copied from previous work.
There is also a concern that the work is not novel and not interesting as such.
The authors did not respond to the concerns.
I suggest rejection. | test | [
"roLoOa75r2",
"Xtk9bACujQo",
"tKmqnqap-g7",
"pXSr7kNIHSZ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Paper proposed aestheicNet in order to solve the problem of bias in beauty prediction. - Motivation for this type of work is lacking. What can be gained from beauty prediction, even if it is unbiased?\n- Hypothesis 1 is known, this is not a new hypothesis. The paper cites works (Gerlach et al, Akbari) that show th... | [
1,
1,
6,
1
] | [
5,
5,
4,
4
] | [
"iclr_2022_Eot1M5o2Zy",
"iclr_2022_Eot1M5o2Zy",
"iclr_2022_Eot1M5o2Zy",
"iclr_2022_Eot1M5o2Zy"
] |
iclr_2022_Yn4CPz_LRKO | Conditional GANs with Auxiliary Discriminative Classifier | Conditional generative models aim to learn the underlying joint distribution of data and labels, and thus realize conditional generation. Among them, auxiliary classifier generative adversarial networks (AC-GAN) have been widely used, but suffer from the problem of low intra-class diversity on generated samples. In this paper, we point out that the fundamental reason is that the classifier of AC-GAN is generator-agnostic, and therefore cannot provide informative guidance to the generator to approximate the target distribution, resulting in minimization of conditional entropy that decreases the intra-class diversity. Motivated by this observation, we propose a novel conditional GAN with auxiliary \textit{discriminative} classifier (ADC-GAN) to resolve the problem of AC-GAN. Specifically, the proposed auxiliary \textit{discriminative} classifier becomes generator-aware by recognizing the labels of the real data and the generated data \textit{discriminatively}. Our theoretical analysis reveals that the generator can faithfully replicate the target distribution even without the original discriminator, making the proposed ADC-GAN robust to the hyper-parameter and stable during the training process. Extensive experimental results on synthetic and real-world datasets demonstrate the superiority of ADC-GAN on conditional generative modeling compared to competing methods. | Reject | The paper proposes a conditional generative adversarial network with an auxiliary discriminative classifier for conditional generative modeling. The auxiliary discriminative classifier can provide the discrepancy between the joint distribution of the real data and labels and that of the generated data and labels to the generator by discriminatively predicting the label of the real and generated data. Experiment results are provided to demonstrate the effectiveness of the proposed idea. 
The current paper received mixed ratings after rebuttal (5, 6, 5, 8). Except for one reviewer (Reviewer uPwH), who would champion the paper with a score of 8, the concerns of the other three reviewers remain. To be specific, even though Reviewer ebJs assigns a score of 6, he/she doesn't champion the paper because additional experiments requested were not provided by the authors, including (i) training on more datasets or higher resolutions, (ii) visualizing feature norm and grad norm as done in ReACGAN, (iii) experiments on ADC-GAN without the unconditional GAN loss. Reviewer DPgR pointed out that the paper might have a novelty issue because it bears some similarities with other works, but the revision lacks a discussion of them. Additionally, Reviewer mZT7 pointed out that the authors didn't provide a revised paper during the rebuttal, making it difficult to assess the quality of the final paper. As a result, the AC thinks that the paper is not ready for publication at the current stage and recommends rejection. The AC urges the authors to revise their paper according to the comments provided by the reviewers, and resubmit their work to a future venue. | train | [
"bzkKdkKBDlF",
"TGwuFQXPpr",
"F9V6r9VE3YK",
"xVvICWi03tc",
"UZFrokFokuy",
"g4yYrWSi6zb",
"MJcu9y0lVnm",
"Glk94RfuJ5",
"tyY7GLw994z",
"DHiFAqzr0zH",
"sPLQC5gW_2w",
"mtl34WDRDzH",
"i5QfqaFehlT",
"xkN1-XjXWlY",
"N4Jdjox-2m",
"20CALrcGm7Y",
"LbdeIQnr2Df"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
" ADC-GAN and SSGAN-LA are devised to solve problems in different fields (conditional GAN and self-supervised GANs), but these two fields can be grouped under the theme of GAN. Also, both models address the same problem, specifically the generator-agnostic optimization process of GAN. So, I stick with my previous s... | [
-1,
-1,
5,
-1,
-1,
-1,
-1,
8,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
5
] | [
-1,
-1,
5,
-1,
-1,
-1,
-1,
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
5
] | [
"TGwuFQXPpr",
"xVvICWi03tc",
"iclr_2022_Yn4CPz_LRKO",
"i5QfqaFehlT",
"g4yYrWSi6zb",
"20CALrcGm7Y",
"xkN1-XjXWlY",
"iclr_2022_Yn4CPz_LRKO",
"LbdeIQnr2Df",
"sPLQC5gW_2w",
"iclr_2022_Yn4CPz_LRKO",
"DHiFAqzr0zH",
"N4Jdjox-2m",
"Glk94RfuJ5",
"F9V6r9VE3YK",
"LbdeIQnr2Df",
"iclr_2022_Yn4CPz... |
iclr_2022_c4iTLTkpY5 | Personalized Heterogeneous Federated Learning with Gradient Similarity | In conventional federated learning (FL), the local models of multiple clients are trained independently on their private data, and the central server generates the shared global model by aggregating the local models. However, the global model often fails to adapt to each client due to statistical and systems heterogeneities, such as non-IID data and inconsistencies in clients' hardware and bandwidth. To address these problems, we propose the Subclass Personalized FL (SPFL) algorithm for non-IID data in synchronous FL and the Personalized Leap Gradient Approximation (PLGA) algorithm for asynchronous FL. In SPFL, the server uses the Softmax Normalized Gradient Similarity (SNGS) to weight the relationship between clients, and sends the personalized global model to each client. In PLGA, the server also applies the SNGS to weight the relationship between client and itself, and uses the first-order Taylor expansion of the gradient to approximate the model of the delayed clients. To the best of our knowledge, this is one of the few studies explicitly investigating personalization in asynchronous FL. The stage strategy of ResNet is further applied to improve the performance of FL. The experimental results show that (1) in synchronous FL, the SPFL algorithm used on non-IID data outperforms the vanilla FedAvg, PerFedAvg, and FedUpdate algorithms, improving the accuracy by $1.81\!\sim\!18.46\%$ on four datasets (CIFAR10, CIFAR100, MNIST, EMNIST), while still maintaining state-of-the-art performance on IID data; (2) in asynchronous FL, compared with the vanilla FedAvg, PerFedAvg, and FedAsync algorithms, the PLGA algorithm improves the accuracy by $0.23\!\sim\!12.63\%$ on the same four datasets of non-IID data. | Reject | This paper proposes a personalized federated learning algorithm which takes into account the similarity of gradients of different users to update the model.
Although the ideas presented are intuitive, the algorithms have fundamental limitations: for example, they may incur a large overhead in memory, communication, and computation, and are unsuitable for privacy-preserving machine learning. In addition, there is no rigorous analysis and the experiments are not convincing. This is a clear rejection. | train | [
"PitxTWCnZU",
"zaxp6MHIhfM",
"tg5Dv-Hd2xP",
"JgZRsfafkkf",
"XuYUVsHxPmK"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Evaluation remains the same as there were no further revisions or discussion.",
"This paper proposes gradient based weighting strategies for synchronous (SPFL) and asynch (PLGA) personalized Federated Learning (FL). Authors also provide experimental results comparing their methods to baselines on many benchmark... | [
-1,
5,
3,
3,
3
] | [
-1,
3,
4,
4,
4
] | [
"zaxp6MHIhfM",
"iclr_2022_c4iTLTkpY5",
"iclr_2022_c4iTLTkpY5",
"iclr_2022_c4iTLTkpY5",
"iclr_2022_c4iTLTkpY5"
] |
iclr_2022_IY4IsjvUhZ | Characterising the Area Under the Curve Loss Function Landscape | One of the most common metrics to evaluate neural network classifiers is the
area under the receiver operating characteristic curve (AUC). However,
optimisation of the AUC as the loss function during network
training is not a standard procedure. Here we compare minimising the cross-entropy (CE) loss
and optimising the AUC directly. In particular, we analyse the loss function
landscape (LFL) of approximate AUC (appAUC) loss functions to discover
the organisation of this solution space. We discuss various surrogates for AUC approximation and show their differences.
We find that the characteristics of the appAUC landscape are significantly
different from the CE landscape. The approximate AUC loss function improves
testing AUC, and the appAUC landscape has substantially more minima, but
these minima are less robust, with larger average Hessian eigenvalues. We provide a theoretical foundation to explain these results.
To generalise our results, we lastly provide an overview of how the
LFL can help to guide loss function analysis and selection. | Reject | The paper analyses the loss landscape induced by AUC loss. Reviewers found critical issues with the paper, and the Authors have not provided feedback. As such I have to recommend rejecting the paper. I thank the Authors for submitting the paper to the ICLR conference. I hope the reviews will be helpful in improving the paper. | val | [
"3tByD9-5-tm",
"9MucHfLbfsA",
"NLzUrGrByTO",
"xR-GLrB-ns",
"ZzFFUojXjM1"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper compares using the cross-entropy and approximated AUC loss functions in neural network training with AUC evaluation. The authors analyze loss function landscapes via solution space and minima properties. Some approximated AUC loss functions show better performance than standard learning via cross-entropy... | [
5,
3,
3,
3,
6
] | [
2,
4,
4,
3,
3
] | [
"iclr_2022_IY4IsjvUhZ",
"iclr_2022_IY4IsjvUhZ",
"iclr_2022_IY4IsjvUhZ",
"iclr_2022_IY4IsjvUhZ",
"iclr_2022_IY4IsjvUhZ"
] |
iclr_2022_TuR3pmKgERp | Hyperspherical embedding for novel class classification | Deep neural networks proved to be useful to learn representations and perform classification on many different modalities of data. Traditional approaches work well on the closed set problem. For learning tasks involving novel classes, known as the open set problem, the metric learning approach has been proposed. However, while promising, common metric learning approaches require pairwise learning, which significantly increases training cost while adding additional challenges. In this paper we present a method in which the similarity of samples projected onto a feature space is enforced by a metric learning approach without requiring
pairwise evaluation. We compare our approach against known methods in different datasets, achieving results up to $81\%$ more accurate. | Reject | This paper tackles an open-set setting where new classes (with few labeled examples) are introduced after the initial pre-training on different categories. A simple approach is proposed based on a normalized softmax classifier and feature averaging to generate a classifier for the new categories. Results are shown on a few standard datasets as well as the Pl@ntnet dataset.
While reviewers found the topic and setting (as well as the Pl@ntnet dataset) interesting, they had significant concerns about the novelty (t3Uk, Tp5p, AHvJ), contribution, and rigor of the empirical evaluation. Since the method is simple and largely leverages prior works, the latter is especially important; reviewers pointed out that some of the latest work in metric learning is ignored (e.g. Proxy Anchor, Tp5p and AHvJ), and no comparison is made to other classes of methods that (by the authors' admission) are very close to the setting, such as open-set recognition (especially those that seek to classify new categories) and incremental learning.
Unfortunately, no rebuttal was provided by the authors, so these significant concerns remain and the paper cannot be accepted as-is. Since the reviewers did appreciate the setting and dataset, I recommend refining the paper and significantly beefing up the empirical evaluation for future resubmissions. | train | [
"ji5DgSt128h",
"3nlmViLUY05",
"5ATj7dtAfn8",
"Y65EanNT7z"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper presents how the normalized softmax loss can be used on the open set problem. Superior results seem to be achieved without pairwise training 1) The paper completely missed the related works. Many papers addressing this problem are ignored. Basically only two metric learning approaches are in... | [
3,
3,
3,
3
] | [
4,
3,
4,
4
] | [
"iclr_2022_TuR3pmKgERp",
"iclr_2022_TuR3pmKgERp",
"iclr_2022_TuR3pmKgERp",
"iclr_2022_TuR3pmKgERp"
] |
iclr_2022_OzXAw20k_H | Deep Learning of Intrinsically Motivated Options in the Arcade Learning Environment | Although Intrinsic Motivation allows a Reinforcement Learning agent to generate directed behaviors in an environment, even with sparse or noisy rewards, combining intrinsic and extrinsic rewards is non-trivial. As an alternative to the widespread method of a weighted sum of rewards, Explore Options let the agent call an intrinsically motivated agent in order to observe and learn from interesting behaviors in the environment. Such options have only been established for simple tabular cases, and are unfit for high-dimensional spaces. In this paper, we propose Deep Explore Options, revising Explore Options within the Deep Reinforcement Learning paradigm to tackle complex visual problems. Deep Explore Options can naturally learn from several unrelated intrinsic rewards, ignore harmful intrinsic rewards, and learn to balance exploration, but also isolate exploitative or exploratory behaviors. In order to achieve this, we first introduce J-PER, a new transition-selection algorithm based on the interest of multiple agents. Next, we propose to consider intrinsic reward learning as an auxiliary task, with a resulting architecture achieving $50\%$ faster wall-clock speed and building a stronger, shared representation. We test Deep Explore Options on hard and easy exploration games of the Atari Suite, following a benchmarking study to ensure fairness. Our results show that not only can they learn from multiple intrinsic rewards, they are a very strong alternative to a weighted sum of rewards, convincingly beating the baselines in 4 of the 6 tested environments, and with comparable performance in the other 2. | Reject | This work gives an interesting perspective on combining options with exploration in the non-tabular case.
The reviewers have raised a number of important areas for improvement (primarily missing ablations to support the claims of the paper, but also specific suggestions about improvements to the text), and feel that sufficient work is required to address these that the paper should be rejected at this time. | train | [
"0UsuqtBm-i",
"ym0kiYn4HFU",
"j4ne09H2_Yb",
"2M0lreEHa35",
"9e3UD7Pvwdt",
"vK_Eeandv4-",
"LezwGdpPY5V",
"wRGDII5UDN",
"RdF_J0UT2QL",
"QnHAG_-SIf"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I appreciate the authors' clarifications and receptiveness to the feedback provided in the reviews. I hope future versions of the work will benefit from the comments from the review period. I will keep my score.",
" I've gone over the authors' response and I'm keeping my final score/recommendation for this pape... | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
2,
4
] | [
"wRGDII5UDN",
"QnHAG_-SIf",
"QnHAG_-SIf",
"RdF_J0UT2QL",
"wRGDII5UDN",
"LezwGdpPY5V",
"iclr_2022_OzXAw20k_H",
"iclr_2022_OzXAw20k_H",
"iclr_2022_OzXAw20k_H",
"iclr_2022_OzXAw20k_H"
] |
iclr_2022_jJJWwrMrEsx | Truth Table Deep Convolutional Neural Network, A New SAT-Encodable Architecture - Application To Complete Robustness |
With the expanding role of neural networks, the need for formal verification of their behavior, interpretability and human post-processing has become critical in many applications. In 2018, it was shown that Binary Neural Networks (BNNs) have an equivalent representation in Boolean logic and can be formally analyzed using logical reasoning tools such as SAT or MaxSAT solvers. This formulation is powerful as it allows us to address a vast range of questions: existential, probabilistic, explanation generation, etc. However, to date, only BNNs can be transformed into a SAT formula, and their strong binary constraints limit their natural accuracy. Moreover, the corresponding SAT conversion method intrinsically leads to formulas with a large number of variables and clauses, impeding interpretability as well as formal verification scalability. In this work, we introduce Truth Table Deep Convolutional Neural Networks (TT-DCNNs), a new family of SAT-encodable models featuring real-valued weights and real intermediate values as well as a highly interpretable conversion method. The TT-DCNN architecture enables for the first time all the logical classification rules to be extracted from a performant neural network, which can then be easily interpreted by anyone familiar with the domain. Therefore, this allows integrating human knowledge in post-processing as well as enumerating all possible inputs/outputs prior to deployment in production. We believe our new architecture bridges the gap between eXplainable AI (XAI) and formal verification. First, we experimentally show that TT-DCNNs offer a better tradeoff between natural accuracy and formal verification than BNNs. Then, in the robustness verification setting, we demonstrate that TT-DCNNs outperform the verifiable accuracy of BNNs with a comparable computation time. Finally, we also drastically decrease the number of clauses and variables, enabling the usage of general SAT solvers and exact model counting solvers.
Our developed real-valued network has general applications and we believe that its demonstrated robustness constitutes a suitable response to the rising demand for functional formal verification. | Reject | I think there is good research behind this paper, but the presentation issues make it difficult to argue for acceptance.
On the positive side, the paper has made a clear advance in terms of the ability to do full SAT-based verification of neural networks. However, there are also important issues with the paper that prevent it from being accepted:
* The paper argues for the value of the new approach for *both* verifiability and interpretability, where interpretability is measured in terms of the ability to make targeted adjustments to the network to change its behavior. These are very different goals, but they are conflated in different parts of the paper, leading to confusion, for example, from reviewer RhEH.
* The paper only compares against SAT/SMT-based verification, but completely ignores other approaches to verification that are arguably more effective for many problems. In particular, there is an emerging literature on Abstract Interpretation-based verification that is significantly more scalable than SAT-based verification and which this paper ignores.
* The paper's claims sometimes get ahead of the presented evidence, as pointed out by reviewer garj.
So overall, I think this paper needs another iteration before it is ready for acceptance. | train | [
"H4Q2YBQDMMS",
"l0dMtI9EwR",
"9C1vBhPWy9b",
"9M7vM7P_o_0",
"QEVurFrSoli",
"I_DeUyBFv32",
"LNHhEuh8CjK",
"4T0dgbPBQCI",
"ScCPg7E2L6",
"FMamCEt_YpC",
"UFrJn2jruhN",
"cCHX4Q0z95",
"zPuPkw56J4N"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your reply, we did not update the version as we will resubmit our paper to another conference, taking into account your comments of course.",
" > We will take these comments into account in the next version. In particular, we will remove the strong statements about the \n> interpretability:\n>\... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
3,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
3
] | [
"l0dMtI9EwR",
"9M7vM7P_o_0",
"ScCPg7E2L6",
"QEVurFrSoli",
"I_DeUyBFv32",
"zPuPkw56J4N",
"cCHX4Q0z95",
"UFrJn2jruhN",
"FMamCEt_YpC",
"iclr_2022_jJJWwrMrEsx",
"iclr_2022_jJJWwrMrEsx",
"iclr_2022_jJJWwrMrEsx",
"iclr_2022_jJJWwrMrEsx"
] |
iclr_2022_9Vimsa_gGG5 | Initializing ReLU networks in an expressive subspace of weights | Using a mean-field theory of signal propagation, we analyze the evolution of correlations between two signals propagating forward through a deep ReLU network with correlated weights. Signals become highly correlated in deep ReLU networks with uncorrelated weights. We show that ReLU networks with anti-correlated weights can avoid this fate and have a chaotic phase where the signal correlations saturate below unity. Consistent with this analysis, we find that networks initialized with anti-correlated weights can train faster by taking advantage of the increased expressivity in the chaotic phase. An initialization scheme combining this with a previously proposed strategy of using an asymmetric initialization to reduce dead node probability shows consistently lower training times compared to various other initializations on synthetic and real-world datasets. Our study suggests that use of initial distributions with correlations in them can help in reducing training time. | Reject | The paper proposes an initialization method to initialize residual networks in an expressive subspace of weights. Although the reviewers highlighted some positive aspects, they found the contribution to be limited compared to prior work. Some reviewers also raised some concerns regarding the experimental results not backing up the claims made in the paper. The authors did not respond, so I can therefore not recommend acceptance. This will hopefully provide useful feedback for a potential revision. | train | [
"welECClKAU3",
"UZOJDmJ4wx-",
"wRzqxsLt_M8",
"q7EotLnUPsn"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose an initialization method called random asymmetric anti-correlated initialization (RAAI), which initializes residual networks in an expressive subspace of weights. RAAI is a combined initialization of anti-correlated initialization (ACI) and random asymmetric initialization (RAI), where the last... | [
5,
5,
1,
3
] | [
4,
3,
3,
3
] | [
"iclr_2022_9Vimsa_gGG5",
"iclr_2022_9Vimsa_gGG5",
"iclr_2022_9Vimsa_gGG5",
"iclr_2022_9Vimsa_gGG5"
] |
iclr_2022_bpUHBc9HCU8 | A General Unified Graph Neural Network Framework Against Adversarial Attacks | Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs. However, they are reported to be vulnerable to adversarial attacks, raising numerous concerns about applying them in some risk-sensitive domains. Therefore, it is essential to develop a robust GNN model to defend against adversarial attacks. Existing studies address this issue by only considering cleaning the perturbed graph structure, and almost none of them simultaneously consider denoising features. As the graph and features are interrelated and influence each other, we propose a General Unified Graph Neural Network (GUGNN) framework to jointly clean the graph and denoise the features of data. On this basis, we further extend it by introducing two operations and develop a robust GNN model (R-GUGNN) to defend against adversarial attacks. One operation is reconstructing the graph with its intrinsic properties, including the similarity of two adjacent nodes’ features, the sparsity of real-world graphs, and the fact that many slight noises have small eigenvalues in perturbed graphs. The other is the convolution operation for features to find the optimal solution, adopting the Laplacian smoothness and the prior knowledge that nodes with many neighbors are difficult to attack. Experiments on four real-world datasets demonstrate that R-GUGNN greatly improves the overall robustness over the state-of-the-art baselines. | Reject | The paper studies a robust GNN against adversarial attacks on both graph structure and node features.
The reviewers agree that the paper needs to improve its novelty and provide more technical details to meet the ICLR standard. | test | [
"lQ0U7vMbQPQ",
"Mm-RRj0XPn",
"1OLnbWS7xX",
"wlFuHmEEfcR",
"-sHztm1DBFR",
"IJg-7TOnLPE",
"FS0q-XL9U50",
"OLA9Z-pa-nN",
"AeVC2M1al0Z",
"U0CzhZKCffC"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your reply. I appreciate the positivity in your rebuttal and will look forward to an updated model and its corresponding results. ",
" Thank you for your reply. I am looking forward to an updated version!",
" Thank you for your comments. We refer to some contents of UGNN and Pro-GNN. As the perf... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
3,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5,
4
] | [
"1OLnbWS7xX",
"wlFuHmEEfcR",
"U0CzhZKCffC",
"AeVC2M1al0Z",
"OLA9Z-pa-nN",
"FS0q-XL9U50",
"iclr_2022_bpUHBc9HCU8",
"iclr_2022_bpUHBc9HCU8",
"iclr_2022_bpUHBc9HCU8",
"iclr_2022_bpUHBc9HCU8"
] |
iclr_2022_zFlFjoyOW-z | Interest-based Item Representation Framework for Recommendation with Multi-Interests Capsule Network | Item representation plays an important role in recommendation domains such as e-commerce, news, video, etc. It has been used by retrieval and ranking models to capture user-item relationships based on user behaviors. For recommendation systems, user interaction behaviors imply single or multiple interests of the user, not only the items themselves in the sequences. Existing representation learning methods mainly focus on optimizing item-based mechanisms between user interaction sequences and candidate items (especially attention mechanisms and sequential modeling). However, item representations learned by these methods lack a modeling mechanism to reflect user interests. That is, the methods may be less effective and indirect in capturing user interests. We propose a framework to learn interest-based item representations directly by introducing a user Multi Interests Capsule Network (MICN). To make the framework model-agnostic, the user Multi Interests Capsule Network is designed as an auxiliary task to jointly learn item-based item representations and interest-based item representations. Hence, the generic framework can be easily used to improve existing recommendation models without model redesign. The proposed approach is evaluated on multiple types of benchmarks. Furthermore, we investigate several situations on various deep neural networks, different lengths of behavior sequences, and the joint learning ratio of interest-based item representations. Experiments show a great enhancement in the performance of various recommendation models and validate our approach. We expect the framework could be widely used for recommendation systems. | Reject | This paper proposes a joint learning approach which combines item-based representations and interest-based representations to improve recommender systems. Overall the scores are negative among all the reviewers.
The reviewers acknowledge that the proposed approach provides a simple yet effective way to improve the existing item-based representations. However, all the reviewers pointed out concerns around the motivation and limited novelty (the proposed approach mostly combines a few existing approaches together without careful examination/exploration in the experiments). Furthermore, the baselines considered in the paper are on the relatively weak side. The authors didn't provide any response. Therefore, I vote for reject. | train | [
"rmKQ-voUjqt",
"cDMefmXhZmG",
"fdNBTmdwM_",
"3QWYWqlaZYg"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes an interest-based item embedding representation to enhance the performance of recommendation models by jointly learning item-based item representations and interest-based item representations. Strength:\n1. Novel model to jointly learn item-based and interest-based representation of the item ... | [
3,
3,
1,
3
] | [
3,
4,
4,
5
] | [
"iclr_2022_zFlFjoyOW-z",
"iclr_2022_zFlFjoyOW-z",
"iclr_2022_zFlFjoyOW-z",
"iclr_2022_zFlFjoyOW-z"
] |
iclr_2022_TscS0R8QzfG | PDAML: A Pseudo Domain Adaptation Paradigm for Subject-independent EEG-based Emotion Recognition | Domain adaptation (DA) and domain generalization (DG) methods have been successfully adopted to alleviate the domain shift problem caused by the subject variability of EEG signals in subject-independent affective brain-computer interfaces (aBCIs). Usually, the DA methods give more promising results than the DG methods but require additional computation resources each time a new subject arrives. In this paper, we first propose a new paradigm called Pseudo Domain Adaptation (PDA), which is more suitable for subject-independent aBCIs. Then we propose the pseudo domain adaptation via meta-learning (PDAML) based on PDA. The PDAML consists of a feature extractor, a classifier, and a sum-decomposable structure called the domain shift governor. We prove that a network with a sum-decomposable structure can compute the divergence between different domains effectively in theory. By taking advantage of adversarial learning and meta-learning, the governor helps PDAML quickly generalize to a new domain using the target data through a few self-adaptation steps in the test phase. Experimental results on the public aBCIs dataset demonstrate that our proposed method not only avoids the additional computation resources of the DA methods but also reaches a generalization performance similar to that of the state-of-the-art DA methods. | Reject | The experimental part of the work has been reported by all reviewers as too limited and not convincing enough.
At this point this work cannot be endorsed for publication at ICLR. | train | [
"UhAjcVG_Rbf",
"s3UWZscslK2",
"icC-UTzKL-_",
"UzeN-IEg0P0"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors present an incremental improvement to previously published approaches by Zhao et al., and their SEED dataset, which is currently not available, since a link provided in the manuscript lands on an error page (see below), so the provided code is also unfortunately useless: http://bcmi.sjtu.edu.cn/∼seed/ind... | [
5,
5,
3,
3
] | [
5,
5,
5,
4
] | [
"iclr_2022_TscS0R8QzfG",
"iclr_2022_TscS0R8QzfG",
"iclr_2022_TscS0R8QzfG",
"iclr_2022_TscS0R8QzfG"
] |
iclr_2022_ROpoUxw23oP | Differentiable Hyper-parameter Optimization | Hyper-parameters are widely present in machine learning.
Concretely, a large number of hyper-parameters exist in network layers, such as the kernel size, channel size, and hidden layer size, which directly affect the performance of the model.
Thus, hyper-parameter optimization is crucial for machine learning. Current hyper-parameter optimization always requires multiple training sessions, resulting in high time consumption.
To solve this problem, in this paper we propose a method to efficiently fine-tune a neural network's hyper-parameters, where optimization completes in only one training session.
We apply our method to the optimization of various neural network layers' hyper-parameters and compare it with multiple benchmark hyper-parameter optimization models.
Experimental results show that our method is commonly 10 times faster than traditional and mainstream methods such as random search, Bayesian optimization and many other state-of-the-art models. It also achieves higher-quality hyper-parameters with better accuracy and stronger stability. | Reject | The paper presents a gradient-based hyperparameter optimization method, wherein a differentiable reparameterization is proposed for various popular CNN hyperparameters including kernel size, number of channels and hidden layer size.
All reviewers have pointed out the lack of novelty (such reparameterizations are standard) and lack of convincing experiments.
The authors didn't write any rebuttal.
Overall, there is a large consensus among the reviewers that this paper is not ready for publication at ICLR. | train | [
"hPTgQy8n-XV",
"BGIKel-ZFZZ",
"k6MQg-k9Txi",
"BjKUyY7YDbn"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"\nThis paper introduces a method for hyper-parameter optimization (HPO) for deep neural networks. Its main idea is to replace hyper-parameters by trainable parameters, which can be included in the training process of the network itself.\nThe proposed method is applied to hyper-parameter tuning for two types of ne... | [
5,
3,
3,
1
] | [
3,
5,
4,
5
] | [
"iclr_2022_ROpoUxw23oP",
"iclr_2022_ROpoUxw23oP",
"iclr_2022_ROpoUxw23oP",
"iclr_2022_ROpoUxw23oP"
] |
iclr_2022_SVcEx6SC_NL | Adversarial Robustness as a Prior for Learned Representations | A common goal in deep learning is to learn versatile, high-level feature representations of input data. However, standard networks' representations seem to possess shortcomings that, as we illustrate, prevent them from fully realizing this goal. In this work, we show that robust optimization can be re-cast as a tool for enforcing priors on the features learned by deep neural networks. It turns out that representations learned by robust models address the aforementioned shortcomings and make significant progress towards learning a high-level encoding of inputs. In particular, these representations are approximately invertible, while allowing for direct visualization and manipulation of salient input features. More broadly, our results indicate adversarial robustness as a promising avenue for improving learned representations. | Reject | The paper looks at the favorable properties of feature representations of an adversarially robust model, which are interesting but not surprising, especially given that much existing literature has already discussed this. All reviewers gave negative scores. The main issues are: 1) The paper only provides experimental demonstration of this phenomenon without going into a more detailed explanation of the phenomenon. This is not enough when the observations in question are not very novel and have already been explored in various forms in past published literature. 2) Limited novelty, since the current submission does not introduce a new approach or algorithm or theoretical results. The paper also lacks comparison/discussion of recent works. Thus, I cannot recommend accepting the paper to ICLR. | train | [
"LiY420eEDtp",
"38u9tLBbQTv",
"pqKMCOzCDkI",
"dUYgmF4Z5fn"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper looks at favorable properties of feature representations of an adversarially robust model. In particular, the authors look at a model trained with PGD training with an $\\ell_p$ adversary. In terms of favourable properties, the authors look at representation inversion and feature manipulation and with e... | [
3,
5,
5,
3
] | [
5,
2,
4,
4
] | [
"iclr_2022_SVcEx6SC_NL",
"iclr_2022_SVcEx6SC_NL",
"iclr_2022_SVcEx6SC_NL",
"iclr_2022_SVcEx6SC_NL"
] |
iclr_2022_LQnyIk5dUA | ZeroSARAH: Efficient Nonconvex Finite-Sum Optimization with Zero Full Gradient Computations | We propose ZeroSARAH -- a novel variant of the variance-reduced method SARAH (Nguyen et al., 2017) -- for minimizing the average of a large number of nonconvex functions $\frac{1}{n}\sum_{i=1}^{n}f_i(x)$. To the best of our knowledge, in this nonconvex finite-sum regime, all existing variance-reduced methods, including SARAH, SVRG, SAGA and their variants, need to compute the full gradient over all $n$ data samples at the initial point $x^0$, and then periodically compute the full gradient once every few iterations (for SVRG, SARAH and their variants). Note that SVRG, SAGA and their variants typically achieve weaker convergence results than variants of SARAH: $n^{2/3}/\epsilon^2$ vs. $n^{1/2}/\epsilon^2$. Thus we focus on the variant of SARAH. The proposed ZeroSARAH and its distributed variant D-ZeroSARAH are the \emph{first} variance-reduced algorithms which \emph{do not require any full gradient computations}, not even for the initial point. Moreover, for both standard and distributed settings, we show that ZeroSARAH and D-ZeroSARAH obtain new state-of-the-art convergence results, which can improve the previous best-known result (given by e.g., SPIDER, SARAH, and PAGE) in certain regimes. Avoiding any full gradient computations (which are time-consuming steps) is important in many applications as the number of data samples $n$ usually is very large. Especially in the distributed setting, periodic computation of full gradient over all data samples needs to periodically synchronize all clients/devices/machines, which may be impossible or unaffordable. Thus, we expect that ZeroSARAH/D-ZeroSARAH will have a practical impact in distributed and federated learning where full device participation is impractical. 
| Reject | There were many discussions among the reviewers for this paper, and eventually none of the reviewers (including the one who gave the most positive score) was willing to support its publication.
Some concerns from the reviewers are as follows:
1. Missing the discussion on storage cost.
2. The improvement is limited. $G_0$ must be small and independent of $n$, hence it is not clear if it is possible to give a fair comparison between the current complexity and previous best complexity.
3. Missing the discussion on the case when $n \leq \mathcal{O}(\varepsilon^{-4})$ of the state-of-the-art results.
4. The complexity results in terms of $\varepsilon$ require $\varepsilon$ to be arbitrarily small. The authors should also discuss this point when comparing with their result.
5. Some other statements in the paper are overclaimed.
Please take the comments and suggestions from the reviewers into careful consideration when revising the paper for future venues, since they raised valid points. | train | [
"vo5ZhdP1jFg",
"Dl0K4rvBOOV",
"7NdEwC8NIpF",
"Np7V6s5mdQr",
"dGBVdRfk0cZ",
"aPWoXXgVD0d",
"Ys0pJbaEVSl",
"EM_Dx8cKsA-",
"zryESZaBgX3",
"Kqb4V4lw9n",
"HT5JXxGL28",
"t_c-oPG3Kd",
"ocCR4wHk6JG",
"7jL9_gacc-D",
"Go488VdsqiA",
"SvHaDYV65J7",
"AuWAZuXr4qr"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your helpful feedback.\n**Yes. Our ZeroSARAH achieves much better convergence results than SAGA.** The result of nonconvex SAGA can be found in e.g. [1]. \n\nAs we pointed out in the Abstract and Introduction: \n> Note that SVRG, SAGA and their variants typically achieve weaker convergence results tha... | [
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
8
] | [
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
2
] | [
"Dl0K4rvBOOV",
"7jL9_gacc-D",
"dGBVdRfk0cZ",
"iclr_2022_LQnyIk5dUA",
"HT5JXxGL28",
"Ys0pJbaEVSl",
"t_c-oPG3Kd",
"zryESZaBgX3",
"ocCR4wHk6JG",
"iclr_2022_LQnyIk5dUA",
"Np7V6s5mdQr",
"SvHaDYV65J7",
"Go488VdsqiA",
"AuWAZuXr4qr",
"iclr_2022_LQnyIk5dUA",
"iclr_2022_LQnyIk5dUA",
"iclr_2022... |
iclr_2022_eELR-4Dk4U8 | Model-based Reinforcement Learning with a Hamiltonian Canonical ODE Network | Model-based reinforcement learning usually suffers from a high sample complexity in training the world model, especially for the environments with complex dynamics. To make the training for general physical environments more efficient, we introduce Hamiltonian canonical ordinary differential equations into the learning process, which inspires a novel model of neural ordinary differential auto-encoder (NODA). NODA can model the physical world by nature and is flexible to impose Hamiltonian mechanics (e.g., the dimension of the physical equations) which can further accelerate training of the environment models. It can consequentially empower an RL agent with the robust
extrapolation using a small number of samples, as well as a guarantee of physical plausibility. Theoretically, we prove that NODA has uniform bounds for multi-step transition errors and value errors under certain conditions. Extensive experiments show that NODA can learn the environment dynamics effectively with a high sample efficiency, making it possible to facilitate reinforcement learning agents at the early stage. | Reject | The reviewers raised concerns and the authors have not provided a response. All reviewers concur that this paper should be rejected at this time, and I agree. | train | [
"nSK82lSmaXW",
"iaXgGFWxkA",
"v7GwgpaOLC"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a variant of a Hamiltonian neural network for learning dynamics models for model-based RL. The proposed network architecture uses encoders/decoders to map the system state x = [q, p] from the observations s and vice versa. In the latent state, a standard HNN is used to compute dx/dt to increment... | [
3,
3,
3
] | [
5,
3,
4
] | [
"iclr_2022_eELR-4Dk4U8",
"iclr_2022_eELR-4Dk4U8",
"iclr_2022_eELR-4Dk4U8"
] |
iclr_2022_KLh86DknDj7 | Discovering Classification Rules for Interpretable Learning with Linear Programming | Rules embody a set of if-then statements which include one or more conditions to classify a subset of samples in a dataset. In various applications such classification rules are considered to be interpretable by the decision makers. We introduce two new algorithms for interpretability and learning. Both algorithms take advantage of linear programming, and hence, they are scalable to large data sets. The first algorithm extracts rules for interpretation of trained models that are based on tree/rule ensembles. The second algorithm generates a set of classification rules through a column generation approach. The proposed algorithms return a set of rules along with their optimal weights indicating the importance of each rule for classification. Moreover, our algorithms allow assigning cost coefficients, which could relate to different attributes of the rules, such as; rule lengths, estimator weights, number of false negatives, and so on. Thus, the decision makers can adjust these coefficients to divert the training process and obtain a set of rules that are more appealing for their needs. We have tested the performances of both algorithms on a collection of datasets and presented a case study to elaborate on optimal rule weights. Our results show that a good compromise between interpretability and accuracy can be obtained by the proposed algorithms. | Reject | The reviewers are rather critical about the paper and the authors did not take a part in the discussion phase. Let me also add that the paper ignores a vast number of papers dealing with a similar problem. The column generation algorithm is a core of LPBooting also used for rule learning ("Rule Learning with Monotonicity Constraints", "The Linear Programming Set Covering Machine"). There are many other papers also using linear relaxation of integer programming to build rule models. 
Logical Analysis of Data is also a well-known method close to such approaches. There are also plenty of other rule learning systems that should be compared in the experimental study, such as Ripper, Slipper, MLRules, or Ender (to mention only a few). | train | [
"nW3TE6PvsxH",
"hhC9GLyLO0A",
"yUZnX4dgRK4",
"mCvzMlKdL0s"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Authors and Reviewers,\n\nLet me first thank you for supporting ICLR 2022: Authors for submitting their contributions, and Reviewers for going through them and sending their comments and remarks!\n\nAs the discussion phase has just begun, let me ask Authors to answer all questions appearing in reviews and to... | [
-1,
3,
6,
3
] | [
-1,
2,
3,
3
] | [
"iclr_2022_KLh86DknDj7",
"iclr_2022_KLh86DknDj7",
"iclr_2022_KLh86DknDj7",
"iclr_2022_KLh86DknDj7"
] |
iclr_2022_5ueTHF0yAlZ | Improving greedy core-set configurations for active learning with uncertainty-scaled distances | We scale perceived distances of the core-set algorithm by a factor of uncertainty and search for low-confidence configurations, finding significant improvements in sample efficiency across CIFAR10/100 and SVHN image classification, especially in larger acquisition sizes. We show the necessity of our modifications and explain how the improvement is due to a probabilistic quadratic speed-up in the convergence of core-set loss, under assumptions about the relationship of model uncertainty and misclassification. | Reject | The paper presents an improvement to the core-set active learning algorithm by leveraging distance measures weighted by uncertainty scores and using beam search instead of greedy search.
The reviewers agreed that the paper provides a nice theoretical analysis and motivation for the proposal, as well as an ablation that shows the proposal indeed empirically outperforms the original core-set algorithm. However, the reviewers also agreed that additional important comparisons would make the paper more convincing, including Bayesian core-set algorithms as well as other recent proposals based on the original core-set algorithm. | train | [
"Xr3PHv9MKq1",
"Iqk1yFlwN6",
"2hL3Pt3O-II",
"hhdc80Cuwmx",
"G1aW3Psnk_r",
"By0gCcpBAh3",
"Z7CA5_TbZl",
"RhHiurNX0Fn",
"R9JLTi8CN2"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you to the authors for providing answers to a number of my questions. My concerns (re experimental results and others) remain given that there have been no further addition to the paper, and retain my score.",
" Our original inquiry and experimental designs intended only to show significant improvement ov... | [
-1,
-1,
-1,
-1,
-1,
3,
8,
5,
3
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"Iqk1yFlwN6",
"RhHiurNX0Fn",
"R9JLTi8CN2",
"Z7CA5_TbZl",
"By0gCcpBAh3",
"iclr_2022_5ueTHF0yAlZ",
"iclr_2022_5ueTHF0yAlZ",
"iclr_2022_5ueTHF0yAlZ",
"iclr_2022_5ueTHF0yAlZ"
] |
iclr_2022_jGmNTfiXwGb | Learning Predictive, Online Approximations of Explanatory, Offline Algorithms | In this work, we introduce a general methodology for approximating offline algorithms in online settings. By encoding the behavior of offline algorithms in graphs, we train a multi-task learning model to simultaneously detect behavioral structures which have already occurred and predict those that may come next. We demonstrate the methodology on both synthetic data and historical stock market data, where the contrast between explanation and prediction is particularly stark. Taken together, our work represents the first general and end-to-end differentiable approach for generating online approximations of offline algorithms. | Reject | ## A Brief Summary
This paper trains an online model, which can only view the past time-series, to approximate offline algorithms that can see the entire time-series. This is done by using the offline algorithm to provide discrete class targets for training the online model. The paper presents results on synthetic and historical stock market data.
## Reviewer s1H9
**Strengths:**
- Practical problem.
- Novel approach.
- Clear presentation.
**Weaknesses:**
- No other baselines.
- No theoretical guarantees behind the approach.
- Writing could be improved.
## Reviewer EgW9
**Strengths:**
- Clear writing.
- Interesting research direction.
**Weaknesses:**
- The primary claim seems incorrect and unclear.
- Due to the lack of clarity about the paper's primary claim, it is difficult to evaluate the paper.
- Lack of baselines.
- The lack of discussions of the related works.
## Reviewer gii5
**Strengths:**
- Interesting and novel approach.
**Weaknesses:**
- Difficult to evaluate, with no empirical baselines or theoretical evidence.
- The datasets used in the paper have not been used in the literature before. The authors should provide experimental results on datasets from the literature as well.
- The paper needs to compare against the other baselines discussed in the related works.
- More ablations and analysis on the proposed algorithm is required.
- Unsubstantiated claims regarding being SOTA on the task, since the paper doesn't compare against any other baselines on these datasets.
- The paper can be restructured to improve the flow and clarity.
## Reviewer zoKR
**Strengths:**
- Novel and interesting research topic.
- Bridging classical algorithms and ML.
- Clearly written.
**Weaknesses:**
- Lack of motivation for the problem.
- The approach only works with offline algorithms that work on time-segmented data.
## Reviewer aaFn
**Strengths:**
- Novel algorithm.
**Weaknesses:**
- Potentially overfitting to the offline data.
- Data hungry approach.
- Confusion related to the occurrence moments of predicted future actions.
- Section 2 is difficult to understand.
## Key Takeaways and Thoughts
Overall, I think the problem setup is very interesting. However, as pointed out by reviewers gii5 and EgW9, due to the lack of baselines, it is tough to compare the proposed algorithm against other approaches, making this paper's evaluation challenging. I would recommend that the authors include more ablations and baselines in a future version of the paper, and address the other issues pointed out above by the reviewers. | test | [
"cX04F6N4mVs",
"yIIn2AP2uCW",
"svrhrKwHTxQ",
"-anyJVDAJEU",
"UNh70JQo6Li",
"b5cRDvUGPn",
"rZBUYWyIpNA",
"17eT71OxJp0",
"mVZHAUeCOGo",
"x7mFpGo4jdI",
"uaqzyl2Vh6"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response. As there has been nothing added, my score will not change. However, I again strongly encourage the authors to make the changes they've outlined above, as that is necessary for considering acceptance of the paper.",
"This paper considers the problem of an offline algorithm that opera... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
5,
3,
3,
6
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
2,
4,
4,
2
] | [
"UNh70JQo6Li",
"iclr_2022_jGmNTfiXwGb",
"uaqzyl2Vh6",
"yIIn2AP2uCW",
"x7mFpGo4jdI",
"mVZHAUeCOGo",
"17eT71OxJp0",
"iclr_2022_jGmNTfiXwGb",
"iclr_2022_jGmNTfiXwGb",
"iclr_2022_jGmNTfiXwGb",
"iclr_2022_jGmNTfiXwGb"
] |
iclr_2022_7kOsYRp4EmB | Improving Meta-Continual Learning Representations with Representation Replay | Continual learning often suffers from catastrophic forgetting. Recently, meta-continual learning algorithms use meta-learning to learn how to continually learn. A recent state-of-the-art is online aware meta-learning (OML). This can be further improved by incorporating experience replay (ER) into its meta-testing. However, the use of ER only in meta-testing but not in meta-training suggests that the model may not be optimally meta-trained. In this paper, we remove this inconsistency in the use of ER and improve continual learning representations by integrating ER also into meta-training. We propose to store the samples' representations, instead of the samples themselves, into the replay buffer. This ensures the batch nature of ER does not conflict with the online-aware nature of OML. Moreover, we introduce a meta-learned sample selection scheme to replace the widely used reservoir sampling to populate the replay buffer. This allows the most significant samples to be stored, rather than relying on randomness. Class-balanced modifiers are further added to the sample selection scheme to ensure each class has sufficient samples stored in the replay buffer. Experimental results on a number of real-world meta-continual learning benchmark data sets demonstrate that the proposed method outperforms the state-of-the-art. Moreover, the learned representations have better clustering structures and are more discriminative. | Reject | Addressing the problem of catastrophic forgetting in continual learning, this paper extends OML to use experience replay (ER) during training, instead of the original approach which uses ER during test phase only. The paper proposes a policy for samples replacement from the reservoir. Experiments show the superiority of the approach in three standard benchmarks compared with several baselines.
Reviewers were unanimously concerned that the technical contribution of the paper is not sufficient. The authors addressed several issues, including experiments to compare with additional baselines, but the technical novelty remains limited for an ICLR publication.
The paper cannot be accepted in its current form. | train | [
"oHLJAKDjLzJ",
"U7SW3WYE15",
"oSx5wkveLvm",
"8QXD5v2d7Uy",
"nnfXfcfV59Z",
"z82lPiR0RV",
"AKdM6NV81XZ",
"_Cqf_jiyjn4",
"yPvRNZScca8",
"Hlv2tvm7iQu",
"bWSgdApriz",
"0u3GHCg6Kum"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper extends the online meta-learning (OML) model with the ER during the meta-training. Also, the paper proposes a replacement buffer policy for samples replacement from the reservoir. Instead of storing the raw samples, since the backbone model is static during the meta-test, it's better to store the feature... | [
5,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5
] | [
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2022_7kOsYRp4EmB",
"Hlv2tvm7iQu",
"yPvRNZScca8",
"iclr_2022_7kOsYRp4EmB",
"z82lPiR0RV",
"8QXD5v2d7Uy",
"0u3GHCg6Kum",
"iclr_2022_7kOsYRp4EmB",
"bWSgdApriz",
"oHLJAKDjLzJ",
"iclr_2022_7kOsYRp4EmB",
"iclr_2022_7kOsYRp4EmB"
] |
iclr_2022__B8Jd7Nqs7R | Improved Generalization Bound for Deep Neural Networks Using Geometric Functional Analysis | Understanding how a neural network behaves in multiple domains is the key to further its explainability, generalizability, and robustness. In this paper, we prove a novel generalization bound using the fundamental concepts of geometric functional analysis. Specifically, by leveraging the covering number of the training dataset and applying certain geometric inequalities we show that a sharp bound can be obtained. To the best of our knowledge this is the first approach which utilizes covering numbers to estimate such generalization bounds. | Reject | The paper provides a new geometric functional analysis perspective for the generalization bounds for neural networks. As the AC, I actually quite liked the twist the authors are providing for this particular work. Unfortunately, the current presentation is too crude to provide an elementary picture for the developments and I strongly encourage the authors to revise the paper for the next deadline based on the remarks from the reviewers. | train | [
"nUud5o6M9cO",
"meqdsUQa5-",
"WydK4YN3qXP",
"xoVQbZeWKff",
"OidUeuH_nlh",
"p-EM-maAdqk",
"bmJ2rhVHTqw",
"8NYm8ABJmxC"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Firstly , we are thankful to the reviewer for carefully going through the paper regarding the approximation part , it is true that for unbounded input approximation might not hold true , however in reality when dealing with practical data sets the data points are bounded and finite so we can assume that all dat... | [
-1,
-1,
-1,
3,
6,
1,
3,
5
] | [
-1,
-1,
-1,
3,
3,
4,
4,
4
] | [
"OidUeuH_nlh",
"bmJ2rhVHTqw",
"8NYm8ABJmxC",
"iclr_2022__B8Jd7Nqs7R",
"iclr_2022__B8Jd7Nqs7R",
"iclr_2022__B8Jd7Nqs7R",
"iclr_2022__B8Jd7Nqs7R",
"iclr_2022__B8Jd7Nqs7R"
] |
iclr_2022_Pobz_8y2Q2_ | BANANA: a Benchmark for the Assessment of Neural Architectures for Nucleic Acids | Machine learning has always played an important role in bioinformatics and recent applications of deep learning have allowed solving a new spectrum of biologically relevant tasks.
However, there is still a gap between the ``mainstream'' AI and the bioinformatics communities. This is partially due to the format of bioinformatics data, which are typically difficult to process and adapt to machine learning tasks without deep domain knowledge.
Moreover, the lack of standardized evaluation methods makes it difficult to rigorously compare different models and assess their true performance.
To help to bridge this gap, and inspired by work such as SuperGLUE and TAPE, we present BANANA, a benchmark consisting of six supervised classification tasks designed to assess language model performance in the DNA and RNA domains. The tasks are defined over three genomics and one transcriptomics languages (human DNA, bacterial 16S gene, nematoda ITS2 gene, human mRNA) and measure a model's ability to perform whole-sequence classification in a variety of setups.
Each task was built from readily available data and is presented in a ready-to-use format, with defined labels, splits, and evaluation metrics.
We use BANANA to test state-of-the-art NLP architectures, such as Transformer-based models, observing that, in general, self-supervised pretraining without external corpora is beneficial in every task. | Reject | While all reviewers applaud the motivation to bridge the gap between machine learning and bioinformatics communities, they also raise a number of concerns regarding the choice of tasks and of baselines, and about the accuracy in their description. They feel the paper is not ready to be published in its current form, and we hope that their comments will help the authors prepare a revised version for the future. | train | [
"SeI-ig2SZOz",
"dfOFaWlAbSo",
"BGJ9qt-I0lA",
"Ll4CadFNwyS",
"Trcv1kFM_gV",
"NZ6nFBeMtEU",
"vc2OJ7987LB",
"PQXNsZsh7w_",
"qOUcDJeNgan",
"mVCyJ1n_j0",
"vMNHz3ylLAj",
"eT61PJRmpv8"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This contribution introduces a benchmark for the evaluation of neural\nnetworks for supervised classification of DNA and RNA molecules. The\nbenchmark consists of six datasets covering DNA and RNA as well as\ndifferent learning tasks (binary classification, multiclass,\nhierarchical), sequence lengths and sample s... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
5,
5
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
3
] | [
"iclr_2022_Pobz_8y2Q2_",
"PQXNsZsh7w_",
"Trcv1kFM_gV",
"eT61PJRmpv8",
"vMNHz3ylLAj",
"mVCyJ1n_j0",
"qOUcDJeNgan",
"SeI-ig2SZOz",
"iclr_2022_Pobz_8y2Q2_",
"iclr_2022_Pobz_8y2Q2_",
"iclr_2022_Pobz_8y2Q2_",
"iclr_2022_Pobz_8y2Q2_"
] |
iclr_2022_-e7awdzWsOc | Towards Structured Dynamic Sparse Pre-Training of BERT | Identifying algorithms for computationally efficient unsupervised training of large language models is an important and active area of research.
In this work, we develop and study a straightforward, dynamic always-sparse pre-training approach for BERT language modeling, which leverages periodic compression steps based on magnitude pruning followed by random parameter re-allocation.
This approach enables us to achieve Pareto improvements in terms of the number of floating-point operations (FLOPs) over statically sparse and dense models across a broad spectrum of network sizes.
Furthermore, we demonstrate that training remains FLOP-efficient when using coarse-grained block sparsity, making it particularly promising for efficient execution on modern hardware accelerators. | Reject | All of the reviewers believe the paper should not be accepted, and I concur with their recommendation for the reasons they mention.
Four of the reviewers (vEBH, idrP, KoFV, 5k4c) believe the technique proposed in this paper is not particularly novel. Rather, the novelty is that it is being used on a BERT model rather than the computer vision models that are typically the starting point for pruning work. They also argue that the paper was not particularly thorough in its comparison to other pruning techniques (specifically dynamic sparsity techniques), which is essential for pruning work given how crowded and noisy the space is. Finally, they rightfully note that the paper does not look at the real-world speedups attainable on conventional hardware (GPUs and TPUs), the latter of which has no support for sparsity and the former of which (NVIDIA Ampere) has limited support for specific kinds of sparsity and especially limited support for sparse training.
The reviewers also raised several more specific methodological issues with evaluation (e.g., using the MLM loss rather than fine-tuning as a basis for evaluation), but the above concerns alone were enough to convince me that the paper does not merit acceptance at this time. | train | [
"_aohDBznts",
"BkmwY90ffDg",
"6fJBYBu_AWp",
"O9eV2YLWxpC",
"kNPjXlhlHNX",
"YqygIYdFdsG",
"28G_j2M01Hz",
"_yGVtkLaZ7X",
"wxPBYGcj3tR",
"mu4K90r0Nc"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your thoughtful review and constructive comments. \n\n> The novelty of this work is a bit limited. There is no new algorithm proposed to get better results for this sparse training of BERT. \n\nWhile the techniques we employ are known, we believe the successful application of these techniques to lar... | [
-1,
-1,
-1,
-1,
-1,
3,
5,
3,
3,
6
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4,
2
] | [
"mu4K90r0Nc",
"wxPBYGcj3tR",
"_yGVtkLaZ7X",
"28G_j2M01Hz",
"YqygIYdFdsG",
"iclr_2022_-e7awdzWsOc",
"iclr_2022_-e7awdzWsOc",
"iclr_2022_-e7awdzWsOc",
"iclr_2022_-e7awdzWsOc",
"iclr_2022_-e7awdzWsOc"
] |
iclr_2022_uHq5rHHektz | Contextual Fusion For Adversarial Robustness | Mammalian brains handle complex reasoning tasks in a gestalt manner by integrating information from regions of the brain that are specialized to individual sensory modalities. This allows for improved robustness and better generalization ability. In contrast, deep neural networks are usually designed to process one particular information stream and susceptible to various types of adversarial perturbations. While many methods exist for detecting and defending against adversarial attacks, they do not generalize across a range of attacks and negatively affect performance on clean, unperturbed data. We developed a fusion model using a combination of background and foreground features extracted in parallel from Places-CNN and Imagenet-CNN. We tested the benefits of the fusion approach on preserving adversarial robustness for human perceivable (e.g., Gaussian blur) and network perceivable (e.g., gradient-based) attacks for CIFAR-10 and MS COCO data sets. For gradient based attacks, our results show that fusion allows for significant improvements in classification without decreasing performance on unperturbed data and without need to perform adversarial retraining. Our fused model revealed improvements for Gaussian blur type perturbations as well. The increase in performance from fusion approach depended on the variability of the image contexts; larger increases were seen for classes of images with larger differences in their contexts. We also demonstrate the effect of regularization to bias the classifier decision in the presence of a known adversary. We propose that this biologically inspired approach to integrate information across multiple modalities provides a new way to improve adversarial robustness that can be complementary to current state of the art approaches. | Reject | This manuscript proposes an information fusion approach to improve adversarial robustness. 
Reviewers agree that the problem studied is timely and the approach is interesting. However, they note concerns about the novelty compared to closely related work, the quality of the presentation, and the strength of the evaluated attacks relative to the state of the art, among other issues. There is no rebuttal. | test | [
"4ytoj0x9NVH",
"BTXbtojcXTN",
"NJizVw-fjik",
"aUGuOQujuw"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"In this paper, authors studies the problem of adversarial training and tries to leverage a fusion-based method against adversarial attacks. This method fuses features from foreground and background extracted by pre-trained models and test its performance against both Gaussian blur and gradient-based attacks. The a... | [
3,
1,
3,
3
] | [
4,
5,
4,
4
] | [
"iclr_2022_uHq5rHHektz",
"iclr_2022_uHq5rHHektz",
"iclr_2022_uHq5rHHektz",
"iclr_2022_uHq5rHHektz"
] |
iclr_2022_SbV8J9JHb6 | Soteria: In search of efficient neural networks for private inference | In the context of ML as a service, our objective is to protect the confidentiality of the users’ queries and the server's model parameters, with modest computation and communication overhead. Prior solutions primarily propose fine-tuning cryptographic methods to make them efficient for known fixed model architectures. The drawback with this line of approach is that the model itself is never designed to efficiently operate with existing cryptographic computations. We observe that the network architecture, internal functions, and parameters of a model, which are all chosen during training, significantly influence the computation and communication overhead of a cryptographic method during inference. Thus, we propose SOTERIA — a training method to construct model architectures that are by-design efficient for private inference. We use neural architecture search algorithms with the dual objective of optimizing the accuracy of the model and the overhead of using cryptographic primitives for secure inference. Given the flexibility of modifying a model during training, we find accurate models that are also efficient for private computation. We select garbled circuits as our underlying cryptographic primitive, due to their expressiveness and efficiency. We empirically evaluate SOTERIA on MNIST and CIFAR10 datasets, to compare with the prior work on secure inference. Our results confirm that SOTERIA is indeed effective in balancing performance and accuracy. | Reject | The authors propose a new way of addressing the ML as a service problem through using garbled circuits. As the reviewers point out, the novelty is limited and the comparison to existing work is not complete. The authors have also not responded to the reviews. | train | [
"8Gm4fDhXGdR",
"VW4rPLFVWpD",
"6A2bJwxPYBN"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper develops a method for private neural-network-inference via utilization of Yao's garbled circuits (GC) protocol. In order to keep the computation complexity manageable - the main practical hurdle for GC - the paper proposes utilization of a neural network architecture search, coupled with restricting wei... | [
3,
5,
5
] | [
2,
4,
3
] | [
"iclr_2022_SbV8J9JHb6",
"iclr_2022_SbV8J9JHb6",
"iclr_2022_SbV8J9JHb6"
] |
iclr_2022_WNTscnQd1s | Sparsistent Model Discovery | Discovering the partial differential equations underlying spatio-temporal datasets from very limited and highly noisy observations is of paramount interest in many scientific fields. However, it remains an open question to know when model discovery algorithms based on sparse regression can actually recover the underlying physical processes. In this work, we show the design matrices used to infer the equations by sparse regression can violate the irrepresentability condition (IRC) of the Lasso, even when derived from analytical PDE solutions (i.e. without additional noise). Sparse regression techniques which can recover the true underlying model under violated IRC conditions are therefore required, leading to the introduction of the randomised adaptive Lasso. We show once the latter is integrated within the deep learning model discovery framework DeepMod, a wide variety of nonlinear and chaotic canonical PDEs can be recovered: (1) up to $\mathcal{O}(2)$ higher noise-to-sample ratios than state-of-the-art algorithms, (2) with a single set of hyperparameters, which paves the road towards truly automated model discovery. | Reject | The reviewers and AC all agree that the paper considers an important problem but that several concerns remain which makes the present submission of limited novelty.
We strongly encourage the authors to revise their manuscript to incorporate the reviewers' comments, as this will significantly strengthen their work.
In particular, it will be important to strengthen the theoretical analysis and expand the empirical evaluation, including incorporating an ablation study and considering settings of varying difficulty, noise levels, etc.
"1WeMFbS5kN",
"CsoOA0oKPHa",
"RDgXKsujm0I",
"4MmjR5IxMRa",
"6l5_XMB8LAC",
"3OTdlQKS4Ua",
"ntgDC753Isc",
"iGdQx3KEztr",
"5Ho_9WZLer_"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper discusses the unidentifiability of lasso-based differential equation discovery, and proposes to use a more stable lasso variant (randomized adaptive lasso), which can more accurately identify PDE coefficients. The method is well-motivated and adresses a major bottleneck in differential discovery. The met... | [
3,
-1,
-1,
-1,
-1,
-1,
3,
5,
5
] | [
3,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"iclr_2022_WNTscnQd1s",
"3OTdlQKS4Ua",
"1WeMFbS5kN",
"5Ho_9WZLer_",
"iGdQx3KEztr",
"ntgDC753Isc",
"iclr_2022_WNTscnQd1s",
"iclr_2022_WNTscnQd1s",
"iclr_2022_WNTscnQd1s"
] |
iclr_2022_844kbKgwDL | Predicting subscriber usage: Analyzing multi-dimensional time-series using Convolutional Neural Networks | Companies operating under the subscription model typically invest significant resources attempting to predict customer's feature usage. These predictions can be used to fuel growth: It may allow these companies to target individual customers -- for example to convert non-paying consumers to begin paying for enhanced services -- or to identify customers not maximizing their subscription product.
This assistance can avoid an increase in the churn rate, and for some consumers may increase their usage.
In this work, we develop a deep learning model to predict the product usage of a given consumer, based on historical usage. We adapt a Convolutional Neural Network to time-series data followed by Auxiliary Output, and demonstrate that this enhanced model effectively predicts future change in usage. | Reject | ICLR is selective and reviewers are not sufficiently enthusiastic about this paper. In particular, they point out closely related methods that should be cited and compared to as baselines. The reviews are of good quality, and the authors did not respond. | train | [
"OodaRcqcL_L",
"D7tV-EgOTT",
"1wFNxBYNw3T",
"Q8jOIK9aD3y",
"EVDLeUtT6XS"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This is an application paper about utilizing CNN for multi-dimensional time series from subscriber usage. The authors proposed to use auxiliary layers to produce multi-step predictions. Overall, this paper lacks technical and theoretical novelty. The presentation needs significant improvements and the experiment s... | [
3,
3,
3,
3,
3
] | [
5,
5,
5,
4,
3
] | [
"iclr_2022_844kbKgwDL",
"iclr_2022_844kbKgwDL",
"iclr_2022_844kbKgwDL",
"iclr_2022_844kbKgwDL",
"iclr_2022_844kbKgwDL"
] |
iclr_2022_keeCvPPd3vL | Improved Image Generation via Sparsity | The interest of the deep learning community in image synthesis has grown massively in recent years. Nowadays, deep generative methods, and especially Generative Adversarial Networks (GANs), are leading to state-of-the-art performance, capable of synthesizing images that appear realistic. While the efforts for improving the quality of the generated images are extensive, most attempts still consider the generator part as an uncorroborated ``black-box''. In this paper, we aim to provide a better understanding and design of the image generation process. We interpret existing generators as implicitly relying on sparsity-inspired models. More specifically, we show that generators can be viewed as manifestations of the Convolutional Sparse Coding (CSC) and its Multi-Layered version (ML-CSC) synthesis processes. We leverage this observation by explicitly enforcing a sparsifying regularization on appropriately chosen activation layers in the generator, and demonstrate that this leads to improved image synthesis. Furthermore, we show that the same rationale and benefits apply to generators serving inverse problems, demonstrated on the Deep Image Prior (DIP) method. | Reject | This paper introduces sparse modeling-inspired regularizations to improve deep neural network-based image generators. Experimental results on both (low-resolution) image synthesis and deep image prior-based inverse problems are used to validate the proposed method.
The majority of the reviewers were against the acceptance of the paper. As summarized by Reviewer tsoA: "There are shortcomings in the overall concept as well as its evaluation. The findings suggest that this might be a promising avenue of research, but it would need to be taken further. At present, the paper boils down too much into simply adding a simple regularizer at the end and observing that it somewhat improves some metrics in a limited number of scenarios. Due to the limitations of the evaluation, it remains unclear whether the proposed improvement carries over to state of the art models and datasets. Similarly, the promised elucidation of the purpose of the feature values never really materializes." The AC agrees with that summarization and recommends rejection. | train | [
"DNtDwd4Z1k",
"0Rmg8PSCQ6g",
"ZASFeB8TLkA",
"xjQJyVtrO1V",
"h2YUR-wMEVo",
"A_z2vMMpukG",
"8hz-yFuhT-r",
"k8Q4IUOaIP",
"cD5efAhdQXE",
"uqv938AK9n",
"tSiQaUMLM1X",
"6RfTKpMXMA",
"JcF6vrLEqq"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the response.\n\nThe last layer of image generators is actually a multiplication by a CSC dictionary (these are mathematically equivalent). However, the CSC and the ML-CSC require the signal to be sparse before the multiplication. We analyze existing architectures, trained without any re... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
3
] | [
"xjQJyVtrO1V",
"ZASFeB8TLkA",
"cD5efAhdQXE",
"k8Q4IUOaIP",
"A_z2vMMpukG",
"JcF6vrLEqq",
"6RfTKpMXMA",
"tSiQaUMLM1X",
"uqv938AK9n",
"iclr_2022_keeCvPPd3vL",
"iclr_2022_keeCvPPd3vL",
"iclr_2022_keeCvPPd3vL",
"iclr_2022_keeCvPPd3vL"
] |
iclr_2022_0DecTiJFbm | A New Perspective on Fluid Simulation: An Image-to-Image Translation Task via Neural Networks | Standard numerical methods for creating simulation models in the field of fluid dynamics are designed to be close to perfection, which results in high computational effort and high computation times in many cases. Unfortunately, there is no mathematical way to decrease this correctness in cases where only approximate predictions are needed. For such cases, we developed an approach based on Neural Networks that is much less time-consuming but nearly as accurate as the numerical model for a human observer. We show that we can keep our results stable and nearly indistinguishable from their numerical counterparts over tens to hundreds of time steps. | Reject | The paper formulates fluid simulation as an image to image prediction task and proposes to solve the problem using a cGAN formulation. The objective is to develop fast approximate solutions for the modeling of fluid dynamics, here Navier Stokes for incompressible flows. The images correspond to the discretization of velocity and pressure fields. Experiments are performed on a simulation for a Karman vortex street.
All the reviewers expressed concerns w.r.t. the absence of references and comparisons with closely related work in the recent but abundant literature on NN for modeling PDE dynamics, the lack of novelty and the insufficient experimental design, description and discussion. | train | [
"FtTQXI0sMGZ",
"sULyuLWs2dK",
"vgfkm8cvhfJ",
"9uFwLXkbhey"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper reform the task to solve a PDE into an image translation problem and used cGAN to solve the PDE. Although the authors reformed the problem as image translation, using a cGAN (instead of GAN, or other architecture) to generate PDE's result seems to be a trivial idea. Indeed, explicitly declaring the prob... | [
3,
1,
3,
1
] | [
4,
5,
5,
5
] | [
"iclr_2022_0DecTiJFbm",
"iclr_2022_0DecTiJFbm",
"iclr_2022_0DecTiJFbm",
"iclr_2022_0DecTiJFbm"
] |
iclr_2022_dYUdt59fJ0e | Yformer: U-Net Inspired Transformer Architecture for Far Horizon Time Series Forecasting | Time series data is ubiquitous in research as well as in a wide variety of industrial applications. Effectively analyzing the available historical data and providing insights into the far future allows us to make effective decisions. Recent research has witnessed the superior performance of transformer-based architectures, especially in the regime of far horizon time series forecasting. However, the current state of the art sparse Transformer architectures fail to couple down- and upsampling procedures to produce outputs in a similar resolution as the input. We propose the Yformer model, based on a novel Y-shaped encoder-decoder architecture that (1) uses direct connection from the downscaled encoder layer to the corresponding upsampled decoder layer in a U-Net inspired architecture, (2) Combines the downscaling/upsampling with sparse attention to capture long-range effects, and (3) stabilizes the encoder-decoder stacks with the addition of an auxiliary reconstruction loss. Extensive experiments have been conducted with relevant baselines on four benchmark datasets, demonstrating an average improvement of 19.82, 18.41 percentage MSE and 13.62, 11.85 percentage MAE in comparison to the current state of the art for the univariate and the multivariate settings respectively. | Reject | This paper presents Yformer to perform long sequence time series forecasting based on a Y-shaped encoder-decoder architecture. Inspired by the U-Net architecture, the key idea of this paper is to improve the prediction resolution by employing skip connection and to stabilize the encoder and decoder by reconstructing the recent past. The experiment results on two datasets named ETT and ECL partially showed the effectiveness of the proposed method.
Reviewers have common concerns about the overall technical novelty, presentation quality, and experiment details. The authors only provided a rebuttal to one reviewer and most concerns from the other three reviewers were not addressed in the rebuttal and discussion phase. The final scores were unanimously below the acceptance bar.
AC read the paper and agreed that, while the paper has some merit such as an effective Yformer model for the particular problem setup, the reviewers' concerns are reasonable and need to be addressed in a more convincing way. The weaknesses are quite obvious and will be questioned again by the next set of reviewers, so the authors are required to substantially revise their work before resubmitting. | val | [
"FMwqYlFaw-1",
"DYqiNQ2GbJC",
"jGQFv9vdyaF",
"KSdi0bFXBSV",
"6QIdhdOrg-",
"0c1jBq_qBK-",
"DUZ1PFGSxRz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the detailed feedback on my comments. Although the authors have addressed my comments, unfortunately, I also agree with some of the concerns and comments from the other reviewers. Since the authors did not provide a rebuttal for the other reviewers, I am maintaining my original score.",
"Recent wo... | [
-1,
3,
-1,
5,
-1,
3,
3
] | [
-1,
4,
-1,
3,
-1,
4,
3
] | [
"DYqiNQ2GbJC",
"iclr_2022_dYUdt59fJ0e",
"KSdi0bFXBSV",
"iclr_2022_dYUdt59fJ0e",
"DYqiNQ2GbJC",
"iclr_2022_dYUdt59fJ0e",
"iclr_2022_dYUdt59fJ0e"
] |
iclr_2022_2z5h4hY-LQ | GAETS: A Graph Autoencoder Time Series Approach Towards Battery Parameter Estimation | Lithium-ion batteries are powering the ongoing transportation electrification revolution. Lithium-ion batteries possess higher energy density and favourable electrochemical properties which make it a preferable energy source for electric vehicles. Precise estimation of battery parameters (Charge capacity, voltage etc) is vital to estimate the available range in an electric vehicle. Graph-based estimation techniques enable us to understand the variable dependencies underpinning them to improve estimates. In this paper we employ Graph Neural Networks for battery parameter estimation, we introduce a unique graph autoencoder time series estimation approach. Variables in battery measurements are known to have an underlying relationship with each other in a certain causal structure. Therefore, we include ideas from the field of causal structure learning as a regularisation to our learned adjacency matrix technique. We use graph autoencoder based on a non-linear version of NOTEARS Zheng et al. (2018) as this allowed us to perform gradient-descent in learning the structure (instead of treating it as a combinatorial optimisation problem). The proposed architecture outperforms the state-of-the-art Graph Time Series (GTS) Shang et al. (2021a) architecture for battery parameter estimation. We call our method GAETS (Graph AutoEncoder Time Series). | Reject | The paper proposes to apply graph neural networks to predict battery state of charge. The main concern is the lack of technical novelty, since the main work is a straightforward application of existing works. The work could be better suited for a more application-oriented venue. | train | [
"m9jxEqw3eav",
"sB0VAyc4mJ9",
"UGxRaNJfv6b",
"P06KkeI1zql"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a graph neural network based forecasting algorithm for battery state of charge. Building on prior work, the paper proposes a regularization based on reconstruction loss. Evaluation on a battery dataset shows improvements. The graph auto encoder used in the paper forecasts the state of charge usi... | [
3,
3,
3,
5
] | [
4,
4,
3,
3
] | [
"iclr_2022_2z5h4hY-LQ",
"iclr_2022_2z5h4hY-LQ",
"iclr_2022_2z5h4hY-LQ",
"iclr_2022_2z5h4hY-LQ"
] |
iclr_2022_-3yxxvDis3L | How to Improve Sample Complexity of SGD over Highly Dependent Data? | Conventional machine learning applications typically assume that data samples are independently and identically distributed (i.i.d.). However, many practical scenarios naturally involve a data-generating process that produces highly dependent data samples, which are known to heavily bias the stochastic optimization process and slow down the convergence of learning. In this paper, we conduct a fundamental study on how to facilitate the convergence of SGD over highly dependent data using different popular update schemes. Specifically, with a $\phi$-mixing model that captures both exponential and polynomial decay of the data dependence over time, we show that SGD with periodic data-subsampling achieves an improved sample complexity over the standard SGD in the full spectrum of the $\phi$-mixing data dependence. Moreover, we show that by fully utilizing the data, mini-batch SGD can further substantially improve the sample complexity with highly dependent data. Numerical experiments validate our theory. | Reject | Dear Authors,
This paper eventually received mostly negative reviews (scores 5, 3, 5), with one mildly positive review (score 6). All reviews were particularly informative, offering detailed and expert feedback. I was hoping for author engagement, but unfortunately, no rebuttal was submitted.
In general, the reviewers and me found the paper well written, on a timely topic, but of a very limited theoretical novelty. Well-articulated details of this can be found in the reviews and I would recommend the authors to consider them carefully in their revision. I have no option but to reject this work.
The main reason for rejection in this case is therefore limited theoretical novelty. However, this is a solid paper that is of publishable quality, albeit perhaps in a somehow lesser venue, at least in its current form.
Kind regards,
Area Chair | train | [
"iG97scjLo99",
"KaFF2hIJgoU",
"LSDuAj4W3oX",
"sza0p_EktTG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"In stochastic convex optimization (SCO) characterized by the following objective for convex $f$, $$\\min_w f(w) := \\mathbb{E}_{\\xi\\sim \\mu}[F(w; \\xi)],$$ we typically assume repeated access to the noise distribution $\\mu$, for sampling i.i.d. random variables $(\\xi_t)_t$. This access can be used to obtain u... | [
5,
6,
3,
5
] | [
4,
4,
5,
3
] | [
"iclr_2022_-3yxxvDis3L",
"iclr_2022_-3yxxvDis3L",
"iclr_2022_-3yxxvDis3L",
"iclr_2022_-3yxxvDis3L"
] |
iclr_2022_rOGm97YR22N | Mixed-Memory RNNs for Learning Long-term Dependencies in Irregularly Sampled Time Series | Recurrent neural networks (RNNs) with continuous-time hidden states are a natural fit for modeling irregularly sampled time series. These models, however, face difficulties when the input data possess long-term dependencies. We prove that similar to standard RNNs, the underlying reason for this issue is the vanishing or exploding of the gradient during training. This phenomenon is expressed by the ordinary differential equation (ODE) representation of the hidden state, regardless of the ODE solver's choice. We provide a solution by equipping arbitrary continuous-time networks with a memory compartment separated from its time-continuous state. This way, we encode a continuous-time dynamical flow within the RNN, allowing it to respond to inputs arriving at arbitrary time-lags while ensuring a constant error propagation through the memory path. We call these models Mixed-Memory-RNNs (mmRNNs). We experimentally show that Mixed-Memory-RNNs outperform recently proposed RNN-based counterparts on non-uniformly sampled data with long-term dependencies. | Reject | After carefully reading the reviews and the rebuttal I feel the paper fails slightly short.
Unfortunately some of the issues that I have are aligned with the feedback from reviewer 6YwU and pULY.
A significant part of the paper is the formalism and theory introduced by this work, followed then by the empirical evaluation. The theory I feel is not sufficiently well formulated. I understand this is a complex topic, and one can only make minimal statements about a system (particularly when learning is involved). And I understand that the authors are looking at a slightly different phenomena, and not the traditional vanishing/exploding gradient problem, where they consider a per-unit scenario. And I believe one can make a case that this alternative definition holds value and should be investigated.
However, I believe being more explicit of this alternate view, and make sure that one does not go into the theory with the wrong preconception of what these results are about is important. And secondly making sure the claims are adequate is important and not overly strong (or over claiming). I think this is important particularly in such works, dealing with systems that do not allow a full mathematical analysis. In particular, just to give some examples:
1. Thm 3, pointed out by the reviewers as well. I don't understand the point of this thm. It basically says that around initialization things are well behaved. The same can be said or proven for many other methods. You argue that this is different, as in other models beside initialization forgetting is not controlled, while you could potentially control it by a forgetting gate. However this is not a theoretical, precise argument. The forgetting gate is learned as well. If we go back to the LSTM scenario, LSTM suffer for vanishing gradients. Also Gers et al. paper does not prove that trying to preserve error has to harm learning (it provides some empirical evidence that is the case, but there have been many other things that affected this results). The point here is not that forget gates are not useful, nor that the gating mechanism proposed by LSTM are not extremely useful. They are. Is that the Thm 3 can not prove or show that using mmRNN is a better way of mitigating (and trading of) vanishing gradient than another model. You do that through your empirical evidence, and I think that is how most of ML works. But is not clear what the point of the theorem is.
2. I do not understand how one reasons theoretically about epsilon in Def 1. I don't see how an empirical observation by Gers et al resolves this. It justifies maybe why vanishing gradients are not always problematic, but that should not affect the definition of what vanishing now means. In the current form, if T goes to infinite, even if technically the network does not suffer from vanishing gradients, the gradients go to 0. Or at least T and epsilon should somehow be tied together to make the definition work.
The issue of defining the vanishing / exploding gradient per unit is also that now is not clear what is problematic or not. Probably having exploding gradient for any given unit is bad, as it might affect the overall gradient. But having a few units suffering from vanishing gradient, is that problematic? This things need to be quantified better.
I think overall to me the problem is that some of this mathematical statements do not seem to be strong enough or contextualized enough to be properly understood by the reader. I would have understood the formalism if it was trying to correct some misconception in the community, case in which it is important to formalize just to be precise. But I don't think this is what is happening here. As in stands it just feels sloppy.
And I think this retracts considerably from the empirical side of the work and reduces the space you had to give it enough attention. Which should have played main stage. I think the empirical work would have benefited from more analysis (showcasing some of the arguments you were making using the theory), which would have made for a much stronger and convincing paper. The current framing of the paper is unfortunately not the right one. | train | [
"0NZqV_t25E6",
"-LVYWrC4N20",
"SBYreBlzfVr",
"umOkuypCPlN",
"II-Wuiq_tUo",
"tK1QbU5qnSB",
"U0kHfauW_3v",
"FJCjZa0lkD4",
"iOrqccO8ip1",
"fyN_-yu8TYS",
"bnl4yWLWMKu",
"m6S-dAT5JV",
"YMpL87IBwT",
"FDAPgBfsI6P",
"QPRgOc-SaIk",
"RUtwupDYG-j",
"z2mvEXDtl5h"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the follow up, for the clarifications, and acknowledgements of the sensitive nature of review communications. For the record, I do think this paper should be published.",
" \nWe really appreciate your engagement in this matter as it will help communicate and highlight our opinion better. Our react... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
8,
3,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
4
] | [
"-LVYWrC4N20",
"SBYreBlzfVr",
"umOkuypCPlN",
"II-Wuiq_tUo",
"bnl4yWLWMKu",
"iOrqccO8ip1",
"RUtwupDYG-j",
"iclr_2022_rOGm97YR22N",
"YMpL87IBwT",
"z2mvEXDtl5h",
"QPRgOc-SaIk",
"FDAPgBfsI6P",
"FJCjZa0lkD4",
"iclr_2022_rOGm97YR22N",
"iclr_2022_rOGm97YR22N",
"iclr_2022_rOGm97YR22N",
"iclr... |
iclr_2022_fYor2QIp_3 | An Effective GCN-based Hierarchical Multi-label classification for Protein Function Prediction | We propose an effective method to improve Protein Function Prediction (PFP) utilizing hierarchical features of Gene Ontology (GO) terms. Our method consists of a language model for encoding the protein sequence and a Graph Convolutional Network (GCN) for representing Go terms. To reflect the hierarchical structure of GO to GCN, we employ node(GO term)-wise representations containing the whole hierarchical information. Our algorithm shows effectiveness in a large-scale graph by expanding the GO graph compared to previous models. Experimental results show that our method outperformed state-of-the-art PFP approaches. | Reject | The paper proposes a method to predict protein functions from Gene Ontology (GO) and protein sequences. The protein sequences are embedded with a pretrained protein language model (SeqVec) and the GO network is modelled with a graph convolutional neural network.
Reviewers found the paper well-written and structured. At the same time, they found the novelty of the paper limited. Two reviewers pointed out that the paper is very similar to DeepGOA, which the authors cite but don't compare against. Overall, there is consensus among the reviewers that the paper is not suitable for ICLR.
The authors didn't submit a rebuttal.
We encourage the authors to take into account reviewer comments to improve the paper. Since it is more on the application side, perhaps a computational biology conference / workshop would be more appropriate for this paper. | train | [
"MybJqyOqE2I",
"3NEqaf_fta5",
"TDra8kjtY0r",
"KTHPxdEjQqU"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents a model to predict Gene Ontology (GO) term annotations for protein function. The model uses an existing method, SeqVec [2], to encode the protein sequence and a GCN on the Gene Ontology (GO) DAGs to encode the structure of term relationships. Like in DeepGOA[1], the graph is weighted by functio... | [
1,
3,
3,
3
] | [
5,
4,
5,
5
] | [
"iclr_2022_fYor2QIp_3",
"iclr_2022_fYor2QIp_3",
"iclr_2022_fYor2QIp_3",
"iclr_2022_fYor2QIp_3"
] |
iclr_2022_rbFPSQHlllm | AutoMO-Mixer: An automated multi-objective multi-layer perspecton Mixer model for medical image based diagnosis | Medical image based diagnosis is one of the most challenging things which is vital to human life. Accurately identifying the patient's status through medical images plays an important role in treatment of diseases. Deep learning has achieved great success in medical image analysis. Particularly, Convolutional neural network CNN) can obtain promising performance by learning the features in a supervised way. However, since there are too many parameters to train, CNN always requires a large scale dataset to feed, while it is very difficult to collect the required amount of patient images for a particular clinical problem. Recently, MLP-Mixer (Mixer) which is developed based multiple layer perceptron (MLP) was proposed, in which the number of training parameters is greatly decreased by removing convolutions in the architecture, while it can achieve the similar performance with CNN. Furthermore, obtaining the balanced outcome between sensitivity and specificity is of great importance in patient's status identification. As such, a new automated multi-objective Mixer (AutoMO-Mixer) model was developed in this study. In AutoMO-Mixer, sensitivity and specificity were considered as the objective functions simultaneously to train the model and a Pareto-optimal Mixer model set can be obtained in the training stage. Additionally, since there are several hyperparameters to train, the Bayesian optimization was introduced. To obtain a more reliable results in testing stage, the final output was obtained by fusing the output probabilities of Pareto optimal models through the evidence reasoning (ER) approach. The experimental study demonstrated that AutoMO-Mixer can obtain better performance compared with Mixer and CNN. | Reject | The reviewers recommended a rejection. The authors of the paper did not respond. | train | [
"oRcgyv5oA-",
"SkhqZxtLAT",
"r7TJ7Iq1mHp",
"HifeiarkpTu"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This study is a simple combination of multiple-objective optimization and MLP Mixer on medical image application, with poor experiments to support any conclusion. Everything looks a direct application of existing technique. I don't understand why this study is submitted to a top conference. # Strengths\nI cannot f... | [
1,
3,
1,
3
] | [
4,
4,
5,
4
] | [
"iclr_2022_rbFPSQHlllm",
"iclr_2022_rbFPSQHlllm",
"iclr_2022_rbFPSQHlllm",
"iclr_2022_rbFPSQHlllm"
] |