| paper_id | paper_title | paper_abstract | paper_acceptance | meta_review | label | review_ids | review_writers | review_contents | review_ratings | review_confidences | review_reply_tos |
|---|---|---|---|---|---|---|---|---|---|---|---|
iclr_2020_H1l0O6EYDH | A NEW POINTWISE CONVOLUTION IN DEEP NEURAL NETWORKS THROUGH EXTREMELY FAST AND NON PARAMETRIC TRANSFORMS | Some conventional transforms such as Discrete Walsh-Hadamard Transform (DWHT) and Discrete Cosine Transform (DCT) have been widely used as feature extractors in image processing but rarely applied in neural networks. However, we found that these conventional transforms have the ability to capture the cross-channel correlations without any learnable parameters in DNNs. This paper firstly proposes to apply conventional transforms on pointwise convolution, showing that such transforms significantly reduce the computational complexity of neural networks without accuracy performance degradation. Especially for DWHT, it requires no floating point multiplications but only additions and subtractions, which can considerably reduce computation overheads. In addition, its fast algorithm further reduces the complexity of floating point addition from O(n^2) to O(n log n). These non-parametric and low computational properties construct extremely efficient networks in the number of parameters and operations, enjoying accuracy gain. Our proposed DWHT-based model gained a 1.49% accuracy increase with 79.4% reduced parameters and 48.4% reduced FLOPs compared with its baseline model (MobileNet-V1) on the CIFAR 100 dataset. | reject | This paper presents an approach that utilizes conventional frequency-domain bases such as DWHT and DCT to replace the standard point-wise convolution, which can significantly reduce the computational complexity. The paper is generally well-written and easy to follow. However, the technical novelty seems limited, as it is basically a simple combination of CNNs and traditional filters. Moreover, as reviewers suggested, it is the historical experience and current consensus of the community that learned representations significantly outperform traditional pre-defined features or filters as the training data expands.
I do understand the scientific value of revisiting and challenging that belief as commented by R1, but in order to provoke meaningful discussion, experiments on large-scale dataset like ImageNet are definitely necessary. For these reasons, I think the paper is not ready for publication at ICLR and would like to recommend rejection. | train | [
"r1gnXjEatr",
"SygVqFn9jS",
"Sylbzqh9jH",
"rke2o5nqjH",
"H1xn2Bpvjr",
"BJewUa1vsH",
"Bkx7z0H6KH",
"r1e_MoDtqH"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a new pointwise convolution layer, which is non-parametric and can be efficient thanks to the fast conventional transforms. Specifically, it could use either DCT or DWHT to do the transforming job and explores the optimal block structure to use this new kind of PC layer. Extensive experimental ... | [
3,
-1,
-1,
-1,
-1,
-1,
8,
3
] | [
3,
-1,
-1,
-1,
-1,
-1,
5,
4
] | [
"iclr_2020_H1l0O6EYDH",
"r1e_MoDtqH",
"SygVqFn9jS",
"iclr_2020_H1l0O6EYDH",
"r1gnXjEatr",
"Bkx7z0H6KH",
"iclr_2020_H1l0O6EYDH",
"iclr_2020_H1l0O6EYDH"
] |
iclr_2020_SJlyta4YPS | DeepEnFM: Deep neural networks with Encoder enhanced Factorization Machine | Click Through Rate (CTR) prediction is a critical task in industrial applications, especially for online social and commerce applications. It is challenging to find a proper way to automatically discover the effective cross features in CTR tasks. We propose a novel model for CTR tasks, called Deep neural networks with Encoder enhanced Factorization Machine (DeepEnFM). Instead of learning the cross features directly, DeepEnFM adopts the Transformer encoder as a backbone to align the feature embeddings with the clues of other fields. The embeddings generated from encoder are beneficial for the further feature interactions. Particularly, DeepEnFM utilizes a bilinear approach to generate different similarity functions with respect to different field pairs. Furthermore, the max-pooling method makes DeepEnFM feasible to capture both the supplementary and suppressing information among different attention heads. Our model is validated on the Criteo and Avazu datasets, and achieves state-of-art performance. | reject | The authors address the problem of CTR prediction by using a Transformer based encoder to capture interactions between features. They suggest simple modifications to the basic Multiple Head Self Attention (MSHA) mechanism and show that they get the best performance on two publicly available datasets.
While the reviewers agreed that this work is of practical importance, they had a few objections which I have summarised below:
1) Lack of novelty: The reviewers felt that the adoption of MSHA for the CTR task was straightforward. The suggested modifications in the form of Bilinear similarity and max-pooling were viewed as incremental contributions.
2) Lack of comparison with existing work: The reviewers suggested some additional baselines (Deep and Cross) which need to be added (the authors have responded that they will do so later).
3) Need to strengthen experiments: The reviewers appreciated the ablation studies done by the authors but requested more studies to convincingly demonstrate the effect of some components. One reviewer also pointed out that the authors should control for model complexity to ensure an apples-to-apples comparison (I agree that many papers in the past have not done this, but going forward I have a hunch that many reviewers will start asking for this).
IMO, the above comments are important and the authors should try to address them in subsequent submissions.
Based on the reviewer comments and the lack of any response from the authors, I recommend that the paper in its current form cannot be accepted. | train | [
"Hke6LHdoiS",
"r1x65Ediir",
"HJg8BN_jsH",
"ryxkuCyxFr",
"S1ltPsiCFB",
"SJeEerv-5B"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for the feedback and address the concerns in detail below\n1.\tRelevance to ICLR\nThanks, our paper aims to use the encoder to gain better field feature representation for CTR task, which is relevant to learning representations.\n2.\tThe meaning of “DNN learns at bit-wise level.”\nThe stateme... | [
-1,
-1,
-1,
1,
3,
1
] | [
-1,
-1,
-1,
1,
3,
3
] | [
"ryxkuCyxFr",
"S1ltPsiCFB",
"SJeEerv-5B",
"iclr_2020_SJlyta4YPS",
"iclr_2020_SJlyta4YPS",
"iclr_2020_SJlyta4YPS"
] |
iclr_2020_rJeeKTNKDB | Hierarchical Graph-to-Graph Translation for Molecules | The problem of accelerating drug discovery relies heavily on automatic tools to optimize precursor molecules to afford them with better biochemical properties. Our work in this paper substantially extends prior state-of-the-art on graph-to-graph translation methods for molecular optimization. In particular, we realize coherent multi-resolution representations by interweaving the encoding of substructure components with the atom-level encoding of the original molecular graph. Moreover, our graph decoder is fully autoregressive, and interleaves each step of adding a new substructure with the process of resolving its attachment to the emerging molecule. We evaluate our model on multiple molecular optimization tasks and show that our model significantly outperforms previous state-of-the-art baselines. | reject | Two reviewers are negative on this paper while the other reviewer is slightly positive. Overall, the paper does not make the bar of ICLR. A reject is recommended. | train | [
"SJx0vc3Bsr",
"rygYKu3HjS",
"SyxQENG8oB",
"Bkep99YXqr",
"B1lPC22v5H",
"r1l4ndZq5r"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your insightful comments. We would like to first clarify the difference between our method and previous junction tree approach:\n\nJunction tree method of [Jin et al. 2019]:\n - Two independently operating encoders, one for the junction tree, the other for the original graph\n - Decoding is a strictl... | [
-1,
-1,
-1,
3,
3,
6
] | [
-1,
-1,
-1,
3,
3,
1
] | [
"Bkep99YXqr",
"B1lPC22v5H",
"r1l4ndZq5r",
"iclr_2020_rJeeKTNKDB",
"iclr_2020_rJeeKTNKDB",
"iclr_2020_rJeeKTNKDB"
] |
iclr_2020_HkgbKaEtvB | End-To-End Input Selection for Deep Neural Networks | Data have often to be moved between servers and clients during the inference phase. This is the case, for instance, when large amounts of data are stored on a public storage server without the possibility for the users to directly execute code and, hence, apply machine learning models. Depending on the available bandwidth, this data transfer can become a major bottleneck. We propose a simple yet effective framework that allows to select certain parts of the input data needed for the subsequent application of a given neural network. Both the associated selection masks as well as the neural network are trained simultaneously such that a good model performance is achieved while, at the same time, only a minimal amount of data is selected. During the inference phase, only the parts selected by the masks have to be transferred between the server and the client. Our experiments indicate that it is often possible to significantly reduce the amount of data needed to be transferred without affecting the model performance much. | reject | This paper proposes to address the high bandwidth cost of transferring data between server and user for machine learning applications. The input data is augmented with channel and spatial masks so that the file transfer cost is reduced. While the reviewers agree that this is a well motivated and interesting problem to study, a number of concerns are raised, including a loosely specified performance/size trade-off, how this work compares to related work, and low novelty relative to a few key missing references. The authors responded to the reviewers’ concerns, but the reviewers did not change their ratings. The ACs concur with the concerns, and the paper cannot be accepted in its current state.
"ByeiFD69iB",
"Hkxcppxqir",
"Byg88pl5ir",
"BklJZax5jS",
"H1lFl73pKS",
"SJgDHU86FS",
"rJgm2PsCFB"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"As mentioned, we also did the pixel-wise selection (any-mask) and stopped at 50 pixel (loss Q of 0.0638). We now have the results for Fashion-MNIST and without any fixation afterwards, we achieved 85.7% accuracy compared to 67.7% in [1].\n\nWe would like to stress that this task only covers a subset of our framewo... | [
-1,
-1,
-1,
-1,
3,
3,
3
] | [
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"BklJZax5jS",
"SJgDHU86FS",
"H1lFl73pKS",
"rJgm2PsCFB",
"iclr_2020_HkgbKaEtvB",
"iclr_2020_HkgbKaEtvB",
"iclr_2020_HkgbKaEtvB"
] |
iclr_2020_rylZKTNYPr | Inferring Dynamical Systems with Long-Range Dependencies through Line Attractor Regularization | Vanilla RNN with ReLU activation have a simple structure that is amenable to systematic dynamical systems analysis and interpretation, but they suffer from the exploding vs. vanishing gradients problem. Recent attempts to retain this simplicity while alleviating the gradient problem are based on proper initialization schemes or orthogonality/unitary constraints on the RNN’s recurrency matrix, which, however, comes with limitations to its expressive power with regards to dynamical systems phenomena like chaos or multi-stability. Here, we instead suggest a regularization scheme that pushes part of the RNN’s latent subspace toward a line attractor configuration that enables long short-term memory and arbitrarily slow time scales. We show that our approach excels on a number of benchmarks like the sequential MNIST or multiplication problems, and enables reconstruction of dynamical systems which harbor widely different time scales. | reject | The paper proposes an interesting idea: keep the very simple form of a piecewise-linear RNN, but separate the units into two types, one of which acts as memory. The "memory" units are penalized towards the line attractor parameters, i.e. making the diagonal elements of $A$ close to $1$ and the off-diagonal elements of $W$ close to $0$.
The benchmarks are presented that confirm the efficiency of the model.
The reviewers' opinions were mixed: one "1", one "3", and one "6". Reviewer 1 is far too negative and some of his claims are not very constructive, while the "positive" review is very short. Finally, the last reviewer raised a question about the actual quality of the results, which is not addressed. Although there is a motivation for such partial regularization, the main practical question is how many "memory neurons" are needed. I looked through the paper: this is addressed only in the supplementary, where the value of $M_{reg}$ is mentioned (=0.5 M). For $M_{reg} = M$ it is the L2 penalty; what happens if the fraction is 0.1, 0.2, ... and more? This is a very crucial hyperparameter (and of course, smart selection of it cannot be worse than L2RNN). This study is lacking. In my opinion, one could also introduce weights and sparsity constraints on them (in order to detect the number of "memory" neurons more-or-less automatically). Although I feel this paper has potential, it is still not ready for publication and could be significantly improved. | train | [
"ByeGhKV2sr",
"r1eVVm9esr",
"S1xOXePlir",
"ByxTdjHeiH",
"SklBfmHutH",
"rJxOVbICFS",
"ByxJFNRRYB"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Referee #3: \n- We added a short discussion on EM vs. sequential VAE & SGVB to our Conclusions.\n\nReferee #2:\n- We added a sentence to sect. 4.1 on the relative performance of the rPLRNN vs. initialization-based approaches, and how we think the latter would further degrade in performance with increasing noise an... | [
-1,
-1,
-1,
-1,
3,
6,
1
] | [
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"iclr_2020_rylZKTNYPr",
"SklBfmHutH",
"rJxOVbICFS",
"ByxJFNRRYB",
"iclr_2020_rylZKTNYPr",
"iclr_2020_rylZKTNYPr",
"iclr_2020_rylZKTNYPr"
] |
iclr_2020_SklfY6EFDH | Representation Quality Explain Adversarial Attacks | Neural networks have been shown vulnerable to adversarial samples. Slightly perturbed input images are able to change the classification of accurate models, showing that the representation learned is not as good as previously thought. To aid the development of better neural networks, it would be important to evaluate to what extent are current neural networks' representations capturing the existing features. Here we propose a way to evaluate the representation quality of neural networks using a novel type of zero-shot test, entitled Raw Zero-Shot. The main idea lies in the fact that some features are present on unknown classes and that unknown classes can be defined as a combination of previous learned features without representation bias (a bias towards representation that maps only current set of input-outputs and their boundary). To evaluate the soft-labels of unknown classes, two metrics are proposed. One is based on clustering validation techniques (Davies-Bouldin Index) and the other is based on soft-label distance of a given correct soft-label.
Experiments show that such metrics are in accordance with robustness to adversarial attacks and might serve as guidance to build better models as well as be used in loss functions to create new types of neural networks. Interestingly, the results suggest that dynamic routing networks such as CapsNet have better representations, while current deeper DNNs are trading off representation quality for accuracy. | reject | The reviewers found the aim of the paper interesting (to connect representation quality with adversarial examples). However, the reviewers consistently pointed out writing issues, such as inaccurate or unsubstantiated claims, which are not appropriate for a scientific venue. The reviewers also found the experiments, which are on simple datasets, unconvincing. | train | [
"SJlTgbH3iB",
"rJg1JIrniS",
"B1lEZMrhjr",
"H1lyxtDatB",
"rkgtA6nRFB",
"SkgpIc7wcB"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your review and comments which helped improve the paper further.\n\n>1. The paper only evaluate robust accuracy on models without robust training.\n\nAdversarial training is specific for robustness against particular attacks and not attacks in general. For example, in Vargas and Kotyan (2019) the aut... | [
-1,
-1,
-1,
1,
3,
1
] | [
-1,
-1,
-1,
5,
4,
4
] | [
"SkgpIc7wcB",
"H1lyxtDatB",
"rkgtA6nRFB",
"iclr_2020_SklfY6EFDH",
"iclr_2020_SklfY6EFDH",
"iclr_2020_SklfY6EFDH"
] |
iclr_2020_rkxEKp4Fwr | Training Data Distribution Search with Ensemble Active Learning | Deep Neural Networks (DNNs) often rely on very large datasets for training. Given the large size of such datasets, it is conceivable that they contain certain samples that either do not contribute or negatively impact the DNN's optimization. Modifying the training distribution in a way that excludes such samples could provide an effective solution to both improve performance and reduce training time. In this paper, we propose to scale up ensemble Active Learning methods to perform acquisition at a large scale (10k to 500k samples at a time). We do this with ensembles of hundreds of models, obtained at a minimal computational cost by reusing intermediate training checkpoints. This allows us to automatically and efficiently perform a training data distribution search for large labeled datasets. We observe that our approach obtains favorable subsets of training data, which can be used to train more accurate DNNs than training with the entire dataset. We perform an extensive experimental study of this phenomenon on three image classification benchmarks (CIFAR-10, CIFAR-100 and ImageNet), analyzing the impact of initialization schemes, acquisition functions and ensemble configurations. We demonstrate that data subsets identified with a lightweight ResNet-18 ensemble remain effective when used to train deep models like ResNet-101 and DenseNet-121. Our results provide strong empirical evidence that optimizing the training data distribution can provide significant benefits on large scale vision tasks. | reject | This paper proposes an ensemble-based active learning approach to select a subset of training data that yields the same or better performance. The proposed method is rather heuristic and lacks novel technical contribution that we expect for top ML conferences. No theoretical justification is provided to argue why the proposed method works. 
Additional studies are needed to convincingly demonstrate the benefit of the proposed method in terms of computational cost. | train | [
"SylO7ym2oS",
"BkxE_RGhjB",
"Hkg9taf2jr",
"HygyKZzzsB",
"Skl6MjRhKS",
"B1l1WVc0FS"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We appreciate the thoughtful and helpful feedback provided by the reviewer. We address the clarifications requested in the review as follows:\n\n1. Missing detail on the build-up initialization scheme.\n\nPlease refer to Section 2.1 on Page 3 of the revised draft, where we have included additional details regardin... | [
-1,
-1,
-1,
6,
6,
1
] | [
-1,
-1,
-1,
1,
1,
3
] | [
"Skl6MjRhKS",
"B1l1WVc0FS",
"HygyKZzzsB",
"iclr_2020_rkxEKp4Fwr",
"iclr_2020_rkxEKp4Fwr",
"iclr_2020_rkxEKp4Fwr"
] |
iclr_2020_HygHtpVtPH | Laplacian Denoising Autoencoder | While deep neural networks have been shown to perform remarkably well in many machine learning tasks, labeling a large amount of supervised data is usually very costly to scale. Therefore, learning robust representations with unlabeled data is critical in relieving human effort and vital for many downstream applications. Recent advances in unsupervised and self-supervised learning approaches for visual data benefit greatly from domain knowledge. Here we are interested in a more generic unsupervised learning framework that can be easily generalized to other domains. In this paper, we propose to learn data representations with a novel type of denoising autoencoder, where the input noisy data is generated by corrupting the clean data in gradient domain. This can be naturally generalized to span multiple scales with a Laplacian pyramid representation of the input data. In this way, the agent has to learn more robust representations that can exploit the underlying data structures across multiple scales. Experiments on several visual benchmarks demonstrate that better representations can be learned with the proposed approach, compared to its counterpart with single-scale corruption. Besides, we also demonstrate that the learned representations perform well when transferring to other vision tasks. | reject | The main idea proposed by the work is interesting. The reviewers had several concerns about applicability and the extent of the empirical work. The authors responded to all the comments, added more experiments, and as reviewer 2 noted, the method is interesting because of its ability to handle local noise. Despite the author's helpful responses, the ratings were not increased, and it is still hard to assess the exact extent of how the proposed approach improves over state of the art. Because some concerns remained, and due to a large number of stronger papers, this paper was not accepted at this time. | train | [
"rkxhzEjk5B",
"ByglcoGsiB",
"SketDYJioB",
"BJxlkFJjiS",
"Skx-N_kooB",
"S1g7TmTQFr",
"HJgRL-32KH"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a denoising auto-encoder where the input image is corrupted by adding noises to its Laplacian pyramid representation. Then a DAE is trained to predict the original data and learn a good representation of the data. By corrupting the Laplacian representation, which is multi-scale, the corruption ... | [
6,
-1,
-1,
-1,
-1,
6,
3
] | [
4,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2020_HygHtpVtPH",
"SketDYJioB",
"S1g7TmTQFr",
"HJgRL-32KH",
"rkxhzEjk5B",
"iclr_2020_HygHtpVtPH",
"iclr_2020_HygHtpVtPH"
] |
iclr_2020_BklHF6VtPB | Modeling Winner-Take-All Competition in Sparse Binary Projections | Inspired by the advances in biological science, the study of sparse binary projection models has attracted considerable recent research attention. The models project dense input samples into a higher-dimensional space and output sparse binary data representations after Winner-Take-All competition, subject to the constraint that the projection matrix is also sparse and binary. Following the work along this line, we developed a supervised-WTA model when training samples with both input and output representations are available, from which the optimal projection matrix can be obtained with a simple, efficient yet effective algorithm. We further extended the model and the algorithm to an unsupervised setting where only the input representation of the samples is available. In a series of empirical evaluation on similarity search tasks, the proposed models reported significantly improved results over the state-of-the-art methods in both search accuracy and running time. The successful results give us strong confidence that the work provides a highly practical tool to real world applications.
 | reject | This paper proposes a WTA model for binary projection. While there are notable partial contributions, there is disagreement among the reviewers. I am most persuaded by the concern that the experiments are not done on datasets large enough to be state-of-the-art compared with other random projection investigations. | train | [
"H1eiunniiB",
"HygyzTm3oB",
"SkgiKKhosr",
"BJgOt0Gnor",
"BklLKjNaYS",
"r1lgqMYTKB",
"HkxKdAolqH"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"\n1. About optimal outputs: The supervised learning studied in the paper is different from a typical supervised setting (as commented in Review #3). Instead of real samples, the outputs are from a special projection obtained through matrix factorization.\nThanks for the suggestion. Real training samples also exist... | [
-1,
-1,
-1,
-1,
3,
8,
6
] | [
-1,
-1,
-1,
-1,
5,
3,
3
] | [
"BklLKjNaYS",
"HkxKdAolqH",
"iclr_2020_BklHF6VtPB",
"r1lgqMYTKB",
"iclr_2020_BklHF6VtPB",
"iclr_2020_BklHF6VtPB",
"iclr_2020_BklHF6VtPB"
] |
iclr_2020_ByeSYa4KPS | Sparse Networks from Scratch: Faster Training without Losing Performance | We demonstrate the possibility of what we call sparse learning: accelerated training of deep neural networks that maintain sparse weights throughout training while achieving dense performance levels. We accomplish this by developing sparse momentum, an algorithm which uses exponentially smoothed gradients (momentum) to identify layers and weights which reduce the error efficiently. Sparse momentum redistributes pruned weights across layers according to the mean momentum magnitude of each layer. Within a layer, sparse momentum grows weights according to the momentum magnitude of zero-valued weights. We demonstrate state-of-the-art sparse performance on MNIST, CIFAR-10, and ImageNet, decreasing the mean error by a relative 8%, 15%, and 6% compared to other sparse algorithms. Furthermore, we show that sparse momentum reliably reproduces dense performance levels while providing up to 5.61x faster training. In our analysis, ablations show that the benefits of momentum redistribution and growth increase with the depth and size of the network. | reject | This paper presents a method for training sparse neural networks that also provides a speedup during training, in contrast to methods for training sparse networks which train dense networks (at normal speed) and then prune weights.
The method provides modest theoretical speedups during training, never measured in wallclock time. The authors improved their paper considerably in response to the reviews. I would be inclined to accept this paper despite not being a big win empirically, however a couple points of sloppiness pointed out (and maintained post-rebuttal) by R1 tip the balance to reject, in my opinion. Specifically:
1) "I do not agree that keeping the learning rate fixed across methods is the right approach." This seems like a major problem with the experiments to me.
2) "I would request the authors to slightly rewrite certain parts of their paper so as not to imply that momentum decreases the variance of the gradients in general." I agree. | train | [
"Bkg1FkmQqS",
"SJgPTmQ5FS",
"B1gmEirhoS",
"SklRhxNfjS",
"HyxHt63-iB",
"BJggmkWZiS",
"BJlHUQCgsS",
"BJg0hXBJiB",
"BJxWWfORYS"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"In this paper, the authors propose a sparse momentum algorithm for doing efficient sparse training. The technique relies on identifying weights in a layer that do not have an effect on the error, pruning them, and redistributing and growing them across layers. The technique is compared against other recent algorit... | [
6,
3,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
3,
5,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2020_ByeSYa4KPS",
"iclr_2020_ByeSYa4KPS",
"SJgPTmQ5FS",
"Bkg1FkmQqS",
"Bkg1FkmQqS",
"Bkg1FkmQqS",
"BJxWWfORYS",
"SJgPTmQ5FS",
"iclr_2020_ByeSYa4KPS"
] |
iclr_2020_SJeItTEKvr | MULTI-LABEL METRIC LEARNING WITH BIDIRECTIONAL REPRESENTATION DEEP NEURAL NETWORKS | Multi-Label Learning task simultaneously predicting multiple labels has attracted researchers' interest for its wide application.
Metric Learning crucially determines the performance of the k nearest neighbor algorithms, the most popular framework handling the multi-label problem.
However, existing advanced multi-label metric learning methods suffer from inferior capacity and application restrictions.
We propose an extendable and end-to-end deep representation approach for metric learning on multi-label data set that is based on neural networks able to operate on feature data or directly on raw image data.
We motivate the choice of our network architecture via a Bidirectional Representation learning where the label dependency is also integrated and deep convolutional networks that handle image data.
In multi-label metric learning, instances with more different labels are dragged farther apart, while ones with identical labels concentrate together.
Our model scales linearly in the number of instances and trains deep neural networks that encode both input data and output labels, then, obtains a metric space for testing data.
In a number of experiments on multi-labels tasks, we demonstrate that our approach is better than related methods based on the systematic metric and its extendability.
 | reject | All reviewers agreed that this submission is still too premature to be accepted to ICLR 2020.
We hope the review comments are useful for improving your paper for potential future submission. | val | [
"ByxiegNpYH",
"rJxYss7z5S",
"rylMTp8t5B"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper addresses the problem of multi-label prediction. It proposes a method that uses a co-embedding of instances and labels into a joint embedding space in a way that related instances and labels fall close by and unrelated ones fall far away. For this purpose, embeddings from input space and label space to... | [
1,
3,
1
] | [
5,
4,
4
] | [
"iclr_2020_SJeItTEKvr",
"iclr_2020_SJeItTEKvr",
"iclr_2020_SJeItTEKvr"
] |
iclr_2020_BJedt6VKPS | Scaling Laws for the Principled Design, Initialization, and Preconditioning of ReLU Networks | Abstract In this work, we describe a set of rules for the design and initialization of well-conditioned neural networks, guided by the goal of naturally balancing the diagonal blocks of the Hessian at the start of training. We show how our measure of conditioning of a block relates to another natural measure of conditioning, the ratio of weight gradients to the weights. We prove that for a ReLU-based deep multilayer perceptron, a simple initialization scheme using the geometric mean of the fan-in and fan-out satisfies our scaling rule. For more sophisticated architectures, we show how our scaling principle can be used to guide design choices to produce well-conditioned neural networks, reducing guess-work. | reject | This paper proposes a new design space for initialization of neural networks motivated by balancing the singular values of the Hessian. Reviewers found the problem well motivated and agreed that the proposed method has merit, however more rigorous experiments are required to demonstrate that the ideas in this work are significant progress over current known techniques. As noted by Reviewer 2, there has been substantial prior work on initialization and conditioning that needs to be discussed as they relate to the proposed method. The AC notes two additional, closely related initialization schemes that should be discussed [1,2]. Comparing with stronger baselines on more recent modern architectures would improve this work significantly.
[1]: https://nips.cc/Conferences/2019/Schedule?showEvent=14216
[2]: https://arxiv.org/abs/1901.09321 | train | [
"r1lkMDOaKB",
"B1xH_I6MoS",
"r1xbGZBkir",
"rJgWYpNJor",
"rJepzjNJoB",
"B1e3e6yTKH",
"Hkl5cQtQ9r"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose a new initialization scheme for training neural networks. The initialization considers fan-in and fan-out, to regularize the range of singular values of the Hessian matrix, under several assumptions.\n\nThe proposed approach gives important insights for the problem of weight initialization in n... | [
3,
-1,
-1,
-1,
-1,
1,
3
] | [
3,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2020_BJedt6VKPS",
"rJepzjNJoB",
"B1e3e6yTKH",
"r1lkMDOaKB",
"Hkl5cQtQ9r",
"iclr_2020_BJedt6VKPS",
"iclr_2020_BJedt6VKPS"
] |
iclr_2020_B1xtFpVtvB | Improving the Generalization of Visual Navigation Policies using Invariance Regularization | Training agents to operate in one environment often yields overfitted models that are unable to generalize to the changes in that environment. However, due to the numerous variations that can occur in the real-world, the agent is often required to be robust in order to be useful. This has not been the case for agents trained with reinforcement learning (RL) algorithms. In this paper, we investigate the overfitting of RL agents to the training environments in visual navigation tasks. Our experiments show that deep RL agents can overfit even when trained on multiple environments simultaneously.
We propose a regularization method which combines RL with supervised learning methods by adding a term to the RL objective that would encourage the invariance of a policy to variations in the observations that ought not to affect the action taken. The results of this method, called invariance regularization, show an improvement in the generalization of policies to environments not seen during training.
| reject | All the reviewers recommend rejecting the submission. There is no basis for acceptance. | train | [
"HJcaN-MKB",
"HJxycM2hiH",
"r1xuA13cjB",
"BJxBY3o5sH",
"BklkqyTxFS",
"HkgGADihKS"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Summary:\nThe goal of the paper is to improve generalization of RL agents to a set of known transformations of the observation. \nThe authors propose to explicitly include a term into the PPO loss function that incentivizes invariance to transformations of the environment which should not change the policy, in the... | [
3,
-1,
-1,
-1,
3,
3
] | [
4,
-1,
-1,
-1,
3,
4
] | [
"iclr_2020_B1xtFpVtvB",
"HkgGADihKS",
"BklkqyTxFS",
"HJcaN-MKB",
"iclr_2020_B1xtFpVtvB",
"iclr_2020_B1xtFpVtvB"
] |
iclr_2020_BJgctpEKwr | RPGAN: random paths as a latent space for GAN interpretability | In this paper, we introduce Random Path Generative Adversarial Network (RPGAN) --- an alternative scheme of GANs that can serve as a tool for generative model analysis. While the latent space of a typical GAN consists of input vectors, randomly sampled from the standard Gaussian distribution, the latent space of RPGAN consists of random paths in a generator network. As we show, this design allows to associate different layers of the generator with different regions of the latent space, providing their natural interpretability. With experiments on standard benchmarks, we demonstrate that RPGAN reveals several interesting insights about roles that different layers play in the image generation process. Aside from interpretability, the RPGAN model also provides competitive generation quality and allows efficient incremental learning on new data. | reject | The paper received mixed scores: Weak Reject (R1 and R2) and Accept (R3). AC has closely read the reviews/comments/rebuttal and examined the paper. After the rebuttal, R2's concerns still remain. AC sides with R2 and feels that the generated interpretations are not convincing, and that the conclusions drawn are not fully supported. Thus the paper just falls below the acceptance threshold, unfortunately. The work has merits however and the authors should revise their paper to incorporate the constructive feedback. | train | [
"r1grar0IsS",
"BJlWqB08or",
"rJlAZHRUoB",
"B1lOQlo7cB",
"HklH2Ycr9S",
"SyeceWbLcr"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your time and comments; we address the items from your weaknesses list below.\n\n1) [is not clear how interpretations can be associated with these paths.]\nTo avoid possible confusion: the interpretations are associated with buckets, not with paths. Varying active blocks in each bucket we get an unde... | [
-1,
-1,
-1,
3,
3,
8
] | [
-1,
-1,
-1,
1,
3,
5
] | [
"HklH2Ycr9S",
"B1lOQlo7cB",
"iclr_2020_BJgctpEKwr",
"iclr_2020_BJgctpEKwr",
"iclr_2020_BJgctpEKwr",
"iclr_2020_BJgctpEKwr"
] |
iclr_2020_rJxotpNYPS | DIVA: Domain Invariant Variational Autoencoder | We consider the problem of domain generalization, namely, how to learn representations given data from a set of domains that generalize to data from a previously unseen domain. We propose the Domain Invariant Variational Autoencoder (DIVA), a generative model that tackles this problem by learning three independent latent subspaces, one for the domain, one for the class, and one for any residual variations. We highlight that due to the generative nature of our model we can also incorporate unlabeled data from known or previously unseen domains. To the best of our knowledge this has not been done before in a domain generalization setting. This property is highly desirable in fields like medical imaging where labeled data is scarce. We experimentally evaluate our model on the rotated MNIST benchmark and a malaria cell images dataset where we show that (i) the learned subspaces are indeed complementary to each other, (ii) we improve upon recent works on this task and (iii) incorporating unlabelled data can boost the performance even further. | reject | This paper addresses the problem of domain generalization. The proposed solution, DIVA, introduces a domain invariant variational autoencoder. The latent space can be decomposed into three components: category specific, domain specific, and residual. The authors argue that each component is necessary to capture all relevant information while keeping the latent space interpretable.
This work received mixed scores. Two reviewers recommended weak reject while one reviewer recommended weak accept. There was extensive discussion between the reviewers and authors as well as amongst the reviewers. All reviewers agreed this is an important problem statement and that this work offers a compelling initial approach and experiments for domain generalization. There was disagreement as to whether the contributions as-is were sufficient for acceptance. Some reviewers were concerned over similarity to [ref1]; that work appeared close to the time of ICLR submission and is therefore considered concurrent. However, despite this, there was significant confusion over the proposed solution and whether it is uniquely useful for domain generalization or for other areas like adaptation or transfer learning, with reviewers arguing that experiments in these other settings would have helped showcase the benefits of the proposed approach. In addition, there was inconclusive evidence as to whether the two latent components were necessary.
Considering all discussions, reviews, and rebuttals the AC does not recommend this work for acceptance. The contribution and proposed solution needed substantial clarification and the experiments need additional analysis to explain under what conditions each latent component is needed either to improve performance or for interpretability.
| train | [
"Syeh4_jcjH",
"BJer72V5jH",
"B1gxPdkqiB",
"S1xs6_ntjB",
"SkxGkd2FjB",
"B1lyXD3YsH",
"BygeRUhFoB",
"B1g6yCkBFH",
"SygGzfHpFS",
"SkecDId0FB"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"\n- The generative part of DIVA is modeled after what we believe is the true ground truth generative model for domain generalization datasets like Rotated MNIST and Malaria Cell Images. We argue that especially the qualitative results in Section 4.1.1, 4.1.2 and Appendix 5.1.4 show that each latent space captures ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4
] | [
"BJer72V5jH",
"S1xs6_ntjB",
"B1lyXD3YsH",
"SygGzfHpFS",
"SkecDId0FB",
"B1g6yCkBFH",
"iclr_2020_rJxotpNYPS",
"iclr_2020_rJxotpNYPS",
"iclr_2020_rJxotpNYPS",
"iclr_2020_rJxotpNYPS"
] |
iclr_2020_rJejta4KDS | SELF-KNOWLEDGE DISTILLATION ADVERSARIAL ATTACK | Neural networks show great vulnerability under the threat of adversarial examples.
By adding small perturbation to a clean image, neural networks with high classification accuracy can be completely fooled.
One intriguing property of the adversarial examples is transferability. This property allows adversarial examples to transfer to networks of unknown structure, which is harmful even to the physical world.
The current way of generating adversarial examples is mainly divided into optimization based and gradient based methods.
Liu et al. (2017) conjecture that gradient based methods can hardly produce transferable targeted adversarial examples in black-box-attack.
However, in this paper, we use a simple technique to improve the transferability and success rate of targeted attacks with gradient based methods.
We prove that gradient based methods can also generate transferable adversarial examples in targeted attacks.
Specifically, we use knowledge distillation for gradient based methods, and show that the transferability can be improved by effectively utilizing different classes of information.
Unlike the usual applications of knowledge distillation, we did not train a student network to generate adversarial examples.
We take advantage of the fact that knowledge distillation can soften the target and obtain higher information, and combine the soft target and hard target of the same network as the loss function.
Our method is generally applicable to most gradient based attack methods. | reject | This paper proposes an attack method to improve the transferability of adversarial examples under black-box attack settings.
Despite the simplicity of the proposed idea, reviewers and AC commonly think that the paper is far from being ready to publish in various aspects: (a) the presentation/writing quality, (b) in-depth analysis and (c) experimental results.
Hence, I recommend rejection. | train | [
"BylnNnhiYH",
"SyetfFR9sr",
"HkgJnMn5ir",
"H1go9zh9jB",
"HklOuz29sH",
"B1lF9s8for",
"ByeNLz1sFH",
"Hkl_08-49S",
"BJxX6GVatH",
"HyeTl2P2tS",
"BkljDLE3YB",
"B1g53PZiKr",
"ryehryyiFr"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"public",
"author"
] | [
"This paper proposes distillation attacks to generate transferable targeted adversarial examples. The technique itself is pretty simple: instead of only using the raw logits L(x) to compute the cross entropy loss for optimization, they also use the distilled logits L(x)/T to generate adversarial examples. Their eva... | [
3,
-1,
-1,
-1,
-1,
-1,
1,
3,
-1,
-1,
-1,
-1,
-1
] | [
5,
-1,
-1,
-1,
-1,
-1,
5,
3,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2020_rJejta4KDS",
"B1lF9s8for",
"ByeNLz1sFH",
"BylnNnhiYH",
"Hkl_08-49S",
"iclr_2020_rJejta4KDS",
"iclr_2020_rJejta4KDS",
"iclr_2020_rJejta4KDS",
"HyeTl2P2tS",
"iclr_2020_rJejta4KDS",
"B1g53PZiKr",
"iclr_2020_rJejta4KDS",
"iclr_2020_rJejta4KDS"
] |
iclr_2020_BJl6t64tvr | Revisiting the Generalization of Adaptive Gradient Methods | A commonplace belief in the machine learning community is that using adaptive gradient methods hurts generalization. We re-examine this belief both theoretically and experimentally, in light of insights and trends from recent years.
We revisit some previous oft-cited experiments and theoretical accounts in more depth, and provide a new set of experiments in larger-scale, state-of-the-art settings. We conclude that with proper tuning, the improved training performance of adaptive optimizers does not in general carry an overfitting penalty, especially in contemporary deep learning. Finally, we synthesize a ``user's guide'' to adaptive optimizers, including some proposed modifications to AdaGrad to mitigate some of its empirical shortcomings. | reject | The paper combines several recent optimizer tricks to provide empirical evidence that goes against the common belief that adaptive methods result in larger generalization errors. The contribution of this paper is rather small: no new strategies are introduced and no new theory is presented. The paper makes a good workshop paper, but does not meet the bar for publication at ICLR.
| train | [
"H1lQstr2jr",
"ryx7adBniB",
"r1x-9dS3jS",
"HklEDFu3YB",
"BkxNHyW6YB",
"BylHymY19r"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for the thorough review and many good questions and suggestions.\n\n@Q1 (Theory): By “theoretically” we were referring to the discussion of theoretical examples, intended to continue the discussion from Section 3 of [Wilson et al. ‘17]. We will package the discussions of Section 5 into formal theorems.\n\n@... | [
-1,
-1,
-1,
3,
3,
6
] | [
-1,
-1,
-1,
5,
5,
1
] | [
"HklEDFu3YB",
"BkxNHyW6YB",
"BylHymY19r",
"iclr_2020_BJl6t64tvr",
"iclr_2020_BJl6t64tvr",
"iclr_2020_BJl6t64tvr"
] |
iclr_2020_BylTta4YvB | How Well Do WGANs Estimate the Wasserstein Metric? | Generative modelling is often cast as minimizing a similarity measure between a data distribution and a model distribution. Recently, a popular choice for the similarity measure has been the Wasserstein metric, which can be expressed in the Kantorovich duality formulation as the optimum difference of the expected values of a potential function under the real data distribution and the model hypothesis. In practice, the potential is approximated with a neural network and is called the discriminator. Duality constraints on the function class of the discriminator are enforced approximately, and the expectations are estimated from samples. This gives at least three sources of errors: the approximated discriminator and constraints, the estimation of the expectation value, and the optimization required to find the optimal potential. In this work, we study how well the methods, that are used in generative adversarial networks to approximate the Wasserstein metric, perform. We consider, in particular, the c-transform formulation, which eliminates the need to enforce the constraints explicitly. We demonstrate that the c-transform allows for a more accurate estimation of the true Wasserstein metric from samples, but surprisingly, does not | reject | There is insufficient support to recommend accepting this paper. Generally the reviewers found the technical contribution to be insufficient, and were not sufficiently convinced by the experimental evaluation. The feedback provided should help the authors improve their paper. | train | [
"rklP3uT8ir",
"SylRMdaIoH",
"ryg63wpIjH",
"SygEdKHnKS",
"H1llP9f6tB",
"H1gjYpiaFH"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for pointing out where we could improve. Below, we will address those comments, for which the explanations will also be added to the paper. Furthermore, we give arguments to why we view the contribution as a fit for ICLR.\n\nWe utilize POT’s ot.emd method to compute the ordinary optimal transport quantit... | [
-1,
-1,
-1,
1,
3,
3
] | [
-1,
-1,
-1,
3,
4,
1
] | [
"SygEdKHnKS",
"H1llP9f6tB",
"H1gjYpiaFH",
"iclr_2020_BylTta4YvB",
"iclr_2020_BylTta4YvB",
"iclr_2020_BylTta4YvB"
] |
iclr_2020_SJxRKT4Fwr | Cross-Dimensional Self-Attention for Multivariate, Geo-tagged Time Series Imputation | Many real-world applications involve multivariate, geo-tagged time series data: at each location, multiple sensors record corresponding measurements. For example, air quality monitoring system records PM2.5, CO, etc. The resulting time-series data often has missing values due to device outages or communication errors. In order to impute the missing values, state-of-the-art methods are built on Recurrent Neural Networks (RNN), which process each time stamp sequentially, prohibiting the direct modeling of the relationship between distant time stamps. Recently, the self-attention mechanism has been proposed for sequence modeling tasks such as machine translation, significantly outperforming RNN because the relationship between each two time stamps can be modeled explicitly. In this paper, we are the first to adapt the self-attention mechanism for multivariate, geo-tagged time series data. In order to jointly capture the self-attention across different dimensions (i.e. time, location and sensor measurements) while keep the size of attention maps reasonable, we propose a novel approach called Cross-Dimensional Self-Attention (CDSA) to process each dimension sequentially, yet in an order-independent manner. On three real-world datasets, including one our newly collected NYC-traffic dataset, extensive experiments demonstrate the superiority of our approach compared to state-of-the-art methods for both imputation and forecasting tasks.
| reject | The paper proposes a solution based on self-attention RNN to addressing the missing value in spatiotemporal data.
I myself read through the paper, followed by a discussion with the reviewers. We agree that the model is reasonable, and the results are promising. However, there is still some room for improvement:
1. The self-attention mechanism is not new. The specific way proposed in the paper is an interesting tweak of existing models, but not brand new per se. Most importantly, it is unclear if the proposed way is the optimal one and where the performance improvement comes from. As the reviewer suggested, more thorough empirical analysis should be performed for deeper insights of the model.
2. The datasets were adopted from existing work, but most of them do not have such complex models as the one proposed in the paper. Therefore, the suggestion for bigger datasets is valid.
Given the considerations above, we agree that while the paper has a lot of good materials, the current version is not ready yet. Addressing the issues above could lead to a good publication in the future. | test | [
"Hke9JJFioS",
"rJl1EgFijB",
"r1eiqZtsoB",
"B1g7bzFooS",
"ryeQH0Ooor",
"rkx9YbGcYH",
"r1loIGITtS",
"B1lAfWvh9S"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"\nWe appreciate R3 for pointing out relevant papers about self-attention generalization and we acknowledge that a lot of works have been done to generalize the self-attention on multi-dim data. In addition to our contribution on generalizing self-attention to multi-dim data for imputation and forecasting tasks, we ... | [
-1,
-1,
-1,
-1,
-1,
6,
1,
3
] | [
-1,
-1,
-1,
-1,
-1,
4,
1,
4
] | [
"r1loIGITtS",
"B1lAfWvh9S",
"B1lAfWvh9S",
"B1lAfWvh9S",
"rkx9YbGcYH",
"iclr_2020_SJxRKT4Fwr",
"iclr_2020_SJxRKT4Fwr",
"iclr_2020_SJxRKT4Fwr"
] |
iclr_2020_HklRKpEKDr | Deep Coordination Graphs | This paper introduces the deep coordination graph (DCG) for collaborative multi-agent reinforcement learning. DCG strikes a flexible trade-off between representational capacity and generalization by factorizing the joint value function of all agents according to a coordination graph into payoffs between pairs of agents. The value can be maximized by local message passing along the graph, which allows training of the value function end-to-end with Q-learning. Payoff functions are approximated with deep neural networks and parameter sharing improves generalization over the state-action space. We show that DCG can solve challenging predator-prey tasks that are vulnerable to the relative overgeneralization pathology and in which all other known value factorization approaches fail. | reject | This work extends previous work (Castellini et al) with parameter sharing and low-rank approximations, for pairwise communication between agents.
However the work as presented here is still considered too incremental, in particular when compared to Castellini et al.
The advances such as parameter sharing and low-rank approximation are good but not enough of a contribution. Authors' efforts to address this concern did not change reviewers' judgment.
Therefore, we recommend rejection. | test | [
"HkxPtjf2FS",
"r1xaRK0FiB",
"B1lAGtRKoB",
"Byxnj_RtsS",
"BJe9asNRYS",
"rkgKE6E0FS"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper introduces the deep coordination graph for collaborative multi-agent reinforcement learning aimed to solve predator-prey tasks by preventing relative overgeneralization during the exploration of agents. \n\nIn general, this paper gives a detailed and comprehensible depiction of the Introduction, Related... | [
6,
-1,
-1,
-1,
3,
3
] | [
3,
-1,
-1,
-1,
4,
3
] | [
"iclr_2020_HklRKpEKDr",
"HkxPtjf2FS",
"BJe9asNRYS",
"rkgKE6E0FS",
"iclr_2020_HklRKpEKDr",
"iclr_2020_HklRKpEKDr"
] |
iclr_2020_BklRFpVKPH | Demonstration Actor Critic | We study the problem of \textit{Reinforcement learning from demonstrations (RLfD)}, where the learner is provided with both some expert demonstrations and reinforcement signals from the environment. One approach leverages demonstration data in a supervised manner, which is simple and direct, but can only provide supervision signal over those states seen in the demonstrations. Another approach uses demonstration data for reward shaping. By contrast, the latter approach can provide guidance on how to take actions, even for those states are not seen in the demonstrations. But existing algorithms in the latter one adopt shaping reward which is not directly dependent on current policy, limiting the algorithms to treat demonstrated states the same as other states, failing to directly exploit supervision signal in demonstration data. In this paper, we propose a novel objective function with policy-dependent shaping reward, so as to get the best of both worlds. We present a convergence proof for policy iteration of the proposed objective, under the tabular setting. Then we develop a new practical algorithm, termed as Demonstration Actor Critic (DAC). Experiments on a range of popular benchmark sparse-reward tasks shows that our DAC method obtains a significant performance gain over five strong and off-the-shelf baselines. | reject | The paper proposes to combine RL and Imitation Learning. It defines a regularized reward function that minimizes the KL distance between the policy and the expert action. The formulation is similar to the KL regularized MDPs, but with the difference that an additional indicator function based on the support of the expert’s distribution is multiplied to the regularized term.
Several issues have been brought up by the reviewers, including:
* Comparison with pre-deep learning literature on the combination of RL and imitation learning
* Similarity to regularized MDP framework
* Assumption 1 requiring a stochastic expert policy, contradicting the policy invariance claim
* Difficulty of learning the indicator function of the support of the expert’s data distribution
Some of these issues have been addressed, but at the end of the day, one of the expert reviewers was not convinced that the problem of learning an indicator function is going to be easy at all. The reviewer believes that learning such a function requires "learning a harsh approximation of the density of visits of the expert for every state which is a quite hard task, especially in stochastic environments.”
Another issue is related to the policy invariance under the optimal expert policy. In most MDPs, the optimal policy is not stochastic and does not satisfy Assumption 1, so the optimal policy invariance proof seems to contradict Assumption 1.
Overall, it seems that even though this might become a good paper, it requires some improvements. I encourage the authors to address the reviewers’ comments as much as possible. | train | [
"B1eqTpmhsH",
"H1eg31viir",
"rkx4lgkTFB",
"rJlxxHfjsH",
"ByeEsarFoS",
"rkx02KCLoS",
"BJlxmvCUiH",
"BkxTDiALiS",
"B1g_QqCLjB",
"B1xT0uRIoS",
"ryeOBfijdB",
"rJgI4xBycr"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your reply, and we will continue giving responses for each concern you have.\n\nOur response generally includes: (1) Regarding the approach of regularized MDPs; (2) Regarding indicator function learning and choice of test environment.\n\n** Regarding the approach of regularized MDPs **\nWe agree that... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
1
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5
] | [
"H1eg31viir",
"BJlxmvCUiH",
"iclr_2020_BklRFpVKPH",
"B1g_QqCLjB",
"iclr_2020_BklRFpVKPH",
"rkx4lgkTFB",
"rJgI4xBycr",
"ryeOBfijdB",
"rkx4lgkTFB",
"rJgI4xBycr",
"iclr_2020_BklRFpVKPH",
"iclr_2020_BklRFpVKPH"
] |
iclr_2020_S1el9TEKPB | Sparsity Meets Robustness: Channel Pruning for the Feynman-Kac Formalism Principled Robust Deep Neural Nets | Deep neural nets (DNNs) compression is crucial for adaptation to mobile devices. Though many successful algorithms exist to compress naturally trained DNNs, developing efficient and stable compression algorithms for robustly trained DNNs remains widely open. In this paper, we focus on a co-design of efficient DNN compression algorithms and sparse neural architectures for robust and accurate deep learning. Such a co-design enables us to advance the goal of accommodating both sparsity and robustness. With this objective in mind, we leverage the relaxed augmented Lagrangian based algorithms to prune the weights of adversarially trained DNNs, at both structured and unstructured levels. Using a Feynman-Kac formalism principled robust and sparse DNNs, we can at least double the channel sparsity of the adversarially trained ResNet20 for CIFAR10 classification, meanwhile, improve the natural accuracy by 8.69\% and the robust accuracy under the benchmark 20 iterations of IFGSM attack by 5.42\%. | reject | The paper is rejected based on unanimous reviews. | train | [
"S1gNquq_sH",
"HygV-Fc_sr",
"r1lOpO9diB",
"rklrNd5diH",
"BkxlcbZ0KH",
"SJloP0M1cr",
"HklqNcvkqB"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your valuable feedback and thoughtful reviews. Below we address your concerns about our paper.\n\nQ1. The paper has limited novelty. It has already been shown in the original EnResNet paper [1] that EnResNet is more robust to adversarial attacks. Thus the only additional contribution of this paper is... | [
-1,
-1,
-1,
-1,
3,
3,
1
] | [
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"SJloP0M1cr",
"BkxlcbZ0KH",
"S1gNquq_sH",
"HklqNcvkqB",
"iclr_2020_S1el9TEKPB",
"iclr_2020_S1el9TEKPB",
"iclr_2020_S1el9TEKPB"
] |
iclr_2020_rkeZ9a4Fwr | Disentangling Improves VAEs' Robustness to Adversarial Attacks | This paper is concerned with the robustness of VAEs to adversarial attacks. We highlight that conventional VAEs are brittle under attack but that methods recently introduced for disentanglement such as β-TCVAE (Chen et al., 2018) improve robustness, as demonstrated through a variety of previously proposed adversarial attacks (Tabacof et al. (2016); Gondim-Ribeiro et al. (2018); Kos et al.(2018)). This motivated us to develop Seatbelt-VAE, a new hierarchical disentangled VAE that is designed to be significantly more robust to adversarial attacks than existing approaches, while retaining high quality reconstructions. | reject | This work proposes a "Seatbelt-VAE" algorithm to improve the robustness of VAEs against adversarial attacks. The proposed method is promising, but the paper appears to be hastily written and leaves many places to improve and clarify. This paper can be turned into an excellent paper with another round of thorough modification.
| train | [
"H1lvhhjHiB",
"SJxGc3orjS",
"BkgzrhjrsS",
"S1gFGniBoS",
"ryeBvosBir",
"r1eYj0gftS",
"SJxmteaQtS",
"BkelA7lkqr"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"(2/2)\n\n\"The authors claim that there are no clear classification tasks for the datasets -- but this is not accurate as both celeb-a has clear classification tasks in the form of predicting attributes.It would have been really quite informative if adversarial accuracy on downstream tasks would have been reported... | [
-1,
-1,
-1,
-1,
-1,
3,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
1,
5,
4
] | [
"SJxGc3orjS",
"BkelA7lkqr",
"S1gFGniBoS",
"r1eYj0gftS",
"SJxmteaQtS",
"iclr_2020_rkeZ9a4Fwr",
"iclr_2020_rkeZ9a4Fwr",
"iclr_2020_rkeZ9a4Fwr"
] |
iclr_2020_r1l-5pEtDr | AdaX: Adaptive Gradient Descent with Exponential Long Term Memory | Adaptive optimization algorithms such as RMSProp and Adam have fast convergence and smooth learning process. Despite their successes, they are proven to have non-convergence issue even in convex optimization problems as well as weak performance compared with the first order gradient methods such as stochastic gradient descent (SGD). Several other algorithms, for example AMSGrad and AdaShift, have been proposed to alleviate these issues but only minor effect has been observed. This paper further analyzes the performance of such algorithms in a non-convex setting by extending their non-convergence issue into a simple non-convex case and show that Adam's design of update steps would possibly lead the algorithm to local minimums. To address the above problems, we propose a novel adaptive gradient descent algorithm, named AdaX, which accumulates the long-term past gradient information exponentially. We prove the convergence of AdaX in both convex and non-convex settings. Extensive experiments show that AdaX outperforms Adam in various tasks of computer vision and natural language processing and can catch up with SGD.
| reject | This paper analyzes the non-convergence issue in Adam in a simple non-convex case. The authors propose a new adaptive gradient descent algorithm based on exponential long term memory, and analyze its convergence in both convex and non-convex settings. The major weakness of this paper pointed out by many reviewers is its experimental evaluation, ranging from experimental design to missing comparison with strong baseline algorithms. I agree with the reviewers’ evaluation and thus recommend reject. | train | [
"rkeCnYcy9S",
"BJluZCr19H",
"rkxMdxqNsr",
"SkxM4h9VoS",
"H1e6eJX-oH",
"B1ekF1OesH",
"BkenmE9a_r",
"BylElFr-cH"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposed a new adaptive gradient descent algorithm with exponential long term memory. The authors analyzed the non-convergence issue in Adam into a simple non-convex case. The authors also presented the convergence of the proposed AdaX in both convex and non-convex settings.\n\n- The proposed algorithm ... | [
3,
3,
-1,
-1,
-1,
-1,
1,
3
] | [
5,
5,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2020_r1l-5pEtDr",
"iclr_2020_r1l-5pEtDr",
"BJluZCr19H",
"BkenmE9a_r",
"rkeCnYcy9S",
"BylElFr-cH",
"iclr_2020_r1l-5pEtDr",
"iclr_2020_r1l-5pEtDr"
] |
iclr_2020_r1ezqaEFPr | Multi-Task Learning via Scale Aware Feature Pyramid Networks and Effective Joint Head | As a concise and classic framework for object detection and instance segmentation, Mask R-CNN achieves promising performance in both two tasks. However, considering stronger feature representation for Mask R-CNN fashion framework, there is room for improvement from two aspects. On the one hand, performing multi-task prediction needs more credible feature extraction and multi-scale features integration to handle objects with varied scales. In this paper, we address this problem by using a novel neck module called SA-FPN (Scale Aware Feature Pyramid Networks). With the enhanced feature representations, our model can accurately detect and segment the objects of multiple scales. On the other hand, in Mask R-CNN framework, isolation between parallel detection branch and instance segmentation branch exists, causing the gap between training and testing processes. To narrow this gap, we propose a unified head module named EJ-Head (Effective Joint Head) to combine two branches into one head, not only realizing the interaction between two tasks, but also enhancing the effectiveness of multi-task learning. Comprehensive experiments show that our proposed methods bring noticeable gains for object detection and instance segmentation. In particular, our model outperforms the original Mask R-CNN by 1~2 percent AP in both object detection and instance segmentation task on MS-COCO benchmark. Code will be available soon. | reject | All three reviewers gave scores of Weak Reject. Only a brief rebuttal was offered, which did not change the scores. Thus the paper cannot be accepted. | train | [
"SJl6sgAooB",
"S1lOqdUCFS",
"SygMU1Uy5B"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"I apologize for a mistake in the penultimate paragraph of my above review. Of course, p(b|features)p(m|b,features) = p(m, b | features) and thus it *is* joint prediction.\n\nI am sorry for this very basic mistake!",
"This paper works on the problem of improving object detection and instance segmentation. It is r... | [
-1,
3,
3
] | [
-1,
4,
1
] | [
"SygMU1Uy5B",
"iclr_2020_r1ezqaEFPr",
"iclr_2020_r1ezqaEFPr"
] |
iclr_2020_HJlQ96EtPr | FleXOR: Trainable Fractional Quantization | Parameter quantization is a popular model compression technique due to its regular form and high compression ratio. In particular, quantization based on binary codes is gaining attention because each quantized bit can be directly utilized for computations without dequantization using look-up tables. Previous attempts, however, only allow for integer numbers of quantization bits, which ends up restricting the search space for compression ratio and accuracy. Moreover, quantization bits are usually obtained by minimizing quantization loss in a local manner that does not directly correspond to minimizing the loss function. In this paper, we propose an encryption algorithm/architecture to compress quantized weights in order to achieve fractional numbers of bits per weight and new compression configurations further optimize accuracy/compression trade-offs. Decryption is implemented using XOR gates added into the neural network model and described as tanh(x), which enable gradient calculations superior to the straight-through gradient method. We perform experiments using MNIST, CIFAR-10, and ImageNet to show that inserting XOR gates learns quantization/encrypted bit decisions through training and obtains high accuracy even for fractional sub 1-bit weights. | reject | This work studies parameter quantization using binary codes and proposes an encryption algorithm/architecture to compress quantized weights and achieve fractional numbers of bits per weight, and to perform decryption using XOR gates. The authors conduct experiments on datasets including ImageNet to evaluate their scheme.
Much of the concern from reviewers relates to baseline comparison and details around that. Specifically, R1 believes that the submission could have a bigger impact if authors could conduct more thorough experiments, e.g. compressing more widely-used and challenging architecture of ResNet-50, or trying tasks such as image detection (Mask R-CNN). The authors' responded to that and mentioned their choice of the current experimental setting is to facilitate comparison with previous works (baselines), which use similar experimental settings. Nevertheless, the baseline methods could have been attempted by the authors on broader tasks, or more widely-used architectures could have been investigated by authors on the baseline methods. As a result, R1 was not convinced. To ensure the paper receives the attention it deserves, I recommend considering a more thorough evaluation of the proposed method against baseline methods. | train | [
"ByeYRoIciS",
"H1xBls0PjH",
"BJe1TuCwiH",
"Bkev8dAwoS",
"ryxHNSeUtS",
"rkelDbynFr",
"HJenC0h6FH"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"As our responses to reviewers' comments, we uploaded a revised manuscript with the following major changes.\n\n- We removed some redundant information in the paper, moved a few paragraphs and figures to Appendix, and added discussions according to the reviewers' suggestions. Now, the paper has full 8 pages.\n- We ... | [
-1,
-1,
-1,
-1,
3,
6,
3
] | [
-1,
-1,
-1,
-1,
4,
1,
3
] | [
"iclr_2020_HJlQ96EtPr",
"ryxHNSeUtS",
"HJenC0h6FH",
"rkelDbynFr",
"iclr_2020_HJlQ96EtPr",
"iclr_2020_HJlQ96EtPr",
"iclr_2020_HJlQ96EtPr"
] |
iclr_2020_Bkg75aVKDH | Training Provably Robust Models by Polyhedral Envelope Regularization | Training certifiable neural networks enables one to obtain models with robustness guarantees against adversarial attacks. In this work, we use a linear approximation to bound model’s output given an input adversarial budget. This allows us to bound the adversary-free region in the data neighborhood by a polyhedral envelope and yields finer-grained certified robustness than existing methods. We further exploit this certifier to introduce a framework called polyhedral envelope regularization (PER), which encourages larger polyhedral envelopes and thus improves the provable robustness of the models. We demonstrate the flexibility and effectiveness of our framework on standard benchmarks; it applies to networks with general activation functions and obtains comparable or better robustness guarantees than state-of-the-art methods, with very little cost in clean accuracy, i.e., without over-regularizing the model. | reject | The authors develop a new technique for training neural networks to be provably robust to adversarial attacks. The technique relies on constructing a polyhedral envelope on the feasible set of activations and using this to derive a lower bound on the maximum certified radius. By training with this as a regularizer, the authors are able to train neural networks that achieve strong provable robustness to adversarial attacks.
The paper makes a number of interesting contributions that the reviewers appreciated. However, two of the reviewers had some concerns with the significance of the contributions made:
1) The contributions of the paper are not clearly defined relative to prior work on bound propagation (Fast-Lin/KW/CROWN). In particular, the authors simply use the linear approximation derived in these prior works to obtain a bound on the radius to be certified. The authors claim faster convergence based on this, but this does not seem like a very significant contribution.
2) The improvements on the state of the art are marginal.
These were discussed in detail during the rebuttal phase and the two reviewers with concerns about the paper decided to maintain their score after reading the rebuttals, as the fundamental issues above were not resolved.
Given these concerns, I believe this paper is borderline - it has some interesting contributions, but the overall novelty on the technical side and strength of empirical results is not very high. | train | [
"rklQ05jzqH",
"SylBxgXdjH",
"r1lo3N5nsS",
"rkgE3Tl2jr",
"ryxjhbmusH",
"HJgEDWmdjB",
"HJxRMZXOor",
"Syx3S9paFr",
"SJgClQwRKH"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes an approach for computing more refined estimates of robustness in comparison w/ existing linear approximation approaches that only give a yes or no answer with regard to robustness guarantees for a given lp-norm ball with radius epsilon. The nice thing is that as the linear-approximations get be... | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
8,
3
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
"iclr_2020_Bkg75aVKDH",
"iclr_2020_Bkg75aVKDH",
"rkgE3Tl2jr",
"HJgEDWmdjB",
"Syx3S9paFr",
"SJgClQwRKH",
"rklQ05jzqH",
"iclr_2020_Bkg75aVKDH",
"iclr_2020_Bkg75aVKDH"
] |
iclr_2020_ryeN5aEYDH | Deep RL for Blood Glucose Control: Lessons, Challenges, and Opportunities | Individuals with type 1 diabetes (T1D) lack the ability to produce the insulin their bodies need. As a result, they must continually make decisions about how much insulin to self-administer in order to adequately control their blood glucose levels. Longitudinal data streams captured from wearables, like continuous glucose monitors, can help these individuals manage their health, but currently the majority of the decision burden remains on the user. To relieve this burden, researchers are working on closed-loop solutions that combine a continuous glucose monitor and an insulin pump with a control algorithm in an `artificial pancreas.' Such systems aim to estimate and deliver the appropriate amount of insulin. Here, we develop reinforcement learning (RL) techniques for automated blood glucose control. Through a series of experiments, we compare the performance of different deep RL approaches to non-RL approaches. We highlight the flexibility of RL approaches, demonstrating how they can adapt to new individuals with little additional data. On over 21k hours of simulated data across 30 patients, RL approaches outperform baseline control algorithms (increasing time spent in normal glucose range from 71% to 75%) without requiring meal announcements. Moreover, these approaches are adept at leveraging latent behavioral patterns (increasing time in range from 58% to 70%). This work demonstrates the potential of deep RL for controlling complex physiological systems with minimal expert knowledge. | reject | The reviewers all believe that this paper is not yet ready for publication. All agree that this is an important application, and an interesting approach. The methodological novelty, as well as other parts of exposition, involving related work, or further discussion of what this solution means for patients, is right now not completely convincing to reviewers. 
My recommendation is to work on making sure the exposition best explains the methodology, and making sure this venue is the best for the submitted line of work. | train | [
"BJekTXIjjr",
"rJeucQIjoH",
"rklwCzLsoS",
"S1lWUQLsor",
"SkeuvSR2FH",
"ryeNngAwYr"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewers for their useful feedback. Below, we respond to each reviewer, in turn. In addition, we have updated the paper to reflect their suggestions. We also include a separate post, ‘Relevance of the Application and Specific Contributions’ written to respond to issues raised by both reviewers.\n\n I... | [
-1,
-1,
-1,
-1,
3,
3
] | [
-1,
-1,
-1,
-1,
5,
3
] | [
"iclr_2020_ryeN5aEYDH",
"ryeNngAwYr",
"iclr_2020_ryeN5aEYDH",
"SkeuvSR2FH",
"iclr_2020_ryeN5aEYDH",
"iclr_2020_ryeN5aEYDH"
] |
iclr_2020_SklVqa4YwH | Realism Index: Interpolation in Generative Models With Arbitrary Prior | In order to perform plausible interpolations in the latent space of a generative model, we need a measure that credibly reflects if a point in an interpolation is close to the data manifold being modelled, i.e. if it is convincing. In this paper, we introduce a realism index of a point, which can be constructed from an arbitrary prior density, or based on FID score approach in case a prior is not available. We propose a numerically efficient algorithm that directly maximises the realism index of an interpolation which, as we theoretically prove, leads to a search of a geodesic with respect to the corresponding Riemann structure. We show that we obtain better interpolations then the classical linear ones, in particular when either the prior density is not convex shaped, or when the soap bubble effect appears. | reject | This paper introduces a realism metric for generated covariates and then leverage this metric to produce a novel method of interpolating between two real covariates. The reviewers found the method novel and were satisfied with the response form the authors to their concerns. However, Reviewer 4 did have reservations about the response to his/her points 3 and 4. Moreover, in the discussion period it was decided that while the method was well justified by intuition and theory, the empirical evaluation—which is the what matters at the end of the day—was unconvincing. | train | [
"Hye8ZAkRFH",
"rJxxdv43ir",
"rygUcygjiS",
"BJe-MngfiS",
"r1e_PhlGjH",
"B1lVssxfsH",
"SJltRSD29B",
"rJx8eoY6cH",
"BJemZ9lqcr"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"public"
] | [
"This paper introduced a linear interpolation method that could be applied to the latent space of a generative model. With their method, interpolating instances generated by those generative models all maintain high quality in terms of the realism index they proposed.\n\nThis paper first introduced the quantity re... | [
3,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1
] | [
1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1
] | [
"iclr_2020_SklVqa4YwH",
"iclr_2020_SklVqa4YwH",
"BJe-MngfiS",
"B1lVssxfsH",
"Hye8ZAkRFH",
"SJltRSD29B",
"iclr_2020_SklVqa4YwH",
"BJemZ9lqcr",
"iclr_2020_SklVqa4YwH"
] |
iclr_2020_rygHq6EFwr | GResNet: Graph Residual Network for Reviving Deep GNNs from Suspended Animation | The existing graph neural networks (GNNs) based on the spectral graph convolutional operator have been criticized for its performance degradation, which is especially common for the models with deep architectures. In this paper, we further identify the suspended animation problem with the existing GNNs. Such a problem happens when the model depth reaches the suspended animation limit, and the model will not respond to the training data any more and become not learnable. Analysis about the causes of the suspended animation problem with existing GNNs will be provided in this paper, whereas several other peripheral factors that will impact the problem will be reported as well. To resolve the problem, we introduce the GRESNET (Graph Residual Network) framework in this paper, which creates extensively connected highways to involve nodes’ raw features or intermediate representations throughout the graph for all the model layers. Different from the other learning settings, the extensive connections in the graph data will render the existing simple residual learning methods fail to work. We prove the effectiveness of the introduced new graph residual terms from the norm preservation perspective, which will help avoid dramatic changes to the node’s representations between sequential layers. Detailed studies about the GRESNET framework for many existing GNNs, including GCN, GAT and LOOPYNET, will be reported in the paper with extensive empirical experiments on real-world benchmark datasets. | reject | This paper studies the “suspended animation limit” of various graph neural networks (GNNs) and provides some theoretical analysis to explain its cause. To overcome the limitation, the authors propose Graph Residual Network (GRESNET) framework to involve nodes’ raw features or intermediate representations throughout the graph for all the model layers. 
The main concern of the reviewers is: the assumption made for theoretical analysis that the fully connected layer is an identity mapping is too stringent. The paper does not gather sufficient support from the reviewers to merit acceptance, even after author response and reviewer discussion. I thus recommend reject.
"B1evBVMx5r",
"r1lMWMi2oH",
"BylYv9r2sS",
"Hygru6r3jH",
"SylLxFsojr",
"BJevIIooiS",
"r1gJduBniS",
"rylppdijiB",
"S1lx7puZtB",
"HygOfsbjtH"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper studies the causes of the empirically poor performance in deep structures that plagues existing GNNs, and identify the suspended animation problem as the main issue. In analogy to the Residual CNN network, a residual graph network is proposed to address such issue. Moreover, the underlying Markov chain p... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
"iclr_2020_rygHq6EFwr",
"iclr_2020_rygHq6EFwr",
"S1lx7puZtB",
"S1lx7puZtB",
"HygOfsbjtH",
"B1evBVMx5r",
"S1lx7puZtB",
"HygOfsbjtH",
"iclr_2020_rygHq6EFwr",
"iclr_2020_rygHq6EFwr"
] |
iclr_2020_r1e8qpVKPS | Role of two learning rates in convergence of model-agnostic meta-learning | Model-agnostic meta-learning (MAML) is known as a powerful meta-learning method. However, MAML is notorious for being hard to train because of the existence of two learning rates. Therefore, in this paper, we derive the conditions that inner learning rate α and meta-learning rate β must satisfy for MAML to converge to minima with some simplifications. We find that the upper bound of β depends on α, in contrast to the case of using the normal gradient descent method. Moreover, we show that the threshold of β increases as α approaches its own upper bound. This result is verified by experiments on various few-shot tasks and architectures; specifically, we perform sinusoid regression and classification of Omniglot and MiniImagenet datasets with a multilayer perceptron and a convolutional neural network. Based on this outcome, we present a guideline for determining the learning rates: first, search for the largest possible α; next, tune β based on the chosen value of α. | reject | This paper theoretically and empirically studies the inner and outer learning rate of the MAML algorithm and their role in convergence. While the paper presents some interesting ideas and add to our theoretical understanding of meta-learning algorithms, the reviewers raised concerns about the relevance of the theory. Further the empirical study is somewhat preliminary and doesn't compare to prior works that also try to stabilize the MAML algorithm, further bringing into question its usefulness. As such, the current form of the paper doesn't meet the bar for ICLR. | train | [
"Bkx4feTiiS",
"HylESCaqjr",
"Skl4ix9UjH",
"rklkKZYLir",
"rJeiD-FUiB",
"SJeuGWtLjS",
"H1gkAeY8jH",
"rklWoeK8jS",
"SJloDQiSFB",
"rJx4PXPnFr",
"BygN9L1ycS"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Dear Reviewer 2,\n\nThank you for your polite and quick response. We thought we had derived the necessary and sufficient condition for the single-task case and a sufficient condition for the multi-task case. However, as you appropriately point out, (eigenvalues of Hessian) ≥ 0 is a necessary condition and (eigenv... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
1,
3
] | [
"HylESCaqjr",
"rklWoeK8jS",
"iclr_2020_r1e8qpVKPS",
"SJloDQiSFB",
"SJloDQiSFB",
"rJx4PXPnFr",
"BygN9L1ycS",
"BygN9L1ycS",
"iclr_2020_r1e8qpVKPS",
"iclr_2020_r1e8qpVKPS",
"iclr_2020_r1e8qpVKPS"
] |
iclr_2020_Syl89aNYwS | Robust saliency maps with distribution-preserving decoys | Saliency methods help to make deep neural network predictions more interpretable by identifying particular features, such as pixels in an image, that contribute most strongly to the network's prediction. Unfortunately, recent evidence suggests that many saliency methods perform poorly when gradients are saturated or in the presence of strong inter-feature dependence or noise injected by an adversarial attack. In this work, we propose a data-driven technique that uses the distribution-preserving decoys to infer robust saliency scores in conjunction with a pre-trained convolutional neural network classifier and any off-the-shelf saliency method. We formulate the generation of decoys as an optimization problem, potentially applicable to any convolutional network architecture. We also propose a novel decoy-enhanced saliency score, which provably compensates for gradient saturation and considers joint activation patterns of pixels in a single-layer convolutional neural network. Empirical results on the ImageNet data set using three different deep neural network architectures---VGGNet, AlexNet and ResNet---show both qualitatively and quantitatively that decoy-enhanced saliency scores outperform raw scores produced by three existing saliency methods. | reject | This submission proposes a method to explain deep vision models using saliency maps that are robust to certain input perturbations.
Strengths:
-The paper is clear and well-written.
-The approach is interesting.
Weaknesses:
-The motivation and formulation of the approach (e.g. coherence vs explanation and the use of decoys) was not convincing.
-The validation needs additional experiments and comparisons to recent works.
These weaknesses were not sufficiently addressed in the discussion phase. AC agrees with the majority recommendation to reject. | train | [
"BkgHTqDhiB",
"BkxFKSxniS",
"B1evTsRtsB",
"r1go5yyqiH",
"SyxsDRatsS",
"r1xwnPtItS",
"B1xzEKtJ9r",
"Bye7qbVUqH"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We would like to thank the reviewer for responding our response. Please allow us to make some clarifications below.\n\n1. We would like to highlight the most important contribution of the paper, the decoy-enhanced saliency score. Essentially, we derived a robust saliency measure to provably address the two key lim... | [
-1,
-1,
-1,
-1,
-1,
3,
3,
6
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"BkxFKSxniS",
"r1go5yyqiH",
"B1xzEKtJ9r",
"r1xwnPtItS",
"Bye7qbVUqH",
"iclr_2020_Syl89aNYwS",
"iclr_2020_Syl89aNYwS",
"iclr_2020_Syl89aNYwS"
] |
iclr_2020_B1xv9pEKDS | LightPAFF: A Two-Stage Distillation Framework for Pre-training and Fine-tuning | While pre-training and fine-tuning, e.g., BERT~\citep{devlin2018bert}, GPT-2~\citep{radford2019language}, have achieved great success in language understanding and generation tasks, the pre-trained models are usually too big for online deployment in terms of both memory cost and inference speed, which hinders them from practical online usage. In this paper, we propose LightPAFF, a Lightweight Pre-training And Fine-tuning Framework that leverages two-stage knowledge distillation to transfer knowledge from a big teacher model to a lightweight student model in both pre-training and fine-tuning stages. In this way the lightweight model can achieve similar accuracy as the big teacher model, but with much fewer parameters and thus faster online inference speed. LightPAFF can support different pre-training methods (such as BERT, GPT-2 and MASS~\citep{song2019mass}) and be applied to many downstream tasks. Experiments on three language understanding tasks, three language modeling tasks and three sequence to sequence generation tasks demonstrate that while achieving similar accuracy with the big BERT, GPT-2 and MASS models, LightPAFF reduces the model size by nearly 5x and improves online inference speed by 5x-7x. | reject | This paper proposes a two-stage distillation from pretrained language models, where the knowledge distillation happens in both the pre-training and the fine-tune stages. Experiments show improvement on BERT, GPT and MASS. All reviewers pointed that the novelty of the work is very limited. | train | [
"SkxmRuSTKH",
"Hkl6G1n5jS",
"SJxCUy39jB",
"Byg84RjqjH",
"S1lA8Ti9iS",
"SygGucIwYH",
"SylMM3oaYH"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"\nSummary: This work leverages knowledge distillation both in pre-training and fine-tuning stages to learn a more compact student model that approximates the performance of a teacher model. Extensive experiments with different knowledge distillation loss functions are conducted on a number of representative langua... | [
3,
-1,
-1,
-1,
-1,
6,
6
] | [
4,
-1,
-1,
-1,
-1,
1,
1
] | [
"iclr_2020_B1xv9pEKDS",
"SkxmRuSTKH",
"SylMM3oaYH",
"SygGucIwYH",
"iclr_2020_B1xv9pEKDS",
"iclr_2020_B1xv9pEKDS",
"iclr_2020_B1xv9pEKDS"
] |
iclr_2020_BygPq6VFvS | Enhancing Attention with Explicit Phrasal Alignments | The attention mechanism is an indispensable component of any state-of-the-art neural machine translation system. However, existing attention methods are often token-based and ignore the importance of phrasal alignments, which are the backbone of phrase-based statistical machine translation. We propose a novel phrase-based attention method to model n-grams of tokens as the basic attention entities, and design multi-headed phrasal attentions within the Transformer architecture to perform token-to-token and token-to-phrase mappings. Our approach yields improvements in English-German, English-Russian and English-French translation tasks on the standard WMT'14 test set. Furthermore, our phrasal attention method shows improvements on the one-billion-word language modeling benchmark.
| reject | This paper proposes a phrase-based attention method to model word n-grams (as opposed to single words) as the basic attention units. Multi-headed phrasal attentions are designed within the Transformer architecture to perform token-to-token and token-to-phrase mappings. Some improvements are shown in English-German, English-Russian and English-French translation tasks on the standard WMT'14 test set, and on the one-billion-word language modeling benchmark.
While the proposed approach is interesting and takes inspiration in the notion of phrases used in phrase-based machine translation, with some positive empirical results, the technical novelty of this paper is rather limited, and the experiments could be more solid. While it is understandable that lack of computational resources made it hard to experiment with larger models (e.g. Transformer-big), perhaps it would be interesting to try on language pairs with fewer resources (smaller datasets), where base models are more competitive. | train | [
"Byl2Sx7aYH",
"rkxr4LdpYH",
"BkxB9WY0KS"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes an extension of the attention module that explicitly incorporates phrase information. Using convolution, attention scores are obtained independently for each n-gram type, and then combined. Transformer models with the proposed phrase attention are evaluated on multiple translation tasks, as wel... | [
6,
3,
8
] | [
3,
5,
5
] | [
"iclr_2020_BygPq6VFvS",
"iclr_2020_BygPq6VFvS",
"iclr_2020_BygPq6VFvS"
] |
iclr_2020_ryedqa4FwS | MANAS: Multi-Agent Neural Architecture Search | The Neural Architecture Search (NAS) problem is typically formulated as a graph search problem where the goal is to learn the optimal operations over edges in order to maximize a graph-level global objective. Due to the large architecture parameter space, efficiency is a key bottleneck preventing NAS from its practical use. In this paper, we address the issue by framing NAS as a multi-agent problem where agents control a subset of the network and coordinate to reach optimal architectures. We provide two distinct lightweight implementations, with reduced memory requirements (1/8th of state-of-the-art), and performances above those of much more computationally expensive methods.
Theoretically, we demonstrate vanishing regrets of the form O(√T), with T being the total number of rounds.
Finally, aware that random search is an (often ignored) effective baseline we perform additional experiments on 3 alternative datasets and 2 network configurations, and achieve favorable results in comparison with this baseline and other competing methods. | reject | This paper introduces a NAS algorithm based on multi-agent optimization, treating each architecture choice as a bandit and using an adversarial bandit framework to address the non-stationarity of the system that results from the other bandits running in parallel.
Two reviewers ranked the paper as a weak accept and one ranked it as a weak reject. The rebuttal answered some questions, and based on this the reviewers kept their ratings. The discussion between reviewers and AC did not result in a consensus. The average score was below the acceptance threshold, but since it was close I read the paper in detail myself before deciding.
Here is my personal assessment:
"
Positives:
1. It is very nice to see some theory for NAS, as there isn't really any so far. The theory for MANAS itself does not appear to be very compelling, since it assumes that all but one bandit is fixed, i.e., that the problem is stationary, which it clearly isn't. But if I understand correctly, MANAS-LS does not have that problem. (It would be good if the authors could make these points more explicit in future versions.)
2. The absolute numbers for the experimental results on CIFAR-10 are strong.
3. I welcome the experiments on 3 additional datasets.
Negatives:
1. The paper crucially omits a comparison to random search with weight sharing (RandomNAS-WS) as introduced by Li & Talwalkar's paper "Random Search and Reproducibility for Neural Architecture Search" (https://arxiv.org/abs/1902.07638), on arXiv since February and published at UAI 2019. This method is basically MANAS without the update step, using a uniform random distribution at step 3 of the algorithm, and therefore would be the right baseline to see whether the bandits are actually learning anything. RandomNAS-WS has the same memory improvements over DARTS as MANAS, so this part is not new. Similarly, there is GDAS as another recent approach with the same low memory requirement: http://openaccess.thecvf.com/content_CVPR_2019/html/Dong_Searching_for_a_Robust_Neural_Architecture_in_Four_GPU_Hours_CVPR_2019_paper.html
This is my most important criticism.
2. I think there may be a typo somewhere concerning the runtimes of MANAS. It would be extremely surprising if MANAS truly takes 2.5 times longer when run with 20 cells and 500 epochs than when run with 8 cells and 50 epochs. It would make sense if MANAS gets 2.5 slower when just going from 8 to 20 cells, but when going from 50 to 500 epochs the cost should go up by another factor of 10. And the text states specifically that "for datasets other than ImageNet, we use 500 epochs during the search phase for architectures with 20 cells, 400 epochs for 14 cells, and 50 epochs for 8 cells". Therefore, I think either that text is wrong or MANAS got 10x more budget than DARTS.
3. Figure 2 shows that on Sport-8, MANAS actually does *significantly worse* when searching on 14 cells than on 8 cells (note the different scale of the y axis). It's also slightly better with 8 cells on MIT-67. I recommend that the authors discuss this in the text and offer some explanation, rather than have the text claim that 14 cells are better and the figure contradict this. Only for MANAS-LS, the 14-cell version actually works better.
4. The authors are unclear about whether they compare to random search or random sampling. These are two different approaches. Random sampling (as proposed by Sciuto et al, 2019) takes a single random architecture from the search space and compares to that. Standard random search iteratively samples N random architectures and evaluates them (usually on some proxy metric), selecting and retraining the best one found that way. The number N is chosen for random search to use the same computational resources as the method being compared. The authors call their method random search but then appear to be describing random sampling.
Also, with several recent papers showcasing problems in NAS evaluation (many design decisions affect NAS performance), it would be a big plus to have code available to ensure reproducibility. Many ICLR papers are submitted with an anonymized code repository, and if possible, I would encourage the authors to do this for a future version.
"
The prior rating based on the reviewers was slightly below the acceptance threshold, and my personal judgement did not push the paper above the acceptance threshold. I encourage the authors to improve the paper by addressing the reviewer's points and the points above and resubmit to a future venue. Overall, I believe this is very interesting work and am looking forward to a future version. | train | [
"rkgJ-2Pm5S",
"rJgSxhbDoH",
"SJe9p5-voH",
"ryl7Xc-DjH",
"HkxApt-DoS",
"SyeqLMkfqH",
"ryxNjXer5S"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"In this paper, the authors proposed MANAS, which is based on DARTS, by approximating the problem space by factorizing them into smaller spaces, which will be solved by multiple agents. The authors claimed that this can simplified the search space so that the joint search can be more efficient to enable us to searc... | [
6,
-1,
-1,
-1,
-1,
3,
6
] | [
3,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2020_ryedqa4FwS",
"iclr_2020_ryedqa4FwS",
"SyeqLMkfqH",
"rkgJ-2Pm5S",
"ryxNjXer5S",
"iclr_2020_ryedqa4FwS",
"iclr_2020_ryedqa4FwS"
] |
iclr_2020_Byx55pVKDB | How the Softmax Activation Hinders the Detection of Adversarial and Out-of-Distribution Examples in Neural Networks | Despite having excellent performances for a wide variety of tasks, modern neural networks are unable to provide a prediction with a reliable confidence estimate which would allow to detect misclassifications. This limitation is at the heart of what is known as an adversarial example, where the network provides a wrong prediction associated with a strong confidence to a slightly modified image. Moreover, this overconfidence issue has also been observed for out-of-distribution data. We show through several experiments that the softmax activation, usually placed as the last layer of modern neural networks, is partly responsible for this behaviour. We give qualitative insights about its impact on the MNIST dataset, showing that relevant information present in the logits is lost once the softmax function is applied. The same observation is made through quantitative analysis, as we show that two out-of-distribution and adversarial example detectors obtain competitive results when using logit values as inputs, but provide considerably lower performances if they use softmax probabilities instead: from 98.0% average AUROC to 56.8% in some settings. These results provide evidence that the softmax activation hinders the detection of adversarial and out-of-distribution examples, as it masks a significant part of the relevant information present in the logits. | reject |
The paper investigates how the softmax activation hinders the detection of out-of-distribution examples.
All the reviewers felt that the paper requires more work before it can be accepted. In particular, the reviewers raised several concerns about theoretical justification, comparison to other existing methods, discussion of connection to existing methods and scalability to larger number of classes.
I encourage the authors to revise the draft based on the reviewers’ feedback and resubmit to a different venue.
| val | [
"ByegKbDOsS",
"ByelL-vOiH",
"Hyx9mbDdiB",
"Syx2J82otS",
"HyemRBD8cB",
"Bylx9EfY5B"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your comment and feedback. We respectfully disagree that the experiments with the NN-based detector should be ignored in your evaluation. As the goal of this paper is not to compare differences in performances between methods, but to compare differences in performances when using logit vs softmax val... | [
-1,
-1,
-1,
1,
1,
3
] | [
-1,
-1,
-1,
5,
5,
5
] | [
"Syx2J82otS",
"HyemRBD8cB",
"Bylx9EfY5B",
"iclr_2020_Byx55pVKDB",
"iclr_2020_Byx55pVKDB",
"iclr_2020_Byx55pVKDB"
] |
iclr_2020_B1x996EKPS | Fast Machine Learning with Byzantine Workers and Servers | Machine Learning (ML) solutions are nowadays distributed and are prone to various types of component failures, which can be encompassed in so-called Byzantine behavior. This paper introduces LiuBei, a Byzantine-resilient ML algorithm that does not trust any individual component in the network (neither workers nor servers), nor does it induce additional communication rounds (on average), compared to standard non-Byzantine resilient algorithms. LiuBei builds upon gradient aggregation rules (GARs) to tolerate a minority of Byzantine workers. Besides, LiuBei replicates the parameter server on multiple machines instead of trusting it. We introduce a novel filtering mechanism that enables workers to filter out replies from Byzantine server replicas without requiring communication with all servers. Such a filtering mechanism is based on network synchrony, Lipschitz continuity of the loss function, and the GAR used to aggregate workers’ gradients. We also introduce a protocol, scatter/gather, to bound drifts between models on correct servers with a small number of communication messages. We theoretically prove that LiuBei achieves Byzantine resilience to both servers and workers and guarantees convergence. We build LiuBei using TensorFlow, and we show that LiuBei tolerates Byzantine behavior with an accuracy loss of around 5% and around 24% convergence overhead compared to vanilla TensorFlow. We moreover show that the throughput gain of LiuBei compared to another state–of–the–art Byzantine–resilient ML algorithm (that assumes network asynchrony) is 70%. | reject | This paper is concerned with learning in the context of so-called Byzantine failures. This is relevant, for example, for distributed computation of gradients of mini-batches and parameter updates. The paper introduces the concept of Byzantine servers and gives theoretical and practical results for an algorithm in this setting.
The reviewers had a hard time evaluating this paper and the AC was unable to find an expert reviewer. Still, the feedback from the reviewers painted a clear picture that the paper did not do enough to communicate the novel concepts used in the paper.
Rejection is recommended with a strong encouragement to use the feedback to improve the paper for the next conference. | train | [
"S1gnPiXQjH",
"Hklqi8B45S",
"H1e7kebjor",
"SygnRcQXsH",
"SkgAzo19jr",
"SJg4t_YKsH",
"SklS6sQXsH",
"r1lEfZxCKr",
"BkxhT9ISqB"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"“In general, I miss a more clear indication of how the individual contributions are different from other methods. it's not clear to me what the real novelty of the work is.”\n>> We would like to clarify the main contributions of this work. First, utilizing filtering techniques to tolerate Byzantine servers is nove... | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
6,
3
] | [
-1,
1,
-1,
-1,
-1,
-1,
-1,
1,
1
] | [
"Hklqi8B45S",
"iclr_2020_B1x996EKPS",
"SkgAzo19jr",
"BkxhT9ISqB",
"SklS6sQXsH",
"iclr_2020_B1x996EKPS",
"r1lEfZxCKr",
"iclr_2020_B1x996EKPS",
"iclr_2020_B1x996EKPS"
] |
iclr_2020_SJe9qT4YPr | RISE and DISE: Two Frameworks for Learning from Time Series with Missing Data | Time series with missing data constitute an important setting for machine learning. The most successful prior approaches for modeling such time series are based on recurrent neural networks that learn to impute unobserved values and then treat the imputed values as observed. We start by introducing Recursive Input and State Estimation (RISE), a general framework that encompasses such prior approaches as specific instances. Since RISE instances tend to suffer from poor long-term performance as errors are amplified in feedback loops, we propose Direct Input and State Estimation (DISE), a novel framework in which input and state representations are learned from observed data only. The key to DISE is to include time information in representation learning, which enables the direct modeling of arbitrary future time steps by effectively skipping over missing values, rather than imputing them, thus overcoming the error amplification encountered by RISE methods. We benchmark instances of both frameworks on two forecasting tasks, observing that DISE achieves state-of-the-art performance on both. | reject | The paper attacks the important problem of learning time series models with missing data and proposes two learning frameworks, RISE and DISE, for this problem. The reviewers had several concerns about the paper and experimental setup and agree that this paper is not yet ready for publication. Please pay careful attention to the reviewer comments and particularly address the comments related to experimental design, clarity, and references to prior work while editing the paper. | train | [
"B1lOtSHnoS",
"HJeuMBBhjS",
"HkgQaVHnir",
"BJxRerZEtH",
"Ske0bzVTYS",
"r1eYpXoJ9S"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for the comments.\n\nWe agree with the reviewer that the scope of the paper concerns RNN-based models. We will include pointers and discussions to some of the works that the reviewer suggests. One of the goals of the paper is to show that an important number of recent RNN-based methods to lea... | [
-1,
-1,
-1,
3,
1,
3
] | [
-1,
-1,
-1,
4,
5,
3
] | [
"BJxRerZEtH",
"Ske0bzVTYS",
"r1eYpXoJ9S",
"iclr_2020_SJe9qT4YPr",
"iclr_2020_SJe9qT4YPr",
"iclr_2020_SJe9qT4YPr"
] |
iclr_2020_Bkx29TVFPr | An implicit function learning approach for parametric modal regression | For multi-valued functions---such as when the conditional distribution on targets given the inputs is multi-modal---standard regression approaches are not always desirable because they provide the conditional mean. Modal regression approaches aim to instead find the conditional mode, but are restricted to nonparametric approaches. Such approaches can be difficult to scale, and make it difficult to benefit from parametric function approximation, like neural networks, which can learn complex relationships between inputs and targets. In this work, we propose a parametric modal regression algorithm, by using the implicit function theorem to develop an objective for learning a joint parameterized function over inputs and targets. We empirically demonstrate on several synthetic problems that our method (i) can learn multi-valued functions and produce the conditional modes, (ii) scales well to high-dimensional inputs and (iii) is even more effective for certain unimodal problems, particularly for high frequency data where the joint function over inputs and targets can better capture the complex relationship between them. We conclude by showing that our method provides small improvements on two regression datasets that have asymmetric distributions over the targets. | reject | The paper proposes an implicit function approach to learning the modes of multimodal regression. The basic idea is interesting, and is clearly related to density estimation, which the paper does not discuss.
Based on the reviews and the fact that the authors did not submit a helpful rebuttal, I recommend rejection. | test | [
"B1lVl4iioB",
"rylHeo5oiH",
"Hkec0q9osH",
"HJlJbLHVtB",
"HklWwuchKB",
"Sye133lTKS"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for reading our paper carefully. We will take into account all of your suggestions/questions/comments in the next version. We apologize that we do not have sufficient time to respond to all of your questions. \n\nWhen doing prediction: yes, we conducted experiments with/without the partial derivative term.... | [
-1,
-1,
-1,
1,
3,
6
] | [
-1,
-1,
-1,
1,
1,
3
] | [
"Sye133lTKS",
"HJlJbLHVtB",
"HklWwuchKB",
"iclr_2020_Bkx29TVFPr",
"iclr_2020_Bkx29TVFPr",
"iclr_2020_Bkx29TVFPr"
] |
iclr_2020_BJepcaEtwB | Meta-Graph: Few shot Link Prediction via Meta Learning | We consider the task of few shot link prediction, where the goal is to predict missing edges across multiple graphs using only a small sample of known edges. We show that current link prediction methods are generally ill-equipped to handle this task---as they cannot effectively transfer knowledge between graphs in a multi-graph setting and are unable to effectively learn from very sparse data. To address this challenge, we introduce a new gradient-based meta learning framework, Meta-Graph, that leverages higher-order gradients along with a learned graph signature function that conditionally generates a graph neural network initialization. Using a novel set of few shot link prediction benchmarks, we show that Meta-Graph enables not only fast adaptation but also better final convergence and can effectively learn using only a small sample of true edges. | reject | This paper presents a new link prediction framework for settings with a small number of labels, using meta-learning methods. The reviewers think the problem is important, and the proposed approach is a modification of meta-learning for this case. However, the method is not compared to other knowledge graph completion methods such as TransE, RotatE, and Neural Tensor Factorization on benchmark datasets such as FB15k and Freebase. Adding these comparisons would make the paper more convincing. | train | [
"BylAkbkAtS",
"rJgCs-XqjS",
"r1ewLZ79iH",
"SygKZWmqoS",
"ryliCgmqoS",
"BklcpPdaKH",
"Sye_KYdRFB",
"Bkg2LZlYqr",
"HyeL_CU_5B"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"Overview: In this paper, a meta-learning approach is proposed to perform link prediction across multi-graphs with scarce data. To do so, each graph is treated as a link prediction \"task\". Different from the tasks in conventional meta-learning, the graphs here are generally non i.i.d. Based on the variational gra... | [
6,
-1,
-1,
-1,
-1,
6,
6,
-1,
-1
] | [
3,
-1,
-1,
-1,
-1,
3,
5,
-1,
-1
] | [
"iclr_2020_BJepcaEtwB",
"BklcpPdaKH",
"BylAkbkAtS",
"ryliCgmqoS",
"Sye_KYdRFB",
"iclr_2020_BJepcaEtwB",
"iclr_2020_BJepcaEtwB",
"HyeL_CU_5B",
"iclr_2020_BJepcaEtwB"
] |
iclr_2020_r1xa9TVFvH | NeuralUCB: Contextual Bandits with Neural Network-Based Exploration | We study the stochastic contextual bandit problem, where the reward is generated from an unknown bounded function with additive noise. We propose the NeuralUCB algorithm, which leverages the representation power of deep neural networks and uses the neural network-based random feature mapping to construct an upper confidence bound (UCB) of reward for efficient exploration. We prove that, under mild assumptions, NeuralUCB achieves an O~(√T) regret bound, where T is the number of rounds. To the best of our knowledge, our algorithm is the first neural network-based contextual bandit algorithm with a near-optimal regret guarantee. Preliminary experiment results on synthetic data corroborate our theory, and shed light on potential applications of our algorithm to real-world problems. | reject | As the reviewers have pointed out and the authors have confirmed, the original version of this paper was not a significant leap beyond combining recent understanding of Neural Tangent Kernels and previous techniques for kernelized bandits. In a revision, the authors updated their draft to allow the point around which gradients are centered, theta_0, to now equal theta_t. This seems like a more reasonable algorithm and it is satisfying that the authors were able to maintain their regret bound for this dynamic setting. However, the revision is substantial and it seems unreasonable to expect reviewers to read the revised results in detail--the reviewers also felt it may be unfair to other ICLR submissions. All reviewers believe the paper has introduced valuable contributions to the area but should undergo a full review process at a future venue. A reviewer would also like to see a comparison to Kernel UCB run on the true NTK (or a good approximation thereof). | train | [
"SkgraylCYr",
"S1lfDtjXqr",
"BJxKcgi2sr",
"HkgkIJj2sB",
"H1l9psc2oH",
"HkgY5MthoS",
"rJeGYZEDjH",
"B1lvbPHXsB",
"rJeeYcLztH",
"HJg_9gTMiB",
"rJg7MgTfor",
"HJgdUk6Msr",
"r1eVXJpMjH",
"BJgJMB2U_S",
"ByeFendB_S"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"public",
"public"
] | [
"This paper proposes to use the Neural Tangent Kernel (NTK) with the Upper Confidence Bound for stochastic contextual bandits. \n- The paper instantiates Kernel UCB (Valko, 2013) with the NTK and the novelty is limited from a theoretical point of view. \n- There is no experimental comparison with Neural Linear or K... | [
3,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1
] | [
5,
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2020_r1xa9TVFvH",
"iclr_2020_r1xa9TVFvH",
"HkgkIJj2sB",
"H1l9psc2oH",
"HkgY5MthoS",
"HJgdUk6Msr",
"B1lvbPHXsB",
"HJg_9gTMiB",
"iclr_2020_r1xa9TVFvH",
"rJeeYcLztH",
"SkgraylCYr",
"S1lfDtjXqr",
"iclr_2020_r1xa9TVFvH",
"iclr_2020_r1xa9TVFvH",
"iclr_2020_r1xa9TVFvH"
] |
iclr_2020_H1ep5TNKwr | Hebbian Graph Embeddings | Representation learning has recently been successfully used to create vector representations of entities in language learning, recommender systems and in similarity learning. Graph embeddings exploit the locality structure of a graph and generate embeddings for nodes which could be words in a language, products on a retail website; and the nodes are connected based on a context window. In this paper, we consider graph embeddings with an error-free associative learning update rule, which models the embedding vector of a node as a non-convex Gaussian mixture of the embeddings of the nodes in its immediate vicinity with some constant variance that is reduced as iterations progress. It is very easy to parallelize our algorithm without any form of shared memory, which makes it possible to use it on very large graphs with a much higher dimensionality of the embeddings. We study the efficacy of the proposed method on several benchmark data sets in Goyal & Ferrara (2018b) and favorably compare with state-of-the-art methods. Further, the proposed method is applied to generate relevant recommendations for a large retailer. | reject | The paper learns an embedding on the nodes of the graph, iteratively aligning the vector associated with a node with that of its neighbor nodes (based on the Hebbian rule).
The reviews state that the approach is interesting though very natural/straightforward, and that it might go too far to call it "Hebbian" (Rev#2) - you might also want to see it as a Self-Organizing Map for graphs.
A main criticism was about the comparison with the state of the art (all reviewers). The authors did add empirical comparisons with the suggested VGAE and SEAL, and phrase it nicely as "our algorithm outperforms SEAL on one out of four data sets". Looking at the revised paper, this is true: the approach is outperformed by SEAL on 3 out of 4 datasets.
Another criticism regards the insufficient analysis of the results (e.g. through visualization, studying the clusters obtained along different runs, etc).
This aspect is not addressed in the revised version.
An excellent point is the scalability of the approach, which is worth emphasizing.
I thus encourage the authors to rewrite and polish the paper, improving the positioning of the proposed approach w.r.t. the state of the art, and providing a more thorough analysis of the results.
| train | [
"rJeTak3J5r",
"SJgTZcZIjH",
"H1ehHMbriH",
"ryxXOSTliH",
"H1e6HBTgsB",
"rylmfSpgoB",
"B1eMMi63FB",
"BJg_uTZ15B"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thanks to the authors for their response. There are still significant issues with motivation, writing, and baseline comparisons (the latter noted by R3). I would encourage the authors to continue to polish and investigate their method and submit to a future conference. \n\n=====\n\nThis paper proposes an approach ... | [
1,
-1,
-1,
-1,
-1,
-1,
1,
1
] | [
1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2020_H1ep5TNKwr",
"H1ehHMbriH",
"H1e6HBTgsB",
"B1eMMi63FB",
"BJg_uTZ15B",
"rJeTak3J5r",
"iclr_2020_H1ep5TNKwr",
"iclr_2020_H1ep5TNKwr"
] |
iclr_2020_S1xCcpNYPr | Cost-Effective Testing of a Deep Learning Model through Input Reduction | With the increasing adoption of Deep Learning (DL) models in various applications, testing DL models is vitally important. However, testing DL models is costly and expensive, especially when developers explore alternative designs of DL models and tune the hyperparameters. To reduce testing cost, we propose to use only a selected subset of testing data, which is small but representative enough for quick estimation of the performance of DL models. Our approach, called DeepReduce, adopts a two-phase strategy. At first, our approach selects testing data for the purpose of satisfying testing adequacy. Then, it selects more testing data in order to approximate the distribution between the whole testing data and the selected data leveraging relative entropy minimization.
Experiments with various DL models and datasets show that our approach can reduce the whole testing data to 4.6% on average, and can reliably estimate the performance of DL models. Our approach significantly outperforms the random approach, and is more stable and reliable than the state-of-the-art approach. | reject | This paper presents a method which creates a representative subset of testing examples so that the model can be tested quickly during the training. The procedure makes use of the famous HGS selection algorithm which identifies and then eliminates the redundant and obsolete test cases based on two criteria: (1) structural coverage as measured by the number of neurons activated beyond a certain threshold, and (2) distribution mismatch (as measured by KL divergence) of the last layer activations. The algorithm has two phases: (1) a greedy subset selection based on the coverage, and (2) an iterative phase where additional test examples are added until the KL divergence (as defined above) falls below some threshold.
This approach is incremental in nature -- the resulting multi-objective optimisation problem is not a significant improvement over BOT. After the discussion phase, we believe that the advantages over BOT were not clearly demonstrated and that the main drawback of BOT (requiring the number of samples) is not hindering practical applications. Finally, the empirical evaluation is performed on very small data sets and I do not see an efficient way to apply it to larger data sets where this reduction could be significant. Hence, I will recommend the rejection of this paper. To merit acceptance to ICLR the authors need to provide a cleaner presentation (especially of the algorithms), with a focus on the incremental improvements over BOT, an empirical analysis on larger datasets, and a detailed look into the computational aspects of the proposed approach.
| train | [
"SklONH5IoH",
"SJlCuBcIoS",
"HJlcLH98iS",
"BJl6tE9IoB",
"Bylyz7Jstr",
"HkgaYdV6Fr",
"HkxAIib0Fr"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"(continued reponses)\n\n3. ”Furthermore, the result does not present a strong success: the error of output distribution is much worse than the compared work.”\nResponse: Our approach is significantly better than the compared work and presents a strong success. Our reasons are as follows:\n(1) The compared work is... | [
-1,
-1,
-1,
-1,
8,
6,
3
] | [
-1,
-1,
-1,
-1,
3,
1,
4
] | [
"BJl6tE9IoB",
"Bylyz7Jstr",
"HkgaYdV6Fr",
"HkxAIib0Fr",
"iclr_2020_S1xCcpNYPr",
"iclr_2020_S1xCcpNYPr",
"iclr_2020_S1xCcpNYPr"
] |
iclr_2020_SyxC9TEtPH | Conditional Invertible Neural Networks for Guided Image Generation | In this work, we address the task of natural image generation guided by a conditioning input. We introduce a new architecture called conditional invertible neural network (cINN). It combines the purely generative INN model with an unconstrained feed-forward network, which efficiently pre-processes the conditioning input into useful features. All parameters of a cINN are jointly optimized with a stable, maximum likelihood-based training procedure. Even though INNs and other normalizing flow models have received very little attention in the literature in contrast to GANs, we find that cINNs can achieve comparable quality, with some remarkable properties absent in cGANs, e.g. apparent immunity to mode collapse. We demonstrate these properties for the tasks of MNIST digit generation and image colorization. Furthermore, we take advantage of our bidirectional cINN architecture to explore and manipulate emergent properties of the latent space, such as changing the image style in an intuitive way. | reject | The paper presents an extension of flow-based invertible generative models to a conditional setting. The key idea is a fairly simple modification of the original architecture, but the authors also propose techniques for down-sampling with Haar wavelets. The experimental results on class-conditional MNIST generation and colorization are promising. However, in terms of weaknesses, the technical novelty seems somewhat limited, although it's a reasonable extension. In addition, the experimental results lack evaluation on general conditional image generation tasks with more widely used benchmarks (e.g., class-conditional generation settings for real images, such as CIFAR and ImageNet; attribute-conditional or image-to-image translation settings; etc.). In other words, colorization seems like a niche task. The baselines compared are not the strongest models. For example, the diversity of
cGANs can be significantly improved by simple plug-in modifications (e.g., DSGAN) to any existing GAN architectures, and those methods were demonstrated on broader benchmarks. So I view the experimental validation as somewhat limited in scope and significance. While this work presents a reasonable extension of conditional invertible generative models with promising results, I believe that more work needs to be done for it to be publishable at a top-tier conference.
Diversity-Sensitive Conditional Generative Adversarial Networks
https://arxiv.org/abs/1901.09024
Mode Seeking Generative Adversarial Networks for Diverse Image Synthesis
https://arxiv.org/abs/1903.05628
* exactly the same idea as DSGAN above.
| train | [
"r1xHGWLRKr",
"HyxHRDIKFS",
"HkxyZsriir",
"SJlIPcHojS",
"Bkg9VcrojH",
"Hylwx9BisS",
"H1xjIDIp5B",
"Hygm93nTqH"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper presents an invertible generative network, for conditional image generation. The model is an extension of Real NVP with a conditioning component. Experiments are performed for image generation on two tasks: class conditional generation on MNIST and image colorization conditioned on a grey scale image (l... | [
6,
6,
-1,
-1,
-1,
-1,
3,
8
] | [
3,
4,
-1,
-1,
-1,
-1,
3,
5
] | [
"iclr_2020_SyxC9TEtPH",
"iclr_2020_SyxC9TEtPH",
"HyxHRDIKFS",
"r1xHGWLRKr",
"H1xjIDIp5B",
"Hygm93nTqH",
"iclr_2020_SyxC9TEtPH",
"iclr_2020_SyxC9TEtPH"
] |
iclr_2020_BJlyi64FvB | Wider Networks Learn Better Features | Transferability of learned features between tasks can massively reduce the cost of training a neural network on a novel task. We investigate the effect of network width on learned features using activation atlases --- a visualization technique that captures features the entire hidden state responds to, as opposed to individual neurons alone. We find that, while individual neurons do not learn interpretable features in wide networks, groups of neurons do. In addition, the hidden state of a wide network contains more information about the inputs than that of a narrow network trained to the same test accuracy. Inspired by this observation, we show that when fine-tuning the last layer of a network on a new task, performance improves significantly as the width of the network is increased, even though test accuracy on the original task is independent of width. | reject | This paper investigated the effect of network width on learned features using activation atlases. From the current view of deep learning, the novelty of the paper is limited.
As all reviewers recommended rejection and the authors did not submit a rebuttal, I choose to reject the paper.
| test | [
"HkgkHmv5FS",
"BklafWnaFB",
"Hyl9rKAatB"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper considers the effect of network width of the neural network and its ability to capture various intricate features of the data. In particular, the central claim of this paper is what the title claims \"Wider networks learn features that are better\". They make this claim using the visualization technique... | [
3,
1,
3
] | [
4,
4,
3
] | [
"iclr_2020_BJlyi64FvB",
"iclr_2020_BJlyi64FvB",
"iclr_2020_BJlyi64FvB"
] |
iclr_2020_H1lyiaVFwB | DUAL ADVERSARIAL MODEL FOR GENERATING 3D POINT CLOUD | Three-dimensional data, such as point clouds, are often composed of three coordinates with few features. In view of this, it is hard for common neural networks to learn and represent the characteristics directly. In this paper, we focus on the latent space’s representation of data characteristics, and introduce a novel generative framework based on AutoEncoder (AE) and Generative Adversarial Network (GAN) with an extra well-designed loss. We embed this framework directly into the raw 3D-GAN, and experiments demonstrate the potential of the framework with regard to improving the performance on the public dataset compared with other point cloud generation models proposed in recent years. It even achieves state-of-the-art performance. We also perform experiments on MNIST and exhibit an excellent result on a 2D dataset. | reject | The paper introduces a new method for 3D point cloud generation based upon autoencoders and GANs.
Two reviewers voted for accept and one reviewer for outright reject. Both authors and reviewers posted thorough responses. Based upon these, it is judged best not to accept the paper at present. The authors should take the feedback into account in an updated version of the paper.
Rejection is recommended. | train | [
"Hkx0sO53jB",
"B1gFP_9hir",
"HklZpwcnor",
"ByluwvqhiS",
"B1xtcdeTFH",
"SkxxhWRZqS",
"HJl0wPBLqr"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Q1(1). The problem of the point cloud generation of Achlioptas et al., (2018) is it can't generate arbitrarily many number of points. This issue has been discussed and addressed in those works ...\n\nA: No, we don’t think so. We also noticed that the method in Achlioptas et al., (2018) cannot generate arbitrarily ... | [
-1,
-1,
-1,
-1,
1,
6,
6
] | [
-1,
-1,
-1,
-1,
4,
3,
5
] | [
"B1gFP_9hir",
"B1xtcdeTFH",
"SkxxhWRZqS",
"HJl0wPBLqr",
"iclr_2020_H1lyiaVFwB",
"iclr_2020_H1lyiaVFwB",
"iclr_2020_H1lyiaVFwB"
] |
iclr_2020_Skgeip4FPr | Neural networks are a priori biased towards Boolean functions with low entropy | Understanding the inductive bias of neural networks is critical to explaining their ability to generalise. Here,
for one of the simplest neural networks -- a single-layer perceptron with n input neurons, one output neuron, and no threshold bias term -- we prove that upon random initialisation of weights, the a priori probability P(t) that it represents a Boolean function that classifies t points in {0,1}^n as 1 has a remarkably simple form: P(t) = 2^{-n} for 0 ≤ t < 2^n.
Since a perceptron can express far fewer Boolean functions with small or large values of t (low "entropy") than with intermediate values of t (high "entropy") there is, on average, a strong intrinsic a-priori bias towards individual functions with low entropy. Furthermore, within a class of functions with fixed t, we often observe a further intrinsic bias towards functions of lower complexity.
Finally, we prove that, regardless of the distribution of inputs, the bias towards low entropy becomes monotonically stronger upon adding ReLU layers, and empirically show that increasing the variance of the bias term has a similar effect. | reject | This article studies the inductive bias in a simple binary perceptron without bias, showing that if the weight vector has a symmetric distribution, then the cardinality of the support of the represented function is uniform on 0,...,2^n-1. Since the number of possible functions with support of extreme cardinality values is smaller, the result is interpreted as a bias towards such functions. Further results and experiments are presented. The reviewers found this work interesting and mentioned that it contributes to the understanding of neural networks. However, they also expressed concerns about the contribution relying crucially on 0/1 variables, and that for example with -1/1 the effect would disappear, implying that the result might not be capturing a significant aspect of neural networks. Another concern was whether the results could be generalised to other architectures. The authors agreed that this is indeed a crucial part of the analysis, and for the moment pointed at empirical evidence for the appearance of this effect in other cases. The reviewers also mentioned that the motivation was not very clear, that some of the derivations were difficult to follow (with many results presented in the appendix), and that the interpretation and implications were not sufficiently discussed (in particular, in relation to generalization, missing a more detailed discussion of training). This is a good contribution and the revision made important improvements on the points mentioned above, but not quite reaching the bar. | train | [
"SkgOxZDssr",
"HklU21DjiB",
"ryl2Kkwjir",
"SketZywssB",
"BJevQEK7jr",
"rJxN9w_TYr",
"HyxddGNAtH"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the referee for a careful reading of our paper and for the positive assessments.\n\nThe referee makes one major critique: \"The conclusion that study of such initializations can help understand the generalization power is not convincing. Despite that neural networks at initializations are biased towards l... | [
-1,
-1,
-1,
-1,
3,
6,
6
] | [
-1,
-1,
-1,
-1,
1,
1,
3
] | [
"HyxddGNAtH",
"BJevQEK7jr",
"BJevQEK7jr",
"rJxN9w_TYr",
"iclr_2020_Skgeip4FPr",
"iclr_2020_Skgeip4FPr",
"iclr_2020_Skgeip4FPr"
] |
iclr_2020_BJlbo6VtDH | A Generalized Framework of Sequence Generation with Application to Undirected Sequence Models | Undirected neural sequence models such as BERT (Devlin et al., 2019) have received renewed interest due to their success on discriminative natural language understanding tasks such as question-answering and natural language inference.
The problem of generating sequences directly from these models has received relatively little attention, in part because generating from such models departs significantly from the conventional approach of monotonic generation in directed sequence models. We investigate this problem by first proposing a generalized model of sequence generation that unifies decoding in directed and undirected models. The proposed framework models the process of generation rather than a resulting sequence, and under this framework, we derive various neural sequence models as special cases, such as autoregressive, semi-autoregressive, and refinement-based non-autoregressive models. This unification enables us to adapt decoding algorithms originally developed for directed sequence models to undirected models. We demonstrate this by evaluating various decoding strategies for a cross-lingual masked translation model (Lample and Conneau, 2019). Our experiments show that generation from undirected sequence models, under our framework, is competitive with the state of the art on WMT'14 English-German translation. We also demonstrate that the proposed approach enables constant-time translation with similar performance to linear-time translation from the same model by rescoring hypotheses with an autoregressive model. | reject | This paper proposes a generalized way to generate sequences from undirected sequence models.
Overall, I believe a framework like this could definitely be a valuable contribution, but as Reviewer 1 and Reviewer 3 noted, the paper is a bit lacking both in theoretical analysis and strong empirical results. I don't think that this is a bad paper at all, but it feels like the paper needs a little bit of an extra push to tighten up the argumentation and/or results before warranting publication at a premier venue such as ICLR. I'd suggest the authors continue to improve the paper and aim to re-submit a revised version at a future conference. | val | [
"ByeiMsb2sH",
"r1g_yDrjiB",
"SkxZ8anSjH",
"HklJJ62SoB",
"rJlX9i3Ssr",
"BklHgEkAFB",
"B1gITGseqB",
"ryeBboGP9B"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for following up on the review! \n\nWhat is your definition of general purpose decoder ? We are hoping to clarify it in order to understand better how decoding time constraints fit in your definition of general purpose decoder.\n\nWith a learnable position selection mechanisms, we would imagine that under d... | [
-1,
-1,
-1,
-1,
-1,
3,
3,
6
] | [
-1,
-1,
-1,
-1,
-1,
5,
4,
3
] | [
"r1g_yDrjiB",
"rJlX9i3Ssr",
"BklHgEkAFB",
"ryeBboGP9B",
"B1gITGseqB",
"iclr_2020_BJlbo6VtDH",
"iclr_2020_BJlbo6VtDH",
"iclr_2020_BJlbo6VtDH"
] |
iclr_2020_BkxzsT4Yvr | Deep Gradient Boosting -- Layer-wise Input Normalization of Neural Networks | Stochastic gradient descent (SGD) has been the dominant optimization method for training deep neural networks due to its many desirable properties. One of the more remarkable and least understood qualities of SGD is that it generalizes relatively well
on unseen data even when the neural network has millions of parameters. We hypothesize that in certain cases it is desirable to relax its intrinsic generalization properties and introduce an extension of SGD called deep gradient boosting (DGB). The key idea of DGB is that back-propagated gradients inferred using the chain rule can be viewed as pseudo-residual targets of a gradient boosting problem. Thus at each layer of a neural network the weight update is calculated by solving the corresponding boosting problem using a linear base learner. The resulting weight update formula can also be viewed as a normalization procedure of the data that arrives at each layer during the forward pass. When implemented as a separate input normalization layer (INN), the new architecture shows improved performance on image recognition tasks when compared to the same architecture without normalization layers. As opposed to batch normalization (BN), INN has no learnable parameters; however, it matches its performance on CIFAR10 and ImageNet classification tasks. | reject | The paper introduces a neat idea that an SGD update can be written as a solution of the linear least squares problem with a given backpropagated output; this is generalized to a larger batch size, giving a sort of "block" gradient-type update. The paper notes that the columns of $O_t$ have to be scaled, but it is not clear why. The paper then goes into the experiments, and then gets back to the fast approximation of DGB. It really looks like bad organization of the paper, which was noted.
The reviewers agree that the actual computational improvements are marginal, and all recommend rejection. As a recommendation, I would suggest restructuring the paper for a more coherent view; also, the improvements in Top-1 are not very stimulating. The general view is interesting, but it is not clear what insight it brings.
"H1g4s6YniS",
"HJlc8kjnor",
"r1lwBV9hsS",
"rkgI6MHnKB",
"H1eGyKJTKr",
"SkeAqkjTFS",
"S1gMEunhdr",
"SkgO36cDuS"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"We thank the reviewer for the comments.\n\nRegarding the MNIST results, we purposely used a dense network that is known to overfit on image classification problems in order to show the regularization effects of the alpha parameter. One can observe how for both left and right formulations the test set accuracies im... | [
-1,
-1,
-1,
3,
3,
3,
-1,
-1
] | [
-1,
-1,
-1,
5,
3,
3,
-1,
-1
] | [
"SkeAqkjTFS",
"rkgI6MHnKB",
"H1eGyKJTKr",
"iclr_2020_BkxzsT4Yvr",
"iclr_2020_BkxzsT4Yvr",
"iclr_2020_BkxzsT4Yvr",
"SkgO36cDuS",
"iclr_2020_BkxzsT4Yvr"
] |
iclr_2020_r1gzoaNtvr | Emergence of Compositional Language with Deep Generational Transmission | Recent work has studied the emergence of language among deep reinforcement learning agents that must collaborate to solve a task. Of particular interest are the factors that cause language to be compositional---i.e., express meaning by combining words which themselves have meaning. Evolutionary linguists have found that in addition to structural priors like those already studied in deep learning, the dynamics of transmitting language from generation to generation contribute significantly to the emergence of compositionality. In this paper, we introduce these cultural evolutionary dynamics into language emergence by periodically replacing agents in a population to create a knowledge gap, implicitly inducing cultural transmission of language. We show that this implicit cultural transmission encourages the resulting languages to exhibit better compositional generalization. | reject | This paper explores the emergence of language in environments that demand agents communicate, focusing on the compositionality of language, and the cultural transmission of language.
Reviewer 1 has several suggestions about new experiments that are possible. The AC does think there is value in many of the suggested experiments, if not to run, then just to acknowledge their possibility and leave for future work. The reviewers also point to some previous work that is very similar. E.g. "Ease-of-Teaching and Language Structure from Emergent Communication", Fushan Li et al. | train | [
"BJgwl453sS",
"HJe3VbL3sS",
"SkgQjoTV9S",
"Bkg40O5jiB",
"SJl3j-ScjB",
"BklQy3N5iH",
"HkeqAfN9sr",
"BJeUQfre5B",
"rkx4Qvul9B"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"__Why 8 generations:__\nWe agree that additional temporal analysis would strengthen the paper. In our initial experiments we focused on the Overcomplete setting and found that agent populations tended to converge by 8 generations. We've now performed some additional experiments that analyze test accuracy over time... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
1,
6
] | [
-1,
-1,
5,
-1,
-1,
-1,
-1,
5,
4
] | [
"HJe3VbL3sS",
"SJl3j-ScjB",
"iclr_2020_r1gzoaNtvr",
"HkeqAfN9sr",
"rkx4Qvul9B",
"BJeUQfre5B",
"SkgQjoTV9S",
"iclr_2020_r1gzoaNtvr",
"iclr_2020_r1gzoaNtvr"
] |
iclr_2020_S1gXiaEYvr | Prototype Recalls for Continual Learning | Continual learning is a critical ability of continually acquiring and transferring knowledge without catastrophically forgetting previously learned knowledge. However, enabling continual learning for AI remains a long-standing challenge. In this work, we propose a novel method, Prototype Recalls, that efficiently embeds and recalls previously learnt knowledge to tackle the catastrophic forgetting issue. In particular, we consider continual learning in classification tasks. For each classification task, our method learns a metric space containing a set of prototypes where embeddings of the samples from the same class cluster around prototypes and class-representative prototypes are separated apart. To alleviate catastrophic forgetting, our method preserves the embedding function from the samples to the previous metric space, through our proposed prototype recalls from previous tasks. Specifically, the recalling process is implemented by replaying a small number of samples from previous tasks and correspondingly matching their embedding to their nearest class-representative prototypes. Compared with recent continual learning methods, our contributions are fourfold: first, our method achieves the best memory retention capability while adapting quickly to new tasks. Second, our method uses metric learning for classification and does not require adding in new neurons given new object classes. Third, our method is more memory efficient since only class-representative prototypes need to be recalled. Fourth, our method suggests a promising solution for few-shot continual learning. Without tampering with the performance on initial tasks, our method learns novel concepts given a few training examples of each class in new tasks. | reject | The reviewers have provided extensive comments; we encourage the authors to take them into account seriously in further iterations of this work.
"r1ebFgw3FB",
"ryxg6XY2FS",
"BkeEGgTkqr"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Paper proposes a method for continual learning. The method is based on the learning of a metric space where classes are represented by prototypes in this space. To prevent forgetting the method proposes to perform prototype recall, aiming to keep prototypes in the same location in embedding space (Fig 1b). The met... | [
3,
1,
3
] | [
4,
3,
4
] | [
"iclr_2020_S1gXiaEYvr",
"iclr_2020_S1gXiaEYvr",
"iclr_2020_S1gXiaEYvr"
] |
iclr_2020_BylNoaVYPS | Variational Autoencoders for Opponent Modeling in Multi-Agent Systems | Multi-agent systems exhibit complex behaviors that emanate from the interactions of multiple agents in a shared environment. In this work, we are interested in controlling one agent in a multi-agent system and in successfully learning to interact with the other agents that have fixed policies. Modeling the behavior of other agents (opponents) is essential in understanding the interactions of the agents in the system. By taking advantage of recent advances in unsupervised learning, we propose modeling opponents using variational autoencoders. Additionally, many existing methods in the literature assume that the opponent models have access to the opponent's observations and actions during both training and execution. To eliminate this assumption, we propose a modification that attempts to identify the underlying opponent model using only local information of our agent, such as its observations, actions, and rewards. The experiments indicate that our opponent modeling methods achieve equal or greater episodic returns in reinforcement learning tasks against another modeling method. | reject | The present work addresses the problem of opponent modeling in multi-agent learning settings and proposes an approach based on variational auto-encoders (VAEs). Reviewers consider the approach natural, and novel empirical results are presented to show that the proposed approach can accurately model opponents in partially observable settings. Several concerns were addressed by the authors during the rebuttal phase. A key remaining concern is the size of the contribution. Reviewers suggest that a deeper conceptual development, e.g., based on empirical insights, is required.
"SJllyhLAYr",
"Bkgq4ns3jS",
"SyeWxk2njB",
"BkxEohEDjB",
"SJlL43NPir",
"SJxB69VDsS",
"BylGZ94PsH",
"BJlAsasptH",
"rkgGaeaAFH"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose a variational autoencoding (VAE) framework for agent/opponent modeling in multi-agent games. Interestingly, as the authors show, it looks like it is possible to compute accurate embeddings of the opponent policies without having access to opponent observations and actions. The paper is well wri... | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
1,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2020_BylNoaVYPS",
"BJlAsasptH",
"SJxB69VDsS",
"BJlAsasptH",
"SJllyhLAYr",
"rkgGaeaAFH",
"iclr_2020_BylNoaVYPS",
"iclr_2020_BylNoaVYPS",
"iclr_2020_BylNoaVYPS"
] |
iclr_2020_Bye4iaEFwr | Improving Dirichlet Prior Network for Out-of-Distribution Example Detection | Determining the source of uncertainties in the predictions of AI systems is important. It allows users to act in an informed manner to improve the safety of such systems when applied to real-world sensitive applications. Predictive uncertainties can originate from uncertainty in model parameters, data uncertainty, or a distributional mismatch between training and test examples. While significant progress has recently been made to improve the predictive uncertainty estimation of deep learning models, most of these approaches conflate distributional uncertainty with either model uncertainty or data uncertainty. In contrast, the Dirichlet Prior Network (DPN) can model distributional uncertainty distinctly by parameterizing a prior Dirichlet over the predictive categorical distributions. However, its complex loss function, which explicitly incorporates the KL divergence between Dirichlet distributions, often makes the error surface
ill-suited to optimize for challenging datasets with multiple classes. In this paper, we present an improved DPN framework by proposing a novel loss function using the standard cross-entropy loss along with a regularization term to control the sharpness of the output Dirichlet distributions from the network. Our proposed loss function aims to improve the training efficiency of the DPN framework for challenging classification tasks with a large number of classes. In our experiments using synthetic and real datasets, we demonstrate that our DPN models can distinguish the distributional uncertainty from other uncertainty types. Our proposed approach significantly improves DPN frameworks and outperforms the existing OOD detectors on the CIFAR-10 and CIFAR-100 datasets while also being able to recognize distributional uncertainty distinctly. | reject | In this work the authors build on the Dirichlet prior network of Malinin & Gales, replacing the loss function and adding a regularization term which improves training in the setting with a significant number of classes. Improving uncertainty for deep learning is a challenging but very important problem. The reviewers of this paper gave two weak rejects (one is of low confidence) and one weak accept. They found the paper well written, easy to follow and well motivated but somewhat incremental and not entirely empirically justified. None of the reviewers were willing to strongly champion the paper for acceptance. Unfortunately, as such, the paper falls below the bar for acceptance. It appears that the authors significantly added to the experiments in the discussion phase and hopefully that will make the paper much stronger for a future submission.
"ryx8EaR5iH",
"BygJX234jB",
"HJeAwYnEoB",
"rJlD9u34ir",
"SJgzps3NoB",
"rkeS5u3TYH",
"SkgcPvqAYB",
"HylRSrBc9B"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We want to express our deep gratitude to all reviewers for their constructive suggestions for our paper. We have addressed all the concerns and suggestions in our revised draft and reply to each reviewer in separate comments. The list of our changes are as follows:\n\n1. \"OOD test datasets are different from ... | [
-1,
-1,
-1,
-1,
-1,
3,
3,
6
] | [
-1,
-1,
-1,
-1,
-1,
3,
1,
4
] | [
"iclr_2020_Bye4iaEFwr",
"rkeS5u3TYH",
"SkgcPvqAYB",
"HylRSrBc9B",
"rkeS5u3TYH",
"iclr_2020_Bye4iaEFwr",
"iclr_2020_Bye4iaEFwr",
"iclr_2020_Bye4iaEFwr"
] |
iclr_2020_HkgBsaVtDB | Unified recurrent network for many feature types | There are time series that are amenable to recurrent neural network (RNN) solutions when treated as sequences, but some series, e.g. asynchronous time series, provide a richer variation of feature types than current RNN cells take into account. In order to address such situations, we introduce a unified RNN that handles five different feature types, each in a different manner. Our RNN framework separates sequential features into two groups dependent on their frequency, which we call sparse and dense features, and which affect cell updates differently. Further, we also incorporate time features at the sequential level that relate to the time between specified events in the sequence and are used to modify the cell's memory state. We also include two types of static (whole sequence level) features, one related to time and one not, which are combined with the encoder output. The experiments show that the proposed modeling framework does increase performance compared to standard cells. | reject | main summary: sparse time LSTM
discussion:
reviewer 4: technical description of the proposed method insufficient,
reviewer 2, 3: same paper sent to ICLR 2019 and rejected
recommendation: rejected, based on all reviewers' comments | train | [
"rJgzT_0HqS",
"H1lDRVew5B",
"r1eA6IK99H"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This submitted manuscript is exactly the paper (bearing no difference) that was submitted to ICLR 2019 and also rejected. \nThis submitted manuscript is exactly the paper (bearing no difference) that was submitted to ICLR 2019 and also rejected. \nThis submitted manuscript is exactly the paper (bearing no differen... | [
1,
1,
1
] | [
3,
5,
4
] | [
"iclr_2020_HkgBsaVtDB",
"iclr_2020_HkgBsaVtDB",
"iclr_2020_HkgBsaVtDB"
] |
iclr_2020_BygIjTNtPr | ODE Analysis of Stochastic Gradient Methods with Optimism and Anchoring for Minimax Problems and GANs | Despite remarkable empirical success, the training dynamics of generative adversarial networks (GAN), which involves solving a minimax game using stochastic gradients, is still poorly understood. In this work, we analyze last-iterate convergence of simultaneous gradient descent (simGD) and its variants under the assumption of convex-concavity, guided by a continuous-time analysis with differential equations. First, we show that simGD, as is, converges with stochastic sub-gradients under strict convexity in the primal variable. Second, we generalize optimistic simGD to accommodate an optimism rate separate from the learning rate and show its convergence with full gradients. Finally, we present anchored simGD, a new method, and show convergence with stochastic subgradients. | reject | Motivated by GANs, the authors study the convergence of stochastic subgradient
descent on convex-concave minimax games.
They introduced an improved "anchored" SGD variant that provably converges
under milder assumptions than the base algorithm.
It is applied to training GANs on MNIST and CIFAR-10, partially showing
improvements over alternative training methods.
A main point of criticism that the reviewers identify is the strength of the
assumptions needed for the analysis.
Furthermore, the experimental results were deemed weak as the reported scores
are far away from the SOTA, and only simple baselines were compared against. | train | [
"BJezYAMhsH",
"BkeJ3AM2iS",
"ryxY5AGhoS",
"S1l_PRfnsr",
"H1e3787jYB",
"BJx2N57Ctr",
"Byx_SfI1qS"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This comment is in response to comments from Reviewers #2 and #3.\n\nIn our analysis, we assume convex-concavity, even though GANs are not convex-concave. However, prior theoretical papers make assumptions that are equally or further unrealistic, as we discuss in the introduction. Common assumptions are:\n - GAN i... | [
-1,
-1,
-1,
-1,
1,
6,
6
] | [
-1,
-1,
-1,
-1,
4,
5,
3
] | [
"iclr_2020_BygIjTNtPr",
"Byx_SfI1qS",
"H1e3787jYB",
"BJx2N57Ctr",
"iclr_2020_BygIjTNtPr",
"iclr_2020_BygIjTNtPr",
"iclr_2020_BygIjTNtPr"
] |
iclr_2020_Syxwsp4KDB | TED: A Pretrained Unsupervised Summarization Model with Theme Modeling and Denoising | Text summarization aims to extract essential information from a piece of text and transform it into a concise version. Existing unsupervised abstractive summarization models use a recurrent neural network framework and ignore abundant unlabeled corpora resources. In order to address these issues, we propose TED, a transformer-based unsupervised summarization system with dataset-agnostic pretraining. We first leverage the lead bias in news articles to pretrain the model on large-scale corpora. Then, we finetune TED on target domains through theme modeling and a denoising autoencoder to enhance the quality of summaries. Notably, TED outperforms all unsupervised abstractive baselines on NYT, CNN/DM and English Gigaword datasets with various document styles. Further analysis shows that the summaries generated by TED are abstractive and contain even higher proportions of novel tokens than those from supervised models. | reject | This paper proposes an abstractive text summarization model that takes advantage of lead bias for pretraining on unlabeled corpora and a combination of reconstruction and theme modeling loss for finetuning. Experiments on NYT, CNN/DM, and Gigaword datasets demonstrate the benefit of the proposed approach.
I think this is an interesting paper and the results are reasonably convincing. My only concern is regarding a parallel submission that contains a significant overlap in terms of contributions, as originally pointed out by R2 (https://openreview.net/forum?id=ryxAY34YwB). All of us had an internal discussion regarding this submission and agree that if the lead bias is considered a contribution of another paper, this paper is not strong enough.
Due to the space constraint and the above concern, along with the issue that the two submissions contain a significant overlap in terms of authors as well, I recommend rejecting this paper. | train | [
"Hke_3SSAYr",
"B1xJ3ZoaFS",
"rkenYRjOsB",
"HyxfsJ2diS",
"Syeo_lh_oS",
"Bkg2Dynusr",
"rJxaA5iecS"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"Paper's Claims\n\nThe paper introduces a new unsupervised abstractive summarization approach called TED, using a Transformer encoder and decoder. Their main contributions are as follows:\n1) Pretraining the encoder and decoder on news articles using the first beginning as the target summary.\n2) Fine-tune on other... | [
8,
8,
-1,
-1,
-1,
-1,
6
] | [
5,
4,
-1,
-1,
-1,
-1,
1
] | [
"iclr_2020_Syxwsp4KDB",
"iclr_2020_Syxwsp4KDB",
"rJxaA5iecS",
"Hke_3SSAYr",
"B1xJ3ZoaFS",
"Hke_3SSAYr",
"iclr_2020_Syxwsp4KDB"
] |
iclr_2020_rJlDoT4twr | Unified Probabilistic Deep Continual Learning through Generative Replay and Open Set Recognition | We introduce a unified probabilistic approach for deep continual learning based on variational Bayesian inference with open set recognition. Our model combines a joint probabilistic encoder with a generative model and a linear classifier that get shared across tasks. The open set recognition bounds the approximate posterior by fitting regions of high density on the basis of correctly classified data points and balances open set detection with recognition errors. Catastrophic forgetting is significantly alleviated through generative replay, where the open set recognition is used to sample from high density areas of the class specific posterior and reject statistical outliers. Our approach naturally allows for forward and backward transfer while maintaining past knowledge without the necessity of storing old data, regularization or inferring task labels. We demonstrate compelling results in the challenging scenario of incrementally expanding the single-head classifier for both class incremental visual and audio classification tasks, as well as incremental learning of datasets across modalities. | reject | This paper presents a unified probabilistic approach for deep continual learning by combining generative and discriminative models into one framework that solves the following problems: catastrophic forgetting, and identifying out-of-distribution and open set examples. The method, termed OCDVAE in the paper, achieves results close to or better than SOTA in different evaluation tasks.
The reviewers had several concerns about the presentation of the paper and some errors in the equations, all of which seem to have been fixed in the latest upload made by the authors. Blind review #3 was delayed as the original reviewer refused to review the paper and this review was then obtained by someone else after the new upload of the paper, so this review looks at the new version of the paper. I would recommend the authors to incorporate suggestions provided by reviewer #3 in the final version of the paper including expanding on the related work section.
However, as of now I recommend to reject the paper. | train | [
"B1x_nF116B",
"ryeIblzQoB",
"HklBT-4PoS",
"B1lxnrAMir",
"rylqrL0MjB",
"HJlCytTfiS",
"B1xEKODCKH",
"S1eM-mFCFr"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Summary: This paper proposes a unified model for continual learning and aims to address the following problems:\nOut-of-train-domain dataset recognition\nCatastrophic forgetting\nThe out-of-domain or open set recognition model is not only used to detect outliers but also for sampling “representative data” of previ... | [
6,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
1,
-1,
-1,
-1,
-1,
-1,
4,
1
] | [
"iclr_2020_rJlDoT4twr",
"B1xEKODCKH",
"iclr_2020_rJlDoT4twr",
"S1eM-mFCFr",
"S1eM-mFCFr",
"iclr_2020_rJlDoT4twr",
"iclr_2020_rJlDoT4twr",
"iclr_2020_rJlDoT4twr"
] |
iclr_2020_HJgdo6VFPH | OmniNet: A unified architecture for multi-modal multi-task learning | Transformer is a popularly used neural network architecture, especially for language understanding. We introduce an extended and unified architecture that can be used for tasks involving a variety of modalities like image, text, videos, etc. We propose a spatio-temporal cache mechanism that enables learning the spatial dimension of the input in addition to the hidden states corresponding to the temporal input sequence. The proposed architecture further enables a single model to support tasks with multiple input modalities as well as asynchronous multi-task learning, thus we refer to it as OmniNet. For example, a single instance of OmniNet can concurrently learn to perform the tasks of part-of-speech tagging, image captioning, visual question answering and video activity recognition. We demonstrate that training these four tasks together results in a model compressed about three times while retaining performance in comparison to training them individually. We also show that using this neural network pre-trained on some modalities assists in learning unseen tasks such as video captioning and video question answering. This illustrates the generalization capacity of the self-attention mechanism on the spatio-temporal cache present in OmniNet. | reject | This paper presents OmniNet, an architecture based on the popular transformer for learning on data from multiple modalities and predicting on multiple tasks. The reviewers found the paper well written, technically sound and empirically thorough. However, overall the scores fell below the bar for acceptance and none of the reviewers felt strongly enough to 'champion' the paper for acceptance. The primary concern cited by the reviewers was a lack of strong baselines, i.e. comparison to other methods for multi-task learning. Unfortunately, as such, the recommendation is to reject.
However, adding a thorough comparison to existing literature empirically and in the related work would make this a much stronger submission to a future conference. | train | [
"HJg9DUL3jr",
"rJlBVdqosH",
"BJlIfhloor",
"rJepOKliiH",
"Bkgmkx5KsB",
"B1x3IXa-sr",
"SkxPBcahKS",
"BkxUcO-2YS",
"HJlJ_jyW9r"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for addressing our responses! \n\nWe agree that the model can be compared with existing multi-task settings. However, our choice of tasks was targeted to verify the applicability of the proposed generic CNP architecture to support a single task with spatio-temporal multi-modal data. Therefore, we individ... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
1,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"rJlBVdqosH",
"B1x3IXa-sr",
"iclr_2020_HJgdo6VFPH",
"SkxPBcahKS",
"BkxUcO-2YS",
"HJlJ_jyW9r",
"iclr_2020_HJgdo6VFPH",
"iclr_2020_HJgdo6VFPH",
"iclr_2020_HJgdo6VFPH"
] |
iclr_2020_SkeuipVKDH | RTC-VAE: HARNESSING THE PECULIARITY OF TOTAL CORRELATION IN LEARNING DISENTANGLED REPRESENTATIONS | In the problem of unsupervised learning of disentangled representations, one of the promising methods is to penalize the total correlation of sampled latent variables. Unfortunately, this well-motivated strategy often fails to achieve disentanglement due to a problematic difference between the sampled latent representation and its corresponding mean representation. We provide a theoretical explanation that low total correlation of the sample distribution cannot guarantee low total correlation of the mean representation. We prove that for the mean representation of arbitrarily high total correlation, there exist distributions of latent variables of a bounded total correlation. However, we still believe that total correlation could be a key to the disentanglement of unsupervised representation learning, and we propose a remedy, RTC-VAE, which rectifies the total correlation penalty. Experiments show that our model has a more reasonable distribution of the mean representation compared with baseline models, e.g., β-TCVAE and FactorVAE. | reject | This paper highlights the problem of penalizing the total correlation of sampled latent variables for unsupervised learning of disentangled representations. The authors prove a theorem on how sample representations with bounded total correlation may have arbitrarily large total correlation when computed with the underlying mean. As a fix, the authors propose the RTC-VAE method, which penalizes the total covariance of sampled latent variables.
R2 appreciated the simplicity of the idea, making it easy to understand and implement, but raises serious concerns on empirical evaluation of the method. Specifically, very limited datasets (initially dsprites and 3d shapes) and with no evaluation of disentanglement performance and no comparison against other disentangling methods like DIP-VAE-1. While the authors added another dataset (3d face) in their revised versions, the concerns about disentanglement performance evaluation and its comparison against baselines remained as before, and R2 was not convinced to raise the initial score.
Similarly, while R1 and R3 appreciate author's response, they believe the response was not convincing enough for them, and maintained their initial ratings.
Overall, the submission has room for improvement toward a clear evaluation of the proposed method against related baselines. | train | [
"ryxVVIjaFr",
"HJl-Rvqnsr",
"SkgJQwc3jB",
"HJgM-kc3sS",
"Skxbij0pYB",
"HyePr6MAKB"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper considers the extensions of variational autoencoders (VAEs), which take into account the total correlation of sampled distribution of latent variables. Proving a theorem that a family of distributions of sample representations with a bounded total correlation can have a mean representation of arbitraril... | [
3,
-1,
-1,
-1,
1,
3
] | [
3,
-1,
-1,
-1,
4,
4
] | [
"iclr_2020_SkeuipVKDH",
"ryxVVIjaFr",
"Skxbij0pYB",
"HyePr6MAKB",
"iclr_2020_SkeuipVKDH",
"iclr_2020_SkeuipVKDH"
] |
iclr_2020_HygtiTEYvS | Self-Supervised Policy Adaptation | We consider the problem of adapting an existing policy when the environment representation changes. Upon a change of the encoding of the observations the agent can no longer make use of its policy as it cannot correctly interpret the new observations. This paper proposes Greedy State Representation Learning (GSRL) to transfer the original policy by translating the environment representation back into its original encoding. To achieve this GSRL samples observations from both the environment and a dynamics model trained from prior experience. This generates pairs of state encodings, i.e., a new representation from the environment and a (biased) old representation from the forward model, that allow us to bootstrap a neural network model for state translation. Although early translations are unsatisfactory (as expected), the agent eventually learns a valid translation as it minimizes the error between expected and observed environment dynamics. Our experiments show the efficiency of our approach and that it translates the policy in considerably less steps than it would take to retrain the policy. | reject | The submission proposes to improve generalization in RL environments, by addressing the scenario where the observations change even though the underlying environment dynamics do not change. The authors address this by learning an adaptation function which maps back to the original representation. The approach is empirically evaluated on the Mountain Car domain.
The reviewers were unanimously unimpressed with the experiments, the baselines, and the results. While they agree that the problem is well-motivated, they requested additional evidence that the method works as described and that a simpler approach such as fine-tuning would not be sufficient.
The recommendation is to reject the paper at this time. | train | [
"HJgTpHmniH",
"BylWnJiXtH",
"SygsfAGpKB",
"Byx9n-9WqS"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We would like to thank the Reviewers for their feedback. Unfortunately, we were not able to perform additional experiments and address all comments to the extent and depth we would like to within the rebuttal period, but we will use the feedback as a guideline for improving on our paper and results and resubmit in... | [
-1,
1,
1,
1
] | [
-1,
4,
4,
5
] | [
"iclr_2020_HygtiTEYvS",
"iclr_2020_HygtiTEYvS",
"iclr_2020_HygtiTEYvS",
"iclr_2020_HygtiTEYvS"
] |
iclr_2020_Syetja4KPH | Deep Randomized Least Squares Value Iteration | Exploration while learning representations is one of the main challenges Deep
Reinforcement Learning (DRL) faces today. As the learned representation is dependent on the observed data, the exploration strategy has a crucial role. The popular DQN algorithm has significantly improved the capabilities of Reinforcement
Learning (RL) algorithms to learn state representations from raw data, yet, it uses
a naive exploration strategy which is statistically inefficient. The Randomized
Least Squares Value Iteration (RLSVI) algorithm (Osband et al., 2016), on the
other hand, explores and generalizes efficiently via linearly parameterized value
functions. However, it is based on hand-designed state representation that requires
prior engineering work for every environment. In this paper, we propose a Deep
Learning adaptation for RLSVI. Rather than using a hand-designed state representation, we use a state representation that is learned directly from the data by a
DQN agent. As the representation is being optimized during the learning process,
a key component for the suggested method is a likelihood matching mechanism,
which adapts to the changing representations. We demonstrate the importance of
the various properties of our algorithm on a toy problem and show that our method
outperforms DQN in five Atari benchmarks, reaching competitive results with the
Rainbow algorithm. | reject | This paper combines DQN and Randomized value functions for exploration.
All the reviewers agreed the paper is not yet ready for publication. The experiments lack appropriate baselines, and thus it is unclear how this new approach improves exploration in Deep RL. The reviewers also found some of the algorithmic design decisions unintuitive and unexplained. The authors' main response was that the objective was to improve upon and compare against vanilla DQN. This could be a valid goal, but it requires clear motivation (perhaps a focus simply on algorithms that are commonly used in applications, or something similar). Even then, comparisons with other methods would be of interest to quantify how much the base algorithm is improved, and to justify empirically all the design decisions that went into building such an improvement (performance vs. complexity of implementation, etc.).
The reviewers gave nice suggestions for improvements. This is a good area of study: keep going! | train | [
"Skx30KAisB",
"SkeoYT8pKH",
"r1ldF5vsor",
"SJxHaeWjjr",
"S1eZibTYir",
"S1xaLgTtjH",
"SyxGFyTYiS",
"H1gIAox8tH",
"ryeGcQa3FB"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"In contrast to other algorithms like BDQN and BootDQN, we use the same network architecture, loss function and hyper-parameters of the original DQN. The main deviation from DQN is, therefore, the exploration strategy. \n\nThank you!",
"The paper proposes to extend the popular linear-control algorithm, RLSVI, to ... | [
-1,
1,
-1,
-1,
-1,
-1,
-1,
3,
1
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
5,
4
] | [
"SJxHaeWjjr",
"iclr_2020_Syetja4KPH",
"SyxGFyTYiS",
"S1eZibTYir",
"H1gIAox8tH",
"ryeGcQa3FB",
"SkeoYT8pKH",
"iclr_2020_Syetja4KPH",
"iclr_2020_Syetja4KPH"
] |
iclr_2020_rJlqoTEtDB | PowerSGD: Powered Stochastic Gradient Descent Methods for Accelerated Non-Convex Optimization | In this paper, we propose a novel technique for improving the stochastic gradient descent (SGD) method to train deep networks, which we term \emph{PowerSGD}. The proposed PowerSGD method simply raises the stochastic gradient to a certain power γ∈[0,1] during iterations and introduces only one additional parameter, namely, the power exponent γ (when γ=1, PowerSGD reduces to SGD). We further propose PowerSGD with momentum, which we term \emph{PowerSGDM}, and provide convergence rate analysis on both PowerSGD and PowerSGDM methods. Experiments are conducted on popular deep learning models and benchmark datasets. Empirical results show that the proposed PowerSGD and PowerSGDM obtain faster initial training speed than adaptive gradient methods, comparable generalization ability with SGD, and improved robustness to hyper-parameter selection and vanishing gradients. PowerSGD is essentially a gradient modifier via a nonlinear transformation. As such, it is orthogonal and complementary to other techniques for accelerating gradient-based optimization. | reject | After reading the author's rebuttal, the reviewers still think that this is an incremental work, and the theory and experiments .are inconsistent. The authors are encouraged to consider the the reivewer's comments to improve the paper. | test | [
"BkxI2ZwqYS",
"B1gPMY5soS",
"BkeVF5UOjr",
"rke7afxSjr",
"S1eCuflBiS",
"BJgxTZgSir",
"SkgOb3BRKH",
"S1gTSur19S",
"HylGODc2_r",
"H1gpt95wdH"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"This paper investigates an SGD variant (PowerSGD) where the stochastic gradient is raised to a power of $\\gamma \\in [0,1]$. The authors introduce PowerSGD and PowerSGD with momentum (PowerSGDM). The theoretical proof of the convergence is given and experimental results show that the proposed algorithm converge... | [
3,
-1,
-1,
-1,
-1,
-1,
3,
8,
-1,
-1
] | [
3,
-1,
-1,
-1,
-1,
-1,
3,
3,
-1,
-1
] | [
"iclr_2020_rJlqoTEtDB",
"BkeVF5UOjr",
"iclr_2020_rJlqoTEtDB",
"BkxI2ZwqYS",
"SkgOb3BRKH",
"S1gTSur19S",
"iclr_2020_rJlqoTEtDB",
"iclr_2020_rJlqoTEtDB",
"H1gpt95wdH",
"iclr_2020_rJlqoTEtDB"
] |
iclr_2020_S1ejj64YvS | Good Semi-supervised VAE Requires Tighter Evidence Lower Bound | Semi-supervised learning approaches based on generative models have now encountered 3 challenges: (1) The two-stage training strategy is not robust. (2) Good semi-supervised learning results and good generative performance can not be obtained at the same time. (3) Even at the expense of sacrificing generative performance, the semi-supervised classification results are still not satisfactory. To address these problems, we propose One-stage Semi-suPervised Optimal Transport VAE (OSPOT-VAE), a one-stage deep generative model that theoretically unifies the generation and classification loss in one ELBO framework and achieves a tighter ELBO by applying the optimal transport scheme to the distribution of latent variables. We show that with tighter ELBO, our OSPOT-VAE surpasses the best semi-supervised generative models by a large margin across many benchmark datasets. For example, we reduce the error rate from 14.41% to 6.11% on Cifar-10 with 4k labels and achieve state-of-the-art performance with 25.30% on Cifar-100 with 10k labels. We also demonstrate that good generative models and semi-supervised results can be achieved simultaneously by OSPOT-VAE. | reject | The paper proposes to combine a VAE model with the Optimal Transport to approximate some components of the model. The authors evaluate their approach on semi-supervised problems and claim to obtain very competitive results compared to literature. Unfortunately, the paper would benefit substantially from revisions to make it easier to follow. For this reason, the paper is not ready for publication in this venue at this time. | train | [
"BkxNP6tnKS",
"H1xZ_M3kjr",
"H1lkV3jkiS",
"r1lZjgjyjS",
"HkeHLmrhFH",
"HJgqv01pYH",
"ByeN5PlJiS",
"HJedAqxiYB"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author"
] | [
"The paper proposes to combine a VAE model with the Optimal Transport to approximate some components of the model. The authors evaluate their approach on semi-supervised problems and claim to obtain very competitive results compared to literature. Unfortunately, the paper is very unclear and hard to follow. The aut... | [
3,
-1,
-1,
-1,
3,
3,
-1,
-1
] | [
5,
-1,
-1,
-1,
4,
4,
-1,
-1
] | [
"iclr_2020_S1ejj64YvS",
"HJgqv01pYH",
"HkeHLmrhFH",
"BkxNP6tnKS",
"iclr_2020_S1ejj64YvS",
"iclr_2020_S1ejj64YvS",
"iclr_2020_S1ejj64YvS",
"iclr_2020_S1ejj64YvS"
] |
iclr_2020_B1gjs6EtDr | Efficient Content-Based Sparse Attention with Routing Transformers | Self-attention has recently been adopted for a wide range of sequence modeling
problems. Despite its effectiveness, self-attention suffers quadratic compute and
memory requirements with respect to sequence length. Successful approaches to
reduce this complexity focused on attention to local sliding windows or a small
set of locations independent of content. Our work proposes to learn dynamic
sparse attention patterns that avoid allocating computation and memory to attend
to content unrelated to the query of interest. This work builds upon two lines of
research: it combines the modeling flexibility of prior work on content-based sparse
attention with the efficiency gains from approaches based on local, temporal sparse
attention. Our model, the Routing Transformer, endows self-attention with a sparse
routing module based on online k-means while reducing the overall complexity of
attention to O(n^{1.5}d) from O(n^2d) for sequence length n and hidden dimension
d. We show that our model outperforms comparable sparse attention models on
language modeling on Wikitext-103 (15.8 vs 18.3 perplexity) as well as on
image generation on ImageNet-64 (3.43 vs 3.44 bits/dim) while using fewer self-attention layers.
Code will be open-sourced on acceptance. | reject | This paper proposes a new model, the Routing Transformer, which endows self-attention with a sparse routing module based on online k-means while reducing the overall complexity of attention from O(n^2) to O(n^1.5). The model attained very good performance on WikiText-103 (in terms of perplexity) and similar performance to baselines (published numbers) in two other tasks.
Even though the problem addressed (reducing the quadratic complexity of self-attention) is extremely relevant and the proposed approach is very intuitive and interesting, the reviewers raised some concerns, notably:
- How efficient is the proposed approach in practice? Even though the theoretical complexity is reduced, more modules were introduced (e.g., forced clustering, mix of local heads and clustering heads, sorting, etc.)
- Why is W_R fixed random? Since W_R is orthogonal, it's just a random (generalized) "rotation" (performed on the word embedding space). Does this really provide sensible "routing"?
- The experimental section can be improved to better understand the impact of the proposed method. Adding ablations, as suggested by the reviewers, would be an important part of this work.
- Not clear why the work needs to be motivated through NMF, since the proposed method uses k-means.
Unfortunately, several points raised by the reviewers (except R2) were not addressed in the author rebuttal, and therefore it is not clear whether some of the raised issues are fixable by camera-ready time, which prevents me from recommending this paper for acceptance.
However, I *do* think the proposed approach is very interesting and has great potential, once these points are clarified. The gains obtained in WikiText-103 are promising. Therefore, I strongly encourage the authors to resubmit this paper taking into account the suggestions made by the reviewers. | train | [
"r1gjcoEa_B",
"HJxhXJ3hjH",
"Syx4WJh3jr",
"H1xBV-VpuB",
"rklzdDukcB"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"[EDIT: After reading the other reviews and discussion among reviewers, I have decided to downgrade my score. In particular, in addition to points raised by reviewer 2 there are concerns with regard to lack of ablation studies, and major clarity issues.]\n\nThis paper proposes content-based sparse attention to redu... | [
3,
-1,
-1,
3,
6
] | [
4,
-1,
-1,
5,
3
] | [
"iclr_2020_B1gjs6EtDr",
"H1xBV-VpuB",
"H1xBV-VpuB",
"iclr_2020_B1gjs6EtDr",
"iclr_2020_B1gjs6EtDr"
] |
iclr_2020_H1l3s6NtvH | A Bayes-Optimal View on Adversarial Examples | Adversarial attacks on CNN classifiers can make an imperceptible change to an input image and alter the classification result. The source of these failures is still poorly understood, and many explanations invoke the "unreasonably linear extrapolation" used by CNNs along with the geometry of high dimensions.
In this paper we show that similar attacks can be used against the Bayes-Optimal classifier for certain class distributions, while for others the optimal classifier is robust to such attacks. We present analytical results showing conditions on the data distribution under which all points can be made arbitrarily close to the optimal decision boundary and show that this can happen even when the classes are easy to separate, when the ideal classifier has a smooth decision surface and when the data lies in low dimensions. We introduce new datasets of realistic images of faces and digits where the Bayes-Optimal classifier can be calculated efficiently and show that for some of these datasets the optimal classifier is robust and for others it is vulnerable to adversarial examples. In systematic experiments with many such datasets, we find that standard CNN training consistently finds a vulnerable classifier even when the optimal classifier is robust while large-margin methods often find a robust classifier with the exact same training data. Our results suggest that adversarial vulnerability is not an unavoidable consequence of machine learning in high dimensions, and may often be a result of suboptimal training methods used in current practice. | reject | The paper studies how adversarial robustness and Bayes optimality relate in a simple gaussian mixture setting. The paper received two recommendations for rejection and one weak accept. One of the central complaints was whether the study had any bearing on "real world" adversarial examples. I think this is a fair concern, given how limited the model appears on the surface, although perhaps the model is a good model of any local "piece" of a decision boundary in a real problem. That said, I do not agree with the strong rejection (1) in most places. The weak reject asked for some experiments. 
The revision produced these experiments, but I'm not sure how convincing these are since only one robust training method was used, and it's not clear that it's the best one could do among SOTA methods. For whatever reason, the reviewers did not update their scores. I am not certain that they reviewed the revision, despite my prodding. | val | [
"BylyB7wnir",
"S1x7ZrQnsB",
"B1x6oNXhjB",
"rygw74Q2iS",
"SkehB7mnsH",
"HJeV9gIkqr",
"BkeYHjwqsB",
"rJxFL5b0YB",
"rJlF6V3Osr",
"rklS7BYOjB",
"HJx7phQwjH",
"HkgDaPzvsr",
"HklATUMvir",
"rke6KrMPjS",
"SyxjYpuXqH"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"Is there an analog of Figure 6 for the asymmetric dataset? It would be interesting to see what the RBF SVM does on this dataset. (I understand it's the last day of rebuttals though so this might be a bit late of a question to ask).\n\n",
"We thank the reviewers for their comments and interaction during the rebut... | [
-1,
-1,
-1,
-1,
-1,
1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
-1,
-1,
-1,
-1,
-1,
5,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
"SkehB7mnsH",
"iclr_2020_H1l3s6NtvH",
"BkeYHjwqsB",
"HklATUMvir",
"rJlF6V3Osr",
"iclr_2020_H1l3s6NtvH",
"HkgDaPzvsr",
"iclr_2020_H1l3s6NtvH",
"rklS7BYOjB",
"HJx7phQwjH",
"rke6KrMPjS",
"HJeV9gIkqr",
"SyxjYpuXqH",
"rJxFL5b0YB",
"iclr_2020_H1l3s6NtvH"
] |
iclr_2020_HJghoa4YDB | Temporal-difference learning for nonlinear value function approximation in the lazy training regime | We discuss the approximation of the value function for infinite-horizon discounted Markov Reward Processes (MRP) with nonlinear functions trained with the Temporal-Difference (TD) learning algorithm. We consider this problem under a certain scaling of the approximating function, leading to a regime called lazy training. In this regime the parameters of the model vary only slightly during the learning process, a feature that has recently been observed in the training of neural networks, where the scaling we study arises naturally, implicit in the initialization of their parameters. Both in the under- and over-parametrized frameworks, we prove exponential convergence to local, respectively global minimizers of the above algorithm in the lazy training regime. We then give examples of such convergence results in the case of models that diverge if trained with non-lazy TD learning, and in the case of neural networks. | reject | This paper provides convergence results for Non-linear TD under lazy training.
This paper tackles the important and challenging task of improving our theoretical understanding of deep RL. We have lots of empirical evidence Q-learning and TD can work with NNs, and even empirical work that attempts to characterize when we should expect it to fail. Such empirical work is always limited and we need theory to supplement our empirical knowledge. This paper attempts to extend recent theoretical work on the convergence of supervised training of NN to the policy evaluation setting with TD.
The main issue revolves around the presentation of the work. The reviewers found the paper difficult to read (ok for theory work). But, the paper did not clearly discuss and characterize the significance of the work: how limited is the lazy training regime, when would it be useful? Now that we have this result, do we have any more insights for algorithm design (improving nonlinear TD), or comments about when we expect NN policy evaluation to work?
This all reads like: the paper needs a better intro and discussion of the implications and limitations of the results, and indeed this is what the reviewers were looking for. Unfortunately the author response and paper submitted were lacking in this respect. Even the strongest advocates of the work found it severely lacking explanation and discussion. They felt that the paper could be accepted, but only after extensive revision.
The direction of the work is important. The work is novel, and not a small undertaking. However, to be published the authors should spend more time explaining the framework, the results, and the limitations to the reader.
| test | [
"BygGVkvh3B",
"H1gBBwcGKr",
"ryergT52oS",
"rkxTlCQjir",
"H1lHwq4cor",
"Sygne9E9jB",
"SylcnOEcsH",
"SyltdOWAYB",
"HkgtJlIJ5B"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper discusses the policy evaluation problem using temporal-difference (TD) learning with nonlinear function approximation. The authors show that in the “lazy training” regime both over- and under-parametrized approximators converge exponentially fast, the former to the global minimum of the projected TD erro... | [
6,
6,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
3,
3,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2020_HJghoa4YDB",
"iclr_2020_HJghoa4YDB",
"rkxTlCQjir",
"H1lHwq4cor",
"H1gBBwcGKr",
"HkgtJlIJ5B",
"SyltdOWAYB",
"iclr_2020_HJghoa4YDB",
"iclr_2020_HJghoa4YDB"
] |
iclr_2020_HkghoaNYPB | AlgoNet: C∞ Smooth Algorithmic Neural Networks | Artificial neural networks have revolutionized many areas of computer science in recent years, providing solutions to a number of previously unsolved problems.
On the other hand, for many problems, classic algorithms exist, which typically exceed the accuracy and stability of neural networks.
To combine these two concepts, we present a new kind of neural networks—algorithmic neural networks (AlgoNets).
These networks integrate smooth versions of classic algorithms into the topology of neural networks.
A forward AlgoNet includes algorithmic layers into existing architectures to enhance performance and explainability while a backward AlgoNet enables solving inverse problems without or with only weak supervision.
In addition, we present the algonet package, a PyTorch-based library that includes, inter alia, a smoothly evaluated programming language, a smooth 3D mesh renderer, and smooth sorting algorithms.
"rylUQbwIiB",
"BylqVJ3pKr",
"rylxfnmRFB"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper describes \"AlgoNets\", which are differentiable implementations of classical algorithms. Several AlgoNets are described, including multiplication algorithm implemented in the WHILE programming language, smooth sorting, a smooth while loop, smooth finite differences and a softmedian.\n\nThe paper additi... | [
1,
3,
1
] | [
5,
4,
3
] | [
"iclr_2020_HkghoaNYPB",
"iclr_2020_HkghoaNYPB",
"iclr_2020_HkghoaNYPB"
] |
iclr_2020_Skgaia4tDH | Localized Generations with Deep Neural Networks for Multi-Scale Structured Datasets | Extracting the hidden structure of the external environment is an essential component of intelligent agents and human learning. The real-world datasets that we are interested in are often characterized by locality: the structural relationship between the data points changes depending on location in observation space. The local learning approach extracts semantic representations for these datasets by training the embedding model from scratch for each local neighborhood. However, this approach is limited to use with simple models, since complex models, including deep neural networks, require a massive amount of data and extended training time. In this study, we overcome this trade-off based on the insight that real-world datasets often share some structural similarity between neighborhoods. We propose to utilize the embedding model for the other local structures as a weak form of supervision. Our proposed model, the Local VAE, generalizes the Variational Autoencoder to have different model parameters for each local subset and trains these local parameters by gradient-based meta-learning. Our experimental results showed that the Local VAE succeeded in learning semantic representations for datasets with local structure, including the 3D Shapes Dataset, and generated high-quality images. | reject | The paper presents a structured VAE, where the model parameters depend on a local structure (such as distance in feature or local space), and it uses the meta-learning framework to adjust the dependency of the model parameters to the local neighborhood.
The idea is natural, as pointed out by Rev#1. It incurs an extra learning cost, as noted by Rev#1 and #2, who asked for details about the extra cost. The authors' reply is (last paragraph of the first reply to Rev#1): we did not comment (...) because in essence, using neighborhoods in a naive way is not affordable.
The area chair would like to know the actual computational time of Local VAE compared to that of the baselines.
More details (for instance visualizations) about the results on Cars3D and NORB would also be needed to better appreciate the impact of the locality structure. The fact that the optimal value (w.r.t. disentanglement) is rather low ($10^{-2}$) would need to be discussed, and assessed w.r.t. the standard deviation.
In summary, the paper presents a good idea. More details about its impacts on the VAE quality, and its computation costs, are needed to fully appreciate its merits. | train | [
"BkgAaYiqjB",
"ByeLZ2__sH",
"H1x-OkFIor",
"rJeYNJtLsB",
"r1xisR_LoS",
"SJl0wftOtB",
"Bygxx5IpFH",
"HklJK-wRKB"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"To address R1's concern, we conducted additional experiments on neighborhood construction. In appendix D, we compared the performance of the l2 distance on input space and the synthetic neighborhood by sampling. We only conducted the experiment on a specific hyperparameter due to the time limit.",
"Dear reviewer... | [
-1,
-1,
-1,
-1,
-1,
3,
3,
8
] | [
-1,
-1,
-1,
-1,
-1,
4,
3,
1
] | [
"ByeLZ2__sH",
"iclr_2020_Skgaia4tDH",
"SJl0wftOtB",
"HklJK-wRKB",
"Bygxx5IpFH",
"iclr_2020_Skgaia4tDH",
"iclr_2020_Skgaia4tDH",
"iclr_2020_Skgaia4tDH"
] |
iclr_2020_Syg6jTNtDH | Learning Numeral Embedding | Word embedding is an essential building block for deep learning methods for natural language processing. Although word embedding has been extensively studied over the years, the problem of how to effectively embed numerals, a special subset of words, is still underexplored. Existing word embedding methods do not learn numeral embeddings well because there are an infinite number of numerals and their individual appearances in training corpora are highly scarce.
In this paper, we propose two novel numeral embedding methods that can handle the out-of-vocabulary (OOV) problem for numerals. We first induce a finite set of prototype numerals using either a self-organizing map or a Gaussian mixture model. We then represent the embedding of a numeral as a weighted average of the prototype number embeddings. Numeral embeddings represented in this manner can be plugged into existing word embedding learning approaches such as skip-gram for training.
We evaluated our methods and showed their effectiveness on four intrinsic and extrinsic tasks: word similarity, embedding numeracy, numeral prediction, and sequence labeling. | reject | This paper proposes better methods to handle numerals within word embeddings.
Overall, my impression is that this paper is solid, but not super-exciting. The scope is a little bit limited (to numbers only), and it is by no means the first paper to handle number understanding within word embeddings. A more thorough theoretical and empirical comparison to other methods, e.g. Spithourakis & Riedel (2018) and Chen et al. (2019), could bring the paper a long way.
I think this paper is somewhat borderline, but I am recommending not to accept because I feel that the paper could be greatly improved by making the above-mentioned comparisons more complete, and thus it could find a home as a stronger paper at a new venue.
"B1lamhMaYr",
"SyxgFyX2oH",
"Byxsm63YjB",
"HkeI9QB5oS",
"HkewD9d5jB",
"Hke6V2XEFB",
"B1lHzT5atH"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"I have read the author response. Thank you for responding to my concerns.\n\nOriginal review:\nThis paper presents a word embedding approach for numbers. The method is based on finding prototype numbers, and then representing numbers as a weighted average of the prototype embeddings, where the weights are based ... | [
3,
-1,
-1,
-1,
-1,
6,
6
] | [
3,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2020_Syg6jTNtDH",
"iclr_2020_Syg6jTNtDH",
"B1lHzT5atH",
"B1lamhMaYr",
"Hke6V2XEFB",
"iclr_2020_Syg6jTNtDH",
"iclr_2020_Syg6jTNtDH"
] |
iclr_2020_Hke0oa4KwS | Empirical confidence estimates for classification by deep neural networks | How well can we estimate the probability that the classification predicted by a deep neural network is correct (or in the Top 5)? It is well-known that the softmax values of the network are not estimates of the probabilities of class labels. However, there is a misconception that these values are not informative. We define the notion of implied loss and prove that if an uncertainty measure is an implied loss, then low uncertainty means high probability of correct (or Top-k) classification on the test set. We demonstrate empirically that these values can be used to measure the confidence that the classification is correct. Our method is simple to use on existing networks: we propose confidence measures for Top-k which can be evaluated by binning values on the test set. | reject | The paper proposes to model uncertainty using expected Bayes factors, and empirically shows that the proposed measure correlates well with the probability that the classification is correct.
All the reviewers agreed that the idea of using Bayes factors for uncertainty estimation is an interesting approach. However, the reviewers also found the presentation a bit hard to follow. While the rebuttal addressed some of these concerns, there were still some remaining concerns (see R3's comments).
I think this is a really promising direction of research and I appreciate the authors' efforts to revise the draft during the rebuttal (which led to some reviewers increasing the score). This is a borderline paper right now but I feel that the paper has the potential to turn into a great paper with another round of revision. I encourage the authors to revise the draft and resubmit to a different venue. | val | [
"B1egi4UoYH",
"Hkgur3Lg9B",
"rkewbIiu9B",
"B1ggfQUviS",
"H1lJSf8PiB",
"B1lsKbLPjr",
"r1eEulUvsH"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"Summary: This paper proposes an uncertainty measure called an implied loss. The authors suggest that it is a simple way to quantify the uncertainty of the model. It is suggested that \"Low implied loss (uncertainty) means a high probability of correct classification on the test set.\". They suggest that the analys... | [
1,
6,
6,
-1,
-1,
-1,
-1
] | [
4,
3,
5,
-1,
-1,
-1,
-1
] | [
"iclr_2020_Hke0oa4KwS",
"iclr_2020_Hke0oa4KwS",
"iclr_2020_Hke0oa4KwS",
"B1egi4UoYH",
"Hkgur3Lg9B",
"rkewbIiu9B",
"iclr_2020_Hke0oa4KwS"
] |
iclr_2020_Hke12T4KPS | Using Hindsight to Anchor Past Knowledge in Continual Learning | In continual learning, the learner faces a stream of data whose distribution changes over time. Modern neural networks are known to suffer under this setting, as they quickly forget previously acquired knowledge. To address such catastrophic forgetting, state-of-the-art continual learning methods implement different types of experience replay, re-learning on past data stored in a small buffer known as episodic memory. In this work, we complement experience replay with a meta-learning technique that we call anchoring: the learner updates its knowledge on the current task, while keeping predictions on some anchor points of past tasks intact. These anchor points are learned using gradient-based optimization so as to maximize forgetting of the current task, in hindsight, when the learner is fine-tuned on the episodic memory of past tasks. Experiments on several supervised learning benchmarks for continual learning demonstrate that our approach improves the state of the art in terms of both accuracy and forgetting metrics and for various sizes of episodic memories. | reject | This paper proposes a continual learning method that uses anchor points for experience replay. Anchor points are learned with gradient-based optimization to maximize forgetting on the current task. Experiments on MNIST, CIFAR, and miniImageNet show the benefit of the proposed approach.
As noted by other reviewers, there are some grammatical issues with the paper.
The paper is missing some important details in the experiments. It is unclear to me how the five random seeds relate to the ordering of the datasets (tasks) in the experiments. Do the five random seeds correspond to five different dataset orderings? I think it would also be very interesting to see the anchor points that are chosen in practice. This issue is brought up by R4, and the authors responded that anchor points do not correspond to classes. Since the main idea of this paper is based on anchor points, it would be nice to analyze them further to get a better understanding of what they represent.
Finally, the authors only evaluate their method on image classification. While I believe the technique can be applied in other domains (e.g., reinforcement learning, natural language processing) with some modifications, without providing concrete empirical evidence in the paper, the authors need to clearly state that their proposed method is only evaluated on image classification and not sell it as a general method (yet).
The authors also miss citations to some prior work on memory-based parameter adaptation and its variants.
Regardless of all of the above issues, this is still a borderline paper. However, due to space constraints, I recommend rejecting this paper for ICLR.
"rkxUoOuEiH",
"Bygq0IuNsH",
"r1eAvIuNiS",
"BJe7vdO4ir",
"r1gTaPdVsH",
"BJxT9IAdYH",
"HkgF_woAYB",
"rJeuVn7q9S",
"SklsXL5_KB",
"BJlzYKuPKr"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer"
] | [
"We thank all the reviewers for their positive reviews. We have uploaded a revision of the paper which accommodates all the reviewers' suggestions.\n\nBelow is the summary of the changes:\n\n* Added the derivation of the gradient form of the anchoring objective. \n* Updated Sec. 3 to explain the connection with met... | [
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
-1,
-1
] | [
"iclr_2020_Hke12T4KPS",
"r1eAvIuNiS",
"rJeuVn7q9S",
"BJxT9IAdYH",
"HkgF_woAYB",
"iclr_2020_Hke12T4KPS",
"iclr_2020_Hke12T4KPS",
"iclr_2020_Hke12T4KPS",
"BJlzYKuPKr",
"iclr_2020_Hke12T4KPS"
] |
iclr_2020_Bke13pVKPS | Improved Training Speed, Accuracy, and Data Utilization via Loss Function Optimization | As the complexity of neural network models has grown, it has become increasingly important to optimize their design automatically through metalearning. Methods for discovering hyperparameters, topologies, and learning rate schedules have led to significant increases in performance. This paper shows that loss functions can be optimized with metalearning as well, and result in similar improvements. The method, Genetic Loss-function Optimization (GLO), discovers loss functions de novo, and optimizes them for a target task. Leveraging techniques from genetic programming, GLO builds loss functions hierarchically from a set of operators and leaf nodes. These functions are repeatedly recombined and mutated to find an optimal structure, and then a covariance-matrix adaptation evolutionary strategy (CMA-ES) is used to find optimal coefficients. Networks trained with GLO loss functions are found to outperform the standard cross-entropy loss on standard image classification tasks. Training with these new loss functions requires fewer steps, results in lower test error, and allows for smaller datasets to be used. Loss function optimization thus provides a new dimension of metalearning, and constitutes an important step towards AutoML. | reject | This paper proposes a GA-based method for optimizing the loss function a model is trained on to produce better models (in terms of final performance). The general consensus from the reviewers is that the paper, while interesting, dedicates too much of its content to analyzing one such discovered loss (the Baikal loss), and that the experimental setting (MNIST and CIFAR-10) is too basic to be conclusive. It seems this paper can be so significantly improved with some further and larger-scale experiments that it would be wrong to prematurely recommend acceptance. 
My recommendation is that the authors consider the reviewer feedback, run the suggested further experiments, and are hopefully then in a position to submit a significantly stronger version of this paper to a future conference. | train | [
"rkgPrHM4tH",
"HJenQ5wnYS",
"SJg3SzKCFB"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors present a framework to perform meta-learning on the loss used for\ntraining. They introduce the Baikal loss, obtained using the MNIST dataset, and\nBaikalCMA where the coefficients have been tuned. The evaluation of these loss\nfunctions is performed on the MNIST and CIFAR-10, and according to the resu... | [
3,
3,
1
] | [
4,
3,
5
] | [
"iclr_2020_Bke13pVKPS",
"iclr_2020_Bke13pVKPS",
"iclr_2020_Bke13pVKPS"
] |
iclr_2020_Sklyn6EYvH | Disentangled Representation Learning with Sequential Residual Variational Autoencoder | Recent advancements in unsupervised disentangled representation learning focus on extending the variational autoencoder (VAE) with an augmented objective function to balance the trade-off between disentanglement and reconstruction. We propose Sequential Residual Variational Autoencoder (SR-VAE) that defines a "Residual learning" mechanism as the training regime instead of the augmented objective function. Our proposed solution deploys two important ideas in a single framework: (1) learning from the residual between the input data and the accumulated reconstruction of sequentially added latent variables; (2) decomposing the reconstruction into decoder output and a residual term. This formulation encourages the disentanglement in the latent space by inducing explicit dependency structure, and reduces the bottleneck of VAE by adding the residual term to facilitate reconstruction. More importantly, SR-VAE eliminates the hyperparameter tuning, a crucial step for the prior state-of-the-art performance using the objective function augmentation approach. We demonstrate both qualitatively and quantitatively that SR-VAE improves the state-of-the-art unsupervised disentangled representation learning on a variety of complex datasets. | reject | This paper defines a "Residual learning" mechanism as the training regime for the variational autoencoder. The method gradually activates individual latent variables to reconstruct residuals.
There are two main concerns from the reviewers. First, residual learning is a common trick now, hence the authors should provide insights into why residual learning works for VAEs. The other problem is computational complexity: the reviewers argue that it does not seem fair to compare against a brute-force parameter search. The authors' rebuttal partially addresses these problems but does not meet the reviewers' standards.
Based on the reviewers’ comments, I choose to reject the paper.
| val | [
"HJgdDxAatS",
"rJxUWI5xsB",
"HJggpH9xiH",
"Hkx6kS9gjr",
"rylCNq2htS",
"SyxsGxuG5r"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Overview:\nAuthors introduce a new VAE-based method for learning disentangled representations.\nThe main idea is to apply a “residual learning mechanism”, which resembles an autoregressive model, but here conditioning between sequential steps is done in both latent and input spaces. Namely, for each input, the met... | [
3,
-1,
-1,
-1,
8,
3
] | [
4,
-1,
-1,
-1,
4,
4
] | [
"iclr_2020_Sklyn6EYvH",
"rylCNq2htS",
"HJgdDxAatS",
"SyxsGxuG5r",
"iclr_2020_Sklyn6EYvH",
"iclr_2020_Sklyn6EYvH"
] |
iclr_2020_Hkex2a4FPr | On Variational Learning of Controllable Representations for Text without Supervision | The variational autoencoder (VAE) has found success in modelling the manifold of natural images on certain datasets, allowing meaningful images to be generated while interpolating or extrapolating in the latent code space, but it is unclear whether similar capabilities are feasible for text considering its discrete nature. In this work, we investigate the reason why unsupervised learning of controllable representations fails for text. We find that traditional sequence VAEs can learn disentangled representations through their latent codes to some extent, but they often fail to properly decode when the latent factor is being manipulated, because the manipulated codes often land in holes or vacant regions in the aggregated posterior latent space, which the decoding network is not trained to process. Both as a validation of the explanation and as a fix to the problem, we propose to constrain the posterior mean to a learned probability simplex, and perform manipulation within this simplex. Our proposed method mitigates the latent vacancy problem and achieves the first success in unsupervised learning of controllable representations for text. Empirically, our method significantly outperforms unsupervised baselines and is competitive with strong supervised approaches on text style transfer. Furthermore, when switching the latent factor (e.g., topic) during long sentence generation, our proposed framework can often complete the sentence in a seemingly natural way -- a capability that has never been attempted by previous methods. | reject | This paper analyzes the behavior of VAEs for learning controllable text representations and uses this insight to constrain the posterior space, introducing a regularization term and a structured reconstruction term to the standard VAE loss. 
Experiments show the proposed method improves over unsupervised baselines, although it still underperforms supervised approaches in text style transfer.
The paper had some issues with presentation, as pointed out by R1 and R3. In addition, it missed citations to much prior work. Some of these issues were addressed after the rebuttal, but I still think the paper needs to be more self-contained (e.g., include details of the evaluation protocols in the appendix instead of citing another paper).
In an internal discussion, R1 still has some concerns regarding whether the negative log likelihood is less affected by manipulations in the constrained space compared to beta-VAE. In particular, the concern is about whether the magnitude of the manipulation is comparable across models, a concern also shared by R3. R1 also thinks some of the generated samples are not very convincing.
This is a borderline paper with some interesting insights that tackles an important problem. However, due to its shortcoming in the current state, I recommend to reject the paper. | train | [
"SyeCjXKOir",
"ryxHGlrmsB",
"SkxB5krGir",
"SyxRLJVfjS",
"SJxEZ07ziB",
"H1x3gffxir",
"r1gTTOdaFB",
"BkeZKd3atr",
"BJxK6Ipc9r"
] | [
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Dear reviewers and all,\n\nThanks for the constructive comments from all the reviewers. An updated version has been uploaded considering all the reviewers’ concerns. We believe this version is much clearer in written with less ambiguity, which would not be possible without all the useful feedback from the reviewe... | [
-1,
-1,
-1,
-1,
-1,
-1,
8,
3,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"iclr_2020_Hkex2a4FPr",
"H1x3gffxir",
"BkeZKd3atr",
"BJxK6Ipc9r",
"r1gTTOdaFB",
"iclr_2020_Hkex2a4FPr",
"iclr_2020_Hkex2a4FPr",
"iclr_2020_Hkex2a4FPr",
"iclr_2020_Hkex2a4FPr"
] |
iclr_2020_HygW26VYwS | Attention Privileged Reinforcement Learning for Domain Transfer | Applying reinforcement learning (RL) to physical systems presents notable challenges, given requirements regarding sample efficiency, safety, and physical constraints compared to simulated environments. To enable transfer of policies trained in simulation, randomising simulation parameters leads to more robust policies, but also results in significantly extended training time. In this paper, we exploit access to privileged information (such as environment states) often available in simulation, in order to improve and accelerate learning over randomised environments. We introduce Attention Privileged Reinforcement Learning (APRiL), which equips the agent with an attention mechanism and makes use of state information in simulation, learning to align attention between state- and image-based policies while additionally sharing generated data. During deployment we can apply the image-based policy to remove the requirement of access to additional information. We experimentally demonstrate accelerated and more robust learning on a number of diverse domains, leading to improved final performance for environments both within and outside the training distribution. | reject | This paper tackles the problem of transferring an RL policy learned in simulation to the real world (sim2real). More specifically, the authors address the situation where the agent can access privileged information available during simulation, for example access to exact states instead of compressed representations. They perform experiments in various simulated domains where different aspects of the environment are modified to evaluate generalization.
Major concerns remain following the rebuttal. First, it is not clear how realistic it is to assume access to such privileged information in practice. Second, the experiments are not convincing since the algorithms do not appear to have reached convergence in the presented results. Finally, such a sim2real work would greatly benefit from real-world experiments.
In light of the above issues, I recommend to reject this paper. | train | [
"rJxee2j0tB",
"Syxx3LI3oS",
"HJxhuQEniB",
"rklw_naooS",
"SkgOApDioB",
"SkeU0j8iiB",
"rylDk-LjoS",
"SylbBQXojH",
"Skgcwy1osr",
"B1x4yO05sr",
"Skxf5LA5sr",
"HksyIAqjr",
"ryew8BLsFH",
"B1gwk6-RFH"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Summary - Building on top of the domain randomization principle (used to train policies robust to domain-variations) to learn policies which transfer well to new domains, the paper proposes an approach to improve and speed-up learning / training over randomized environments. The paper operates in a settings where ... | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
3
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
"iclr_2020_HygW26VYwS",
"HJxhuQEniB",
"SkgOApDioB",
"Skxf5LA5sr",
"rylDk-LjoS",
"SylbBQXojH",
"B1x4yO05sr",
"Skgcwy1osr",
"B1gwk6-RFH",
"ryew8BLsFH",
"rJxee2j0tB",
"iclr_2020_HygW26VYwS",
"iclr_2020_HygW26VYwS",
"iclr_2020_HygW26VYwS"
] |
iclr_2020_Byl-264tvr | Improving End-to-End Object Tracking Using Relational Reasoning | Relational reasoning, the ability to model interactions and relations between objects, is valuable for robust multi-object tracking and pivotal for trajectory prediction. In this paper, we propose MOHART, a class-agnostic, end-to-end multi-object tracking and trajectory prediction algorithm, which explicitly accounts for permutation invariance in its relational reasoning. We explore a number of permutation invariant architectures and show that multi-headed self-attention outperforms the provided baselines and better accounts for complex physical interactions in a challenging toy experiment. We show on three real-world tracking datasets that adding relational reasoning capabilities in this way increases the tracking and trajectory prediction performance, particularly in the presence of ego-motion, occlusions, crowded scenes, and faulty sensor inputs. To the best of our knowledge, MOHART is the first fully end-to-end multi-object tracking from vision approach applied to real-world data reported in the literature. | reject | The authors propose an end-to-end object tracker by exploiting the attention mechanism. Two reviewers recommend rejection, while the last reviewer is more positive. The concerns brought up are novelty (last reviewer), and experiments (second reviewer). Furthermore, the authors seem to overclaim their contribution. There indeed are end-to-end multi-object trackers, see Frossard & Urtasun's work for example. This work needs to be cited, and possibly a comparison is needed. Since the paper did not receive favourable reviews and there are additional citations missing, this paper cannot be accepted in current form. The authors are encouraged to strengthen their work and resubmit to a future venue. | val | [
"rJl6YjZhjB",
"rJlOWZYujS",
"S1guPxOuor",
"BkltsJddiH",
"rkeGsCPuiB",
"BkgBQO1aFH",
"S1xCmqbaFr",
"S1lqFdj0tS"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Couldn't agree more! :)",
"Agreed re structural novelty. Also I actually meant to write \"The [architectural] novelty is\"... to be clear, I don't think we as a community should be optimizing for architectural novelty, which is part of the reason I voted to accept this paper. Good empirical evaluations of novel ... | [
-1,
-1,
-1,
-1,
-1,
6,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
1,
3,
1
] | [
"rJlOWZYujS",
"rkeGsCPuiB",
"S1lqFdj0tS",
"S1xCmqbaFr",
"BkgBQO1aFH",
"iclr_2020_Byl-264tvr",
"iclr_2020_Byl-264tvr",
"iclr_2020_Byl-264tvr"
] |
iclr_2020_rJxG3pVKPB | Translation Between Waves, wave2wave | The understanding of sensor data has been greatly improved by advanced deep learning methods with big data. However, available sensor data in the real world are still limited, which is called the opportunistic sensor problem. This paper proposes a new variant of the neural machine translation model seq2seq to deal with continuous signal waves by introducing the window-based (inverse-) representation to adaptively represent partial shapes of waves and the iterative back-translation model for high-dimensional data. Experimental results are shown for two real-life datasets: earthquake and activity translation. The performance improvement for one-dimensional data was about 46% in test loss and that for high-dimensional data was about 1625% in perplexity with regard to the original seq2seq.
| reject | The paper considers the task of sequence to sequence modelling with multivariate, real-valued time series.
The authors propose an encoder-decoder based architecture that operates on fixed windows of the original signals.
The reviewers unanimously criticise the lack of novelty in this paper and the lack of comparison to existing baselines.
While Rev #1 positively highlights the human evaluation contained in the experiments, they nevertheless do not think this paper is good enough for publication as is.
The authors did not submit a rebuttal.
I therefore recommend to reject the paper. | test | [
"S1x8MOPqYr",
"S1ekz4gaKS",
"H1ltkbSRKH"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose a variant of sequence to sequence models that operates on the fixed segments (waves). The paper contains significant flaws both in its writing and its experimental setup.\n\n1) Authors write: \"For example in human activity logs, the video data can be missing in bathrooms by ethical reasons but... | [
1,
1,
3
] | [
5,
3,
4
] | [
"iclr_2020_rJxG3pVKPB",
"iclr_2020_rJxG3pVKPB",
"iclr_2020_rJxG3pVKPB"
] |
iclr_2020_SJeX2aVFwH | Project and Forget: Solving Large Scale Metric Constrained Problems | Given a set of distances amongst points, determining what metric representation is most “consistent” with the input distances or the metric that captures the relevant geometric features of the data is a key step in many machine learning algorithms. In this paper, we focus on metric constrained problems, a class of optimization problems with metric constraints. In particular, we identify three types of metric constrained problems: metric nearness Brickell et al. (2008), weighted correlation clustering on general graphs Bansal et al. (2004), and metric learning Bellet et al. (2013); Davis et al. (2007). Because of the large number of constraints in these problems, however, researchers have been forced to restrict either the kinds of metrics learned or the size of the problem that can be solved.
We provide an algorithm, PROJECT AND FORGET, that uses Bregman projections with cutting planes, to solve metric constrained problems with many (possibly exponentially) inequality constraints. We also prove that our algorithm converges to the global optimal solution. Additionally, we show that the optimality error (L2 distance of the current iterate to the optimal) asymptotically decays at an exponential rate. We show that using our method we can solve large problem instances of three types of metric constrained problems, out-performing all state of the art methods with respect to CPU times and problem sizes. | reject | Quoting from Reviewer2: "The paper considers the problem of optimizing convex functions under metric constraints. The main challenge is that expressing all metric constraints on n points requiries O(n^3) constraints. The paper proposes a “project and forget” approach which is essentially is based on cyclic Bregman projections but with a twist that some of the constraints are forgotten." The reviewers were split on this submission, with two arguing for weak acceptance and one arguing for rejection. Purely based on scores, this paper is borderline. It was pointed out by multiple reviewers that the method is not very novel. In particular it effectively works as an active set method. It appears to be very effective in this setting, but the basic algorithm does not differ in structure from any active set method, for which removal of inactive constraints is considered standard (see even the wikipedia page on active set methods). | train | [
"rkgtJEumsS",
"SJgf9Wu7iH",
"HkgXat1QoS",
"HylCsY1Qsr",
"SyeUWqymiH",
"rygwy91QsB",
"SkeMSMYctS",
"H1lYA92yiB",
"ryxpijikjH",
"S1xchXF0tB",
"B1l78StUcB"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for taking the time and reading our work. We agree that the main strength of our paper lies in the experimental results that we have obtained. However, that is not to say that the theoretical results are unimportant. \n\nThe Bregman method has existed for a long time, and lots of research work has been d... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
3,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
1,
-1,
-1,
3,
4
] | [
"B1l78StUcB",
"SkeMSMYctS",
"S1xchXF0tB",
"S1xchXF0tB",
"S1xchXF0tB",
"S1xchXF0tB",
"iclr_2020_SJeX2aVFwH",
"ryxpijikjH",
"SkeMSMYctS",
"iclr_2020_SJeX2aVFwH",
"iclr_2020_SJeX2aVFwH"
] |
iclr_2020_SJlVn6NKPB | Representation Learning for Remote Sensing: An Unsupervised Sensor Fusion Approach | In the application of machine learning to remote sensing, labeled data is often scarce or expensive, which impedes the training of powerful models like deep convolutional neural networks. Although unlabeled data is abundant, recent self-supervised learning approaches are ill-suited to the remote sensing domain. In addition, most remote sensing applications currently use only a small subset of the multi-sensor, multi-channel information available, motivating the need for fused multi-sensor representations. We propose a new self-supervised training objective, Contrastive Sensor Fusion, which exploits coterminous data from multiple sources to learn useful representations of every possible combination of those sources. This method uses information common across multiple sensors and bands by training a single model to produce a representation that remains similar when any subset of its input channels is used. Using a dataset of 47 million unlabeled coterminous image triplets, we train an encoder to produce semantically meaningful representations from any possible combination of channels from the input sensors. These representations outperform fully supervised ImageNet weights on a remote sensing classification task and improve as more sensors are fused. | reject | The authors present a method for learning representations of remote sensing images from multiple views. The main idea is to use the InfoNCE loss to learn from multiple views of the data.
The reviewers had a few concerns about this work which were not adequately addressed by the authors. I have summarised these below and would strongly recommend that the authors address these in subsequent submissions:
1) Experiments on a single dataset and a very specific task: The authors should present a more convincing argument about why the chosen dataset and task are challenging and important for demonstrating the main ideas presented in their work. Further, they should also report results on additional datasets suggested by the reviewers.
2) Comparisons with existing works: The reviewers suggested several existing works for comparison. The authors agreed that these were relevant and important but haven't done this comparison yet. Without such a comparison it is hard to evaluate the main contributions of this work.
Based on the above objections raised by the reviewers, I recommend that the paper should not be accepted. | train | [
"SylM88mHoB",
"B1xNtSz1qr",
"H1xLeX13iB",
"HJxXkzajir",
"H1gDeb6sjH",
"H1e3XLhVtr"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper presents an approach to create unsupervised representations of remote sensing images. The essential idea is to enforce similarity between representations of multiple views obtained by subsetting channels from multiple co-terminus sensor outputs. This is implemented by training with the InfoNCE loss on h... | [
3,
3,
-1,
-1,
-1,
3
] | [
3,
4,
-1,
-1,
-1,
4
] | [
"iclr_2020_SJlVn6NKPB",
"iclr_2020_SJlVn6NKPB",
"H1e3XLhVtr",
"SylM88mHoB",
"B1xNtSz1qr",
"iclr_2020_SJlVn6NKPB"
] |
iclr_2020_H1gS364FwS | Event extraction from unstructured Amharic text | In information extraction, event extraction is the task of extracting specific knowledge about certain incidents from texts. Event extraction has been done on texts in different languages, but not on the Semitic language Amharic. In this study, we present a system that extracts events from unstructured Amharic text. The system is designed as an integration of supervised machine learning and rule-based approaches; we call it a hybrid system. The supervised machine learning model detects events in the text, and then the handcrafted rules of the rule-based component extract the events from the text. The hybrid system has been compared with the standalone rule-based method that is well known for event extraction, and the study has shown that the hybrid system outperforms the standalone rule-based method. For event extraction, we extract event arguments. Event arguments identify event-triggering words or phrases that clearly express the occurrence of the event. The event argument attributes can be verbs, nouns, occasionally adjectives such as ሰርግ/wedding, and time as well. | reject | This paper performs event extraction from Amharic texts. To this end, the authors prepared a novel Amharic corpus and used a hybrid of rule-based and learning-based systems.
Overall, while all reviewers admit the importance of addressing a low-resource language and the value of the novel Amharic corpus, they are not satisfied with the quality of the current paper as a scientific work.
Most importantly, although the attempt at event extraction might be new for Amharic, there have been many works on other languages. It should be clearly presented what the non-trivial language-specific challenges for Amharic are and how they are solved; otherwise it seems to be merely an application of existing techniques to a new dataset. Also, all reviewers are fairly concerned about the presentation and clarity of the paper. Unfortunately, no revised paper was uploaded, so we cannot confirm how the authors' response is reflected. For these reasons, I would like to recommend rejection.
| test | [
"rkxB8SMZcS",
"SygDYQNctS",
"Byg8zj4RFS"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper studies extracting events from unstructured text, specifically on the low resource language Amharic. The paper proposes to combine rule-based based with a learning-based approach. \n\nStrength\n-\tLooks at the low resource language Amharic and mines a large corpus of text.\n-\tThe paper compares 3 learni... | [
3,
1,
1
] | [
1,
5,
4
] | [
"iclr_2020_H1gS364FwS",
"iclr_2020_H1gS364FwS",
"iclr_2020_H1gS364FwS"
] |
iclr_2020_HkxU2pNYPH | Sticking to the Facts: Confident Decoding for Faithful Data-to-Text Generation | Neural conditional text generation systems have achieved significant progress in recent years, showing the ability to produce highly fluent text. However, the inherent lack of controllability in these systems allows them to hallucinate factually incorrect phrases that are unfaithful to the source, making them often unsuitable for many real world systems that require high degrees of precision. In this work, we propose a novel confidence oriented decoder that assigns a confidence score to each target position. This score is learned in training using a variational Bayes objective, and can be leveraged at inference time using a calibration technique to promote more faithful generation. Experiments on a structured data-to-text dataset -- WikiBio -- show that our approach is more faithful to the source than existing state-of-the-art approaches, according to both automatic metrics and human evaluation. | reject | This paper proposes to improve the faithfulness of data-to-text generation models, through an attention-based confidence measure and a variational approach for learning the model. There is some reviewer disagreement on this paper. All agree that the problem is important and ideas interesting, while some reviewers feel that the methods are insufficiently justified and/or the results unconvincing. In addition, there is not much technical novelty here from a machine learning perspective; the contribution is to a specific task. Overall I think this paper would fit in much better in an NLP conference/journal. | train | [
"SJe9qyVjjB",
"H1gi3YfjoS",
"H1xAgeQjir",
"BJedkFyoir",
"ryxoxHnqsr",
"HkeiSvhqiB",
"HJxVII3qoS",
"Skl_CSncsH",
"Bke0N0jcir",
"HklYjjdaKH",
"BJePCEiCKH",
"rJgi9SljqS",
"HylX_BRscS"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for the comments!\n\nWe will consider further updating our paper to reflect the additional results and discussions here.\n\nRegarding human eval, just a quick notice that we have reported inter-annotator agreement in Section 5.1: We assigned 5 raters who are well-trained on this task, and conducted 5-way an... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3,
8,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
1,
4
] | [
"H1xAgeQjir",
"iclr_2020_HkxU2pNYPH",
"Skl_CSncsH",
"iclr_2020_HkxU2pNYPH",
"HylX_BRscS",
"HklYjjdaKH",
"BJePCEiCKH",
"rJgi9SljqS",
"iclr_2020_HkxU2pNYPH",
"iclr_2020_HkxU2pNYPH",
"iclr_2020_HkxU2pNYPH",
"iclr_2020_HkxU2pNYPH",
"iclr_2020_HkxU2pNYPH"
] |
iclr_2020_Byxv2pEKPH | Farkas layers: don't shift the data, fix the geometry | Successfully training deep neural networks often requires either {batch normalization}, appropriate {weight initialization}, both of which come with their own challenges. We propose an alternative, geometrically motivated method for training. Using elementary results from linear programming, we introduce Farkas layers: a method that ensures at least one neuron is active at a given layer. Focusing on residual networks with ReLU activation, we empirically demonstrate a significant improvement in training capacity in the absence of batch normalization or methods of initialization across a broad range of network sizes on benchmark datasets. | reject | This paper proposes a new normalization scheme that attempts to prevent all units in a ReLU layer from being dead. The experimental results show that this normalization can effectively be used to train deep networks, though not as well as batch normalization. A significant issue is that the paper does not sufficiently establish that their explanation for the success of Farkas layer is valid. For example, do networks usually have layers with only inactive units in practice? | train | [
"B1ehhpMvoS",
"SkxPiafDoH",
"SJl7qaGwoS",
"B1lkFTxotH",
"rklkPJJnKS",
"B1xqcPXpKr"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for the thorough critiques of our paper; we will address the comments in order.\n1) Although they are not the same exactly, it was simply to provide context.\n\n2) Agreed though the scaling propagates forward?\n\n3) This would be interesting --- would you be able to provide a source?\n\n4) Agreed!\n\n5) ... | [
-1,
-1,
-1,
1,
3,
3
] | [
-1,
-1,
-1,
4,
4,
5
] | [
"B1lkFTxotH",
"rklkPJJnKS",
"B1xqcPXpKr",
"iclr_2020_Byxv2pEKPH",
"iclr_2020_Byxv2pEKPH",
"iclr_2020_Byxv2pEKPH"
] |
iclr_2020_Hyx_h64Yvr | Kronecker Attention Networks | Attention operators have been applied on both 1-D data like texts and higher-order data such as images and videos. Use of attention operators on high-order data requires flattening of the spatial or spatial-temporal dimensions into a vector, which is assumed to follow a multivariate normal distribution. This not only incurs excessive requirements on computational resources, but also fails to preserve structures in data. In this work, we propose to avoid flattening by developing Kronecker attention operators (KAOs) that operate on high-order tensor data directly. KAOs lead to dramatic reductions in computational resources. Moreover, we analyze KAOs theoretically from a probabilistic perspective and point out that KAOs assume the data follow matrix-variate normal distributions. Experimental results show that KAOs reduce the amount of required computational resources by a factor of hundreds, with larger factors for higher-dimensional and higher-order data. Results also show that networks with KAOs outperform models without attention, while achieving performance competitive with original attention operators. | reject | This submission has been assessed by three reviewers, who scored it 3/3/3. The main criticisms include a lack of motivation for Sections 3.1 and 3.2, comparisons only to regular self-attention without encompassing more work on this topic, and a missing connection between Theorem 1 and the rest of the paper. Finally, there is a strong resemblance to another submission by the same authors, which also raises questions about a potential dual submission. Even excluding the last argument, the lack of responses to reviewers does not help this case. Thus, this paper cannot be accepted by ICLR 2020. | train | [
"Syl2rxLq_H",
"BkgW4ZBRYH",
"Syg9xOqCKB"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"In this paper, the authors propose to reduce the memory and computation complexity of attention networks applied to 2D data (images) by using, in the attention operator, the mean over the rows and columns of images instead of its vectorized version.\nThe paper is relatively well written, based on intuitive ideas a... | [
3,
3,
3
] | [
4,
4,
1
] | [
"iclr_2020_Hyx_h64Yvr",
"iclr_2020_Hyx_h64Yvr",
"iclr_2020_Hyx_h64Yvr"
] |
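The reviewer's one-line summary of the method — use the means over the rows and columns of a feature map in the attention operator, instead of its flattened version — can be made concrete with a small sketch. This is an illustrative reconstruction of the idea, not the paper's actual code; function names and the way row/column outputs are broadcast back to the grid are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def flat_attention(x):
    """Standard self-attention over the flattened map: O((h*w)^2) scores."""
    c, h, w = x.shape
    tokens = x.reshape(c, h * w).T                  # (h*w, c)
    scores = softmax(tokens @ tokens.T / np.sqrt(c))
    return (scores @ tokens).T.reshape(c, h, w)

def kronecker_style_attention(x):
    """Attend over the h row-means and w column-means: O((h+w)^2) scores."""
    c, h, w = x.shape
    row_mean = x.mean(axis=2).T                     # (h, c), mean over width
    col_mean = x.mean(axis=1).T                     # (w, c), mean over height
    tokens = np.concatenate([row_mean, col_mean], axis=0)   # (h+w, c)
    out = softmax(tokens @ tokens.T / np.sqrt(c)) @ tokens  # (h+w, c)
    rows, cols = out[:h], out[h:]
    # broadcast the row and column responses back onto the h-by-w grid
    return (rows[:, None, :] + cols[None, :, :]).transpose(2, 0, 1)

x = np.random.randn(8, 32, 32)
print(flat_attention(x).shape, kronecker_style_attention(x).shape)
# (8, 32, 32) (8, 32, 32)
```

The point of the comparison is the score-matrix size: for a 32×32 map, the flattened variant builds a 1024×1024 attention matrix, while the row/column-mean variant builds only a 64×64 one.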
iclr_2020_rJeO3aVKPB | Faster Neural Network Training with Data Echoing | In the twilight of Moore's law, GPUs and other specialized hardware accelerators have dramatically sped up neural network training. However, earlier stages of the training pipeline, such as disk I/O and data preprocessing, do not run on accelerators. As accelerators continue to improve, these earlier stages will increasingly become the bottleneck. In this paper, we introduce “data echoing,” which reduces the total computation used by earlier pipeline stages and speeds up training whenever computation upstream from accelerators dominates the training time. Data echoing reuses (or “echoes”) intermediate outputs from earlier pipeline stages in order to reclaim idle capacity. We investigate the behavior of different data echoing algorithms on various workloads, for various amounts of echoing, and for various batch sizes. We find that in all settings, at least one data echoing algorithm can match the baseline's predictive performance using less upstream computation. We measured a factor of 3.25 decrease in wall-clock time for ResNet-50 on ImageNet when reading training data over a network. | reject | This paper presents a simple trick of taking multiple SGD steps on the same data to improve distributed processing of data and reclaim idle capacity. The underlying idea seems interesting enough, but the reviewers had several concerns.
1. The method is a simple trick (R2). I don't think this is a good reason to reject the paper, as R3 also noted, so I think this is fine.
2. There are not clear application cases (R3). The authors have given a reasonable response to this, in indicating that this method is likely more useful for prototyping than for well-developed applications. This makes sense to me, but both R3 and I felt that this was insufficiently discussed in the paper, despite seeming quite important to arguing the main point.
3. The results look magical, or too good to be true without additional analysis (R1 and R3). This concerns me the most, and I'm not sure that this point has been addressed by the rebuttal. In addition, it seems that extensive hyperparameter tuning has been performed, which also somewhat goes against the idea that "this is good for prototyping". If it's good for prototyping, then ideally it should be a method where hyperparameter tuning is not very necessary.
4. The connections with theoretical understanding of SGD are not well elucidated (R1). I also agree this is a problem, but perhaps not a fatal one -- very often simple heuristics prove effective, and then are analyzed later in follow-up papers.
Honestly, this paper is somewhat borderline, but given the large number of good papers that have been submitted to ICLR this year, I'm recommending that this not be accepted at this time, but certainly hope that the authors continue to improve the paper towards a final publication at a different venue.
| train | [
"SkltZkCOtr",
"Sklz2DCVjH",
"ryxqrDRNoS",
"BJlamDAVsS",
"HklnaKqnYH",
"S1eXhCVAKH"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper discusses the use of data echoing (re-passing data fetched from drive or cloud) to maximize GPU usage and reduce reliance on data transportation time. The schemes basically are: reusing data at the example level, after data augmentation, or after batching. The experiments measure how much fresh data is ... | [
6,
-1,
-1,
-1,
3,
3
] | [
1,
-1,
-1,
-1,
1,
4
] | [
"iclr_2020_rJeO3aVKPB",
"HklnaKqnYH",
"S1eXhCVAKH",
"SkltZkCOtr",
"iclr_2020_rJeO3aVKPB",
"iclr_2020_rJeO3aVKPB"
] |
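The "simple trick" the meta-reviewer refers to is easy to state in code: repeat each output of a slow upstream pipeline stage several times before it reaches the accelerator. The sketch below is illustrative (the function name and iterator-based framing are assumptions, not the paper's implementation); where the repeater is placed determines the variant — after reading gives example echoing, after batching gives batch echoing.

```python
def data_echoing(pipeline, echo_factor):
    """Repeat each item from the (slow) upstream pipeline `echo_factor`
    times, so the fast downstream SGD step is never starved for input."""
    for item in pipeline:
        for _ in range(echo_factor):
            yield item

# Illustrative use: upstream yields batches slowly; SGD consumes them fast.
slow_batches = iter([f"batch{i}" for i in range(3)])
echoed = data_echoing(slow_batches, echo_factor=2)
print(list(echoed))
# ['batch0', 'batch0', 'batch1', 'batch1', 'batch2', 'batch2']
```

With an echo factor of e, the upstream stages only need to produce 1/e of the items the optimizer consumes, which is exactly the "less upstream computation" claim in the abstract.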
iclr_2020_r1ltnp4KwS | Training Interpretable Convolutional Neural Networks towards Class-specific Filters | Convolutional neural networks (CNNs) have often been treated as “black-box” and successfully used in a range of tasks. However, CNNs still suffer from the problem of filter ambiguity – an intricate many-to-many mapping relationship between filters and features, which undermines the models’ interpretability. To interpret CNNs, most existing works attempt to interpret a pre-trained model, while neglecting to reduce the filter ambiguity hidden behind it. To this end, we propose a simple but effective strategy for training interpretable CNNs. Specifically, we propose a novel Label Sensitive Gate (LSG) structure to enable the model to learn disentangled filters in a supervised manner, in which redundant channels experience a periodic shutdown as they flow through a learnable gate varying with the input labels. To reduce redundant filters during training, LSG is constrained with a sparsity regularization. In this way, the training strategy restricts each filter’s attention to just one or a few classes, i.e., makes it class-specific. Extensive experiments demonstrate the strong performance of our method in generating sparse and highly label-related representations of the input. Moreover, compared to the standard training strategy, our model displays less redundancy and stronger interpretability.
| reject | The paper proposes a method to make the filters of the last conv layer more class-specific. The motivation for this is to improve upon the interpretability of the CNN, which is empirically shown by comparing the class activation maps (CAMs) of regular CNN and the proposed LSG-CNN. While the idea is interesting, one of the concerns from reviewers is about limited applicability of the method, at least the way it is shown in experiments -- a concern that I tend to agree with. As primary goal of the work is improving interpretability of CNNs, authors should test LSG-CNN with some more recent methods for producing the saliency maps other than CAM to convincingly establish the value of the method. Authors also mention lack of hyperparameter tuning and the use of SGD with limited training epochs as a reason for the drop in accuracy. It will be worth spending some effort so the accuracy matches the standard benchmarks -- this will help in arguing more convincingly about practical benefit of the method. | train | [
"BJgVZnT2YB",
"ryg_yQunjr",
"BJl3nmOnjr",
"SkgcifuniS",
"Ske7OZ_hsB",
"H1ljWbOnoS",
"ByxTY1O2sH",
"Hygar1OhiB",
"BJgxfkd2sB",
"S1eMnQEpFr",
"Syx3XXYAtS"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Contributions: The paper proposes a novel Label Sensitive Gate (LSG) structure to enable the model to learn disentangled filters in a supervised manner. The novelty of the paper is to introduce the Label-Sensitive Gate path during the training, on top of the standard training path. This encourages the filters to e... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5
] | [
"iclr_2020_r1ltnp4KwS",
"BJgVZnT2YB",
"BJgVZnT2YB",
"BJgVZnT2YB",
"S1eMnQEpFr",
"Syx3XXYAtS",
"Syx3XXYAtS",
"Syx3XXYAtS",
"Syx3XXYAtS",
"iclr_2020_r1ltnp4KwS",
"iclr_2020_r1ltnp4KwS"
] |
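The core mechanism described in the abstract — a learnable gate that varies with the input label and shuts down channels, under a sparsity penalty — can be sketched as follows. This is a minimal reconstruction from the abstract's description; the parameterization (one logit vector per class), the L1 form of the penalty, and all names are assumptions rather than the paper's actual design.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lsg_forward(features, labels, gate_logits):
    """Label-Sensitive Gate sketch: each class owns a soft on/off gate per
    channel, so channels irrelevant to the input's label are shut down.
    features: (batch, channels); gate_logits: (num_classes, channels)."""
    return features * sigmoid(gate_logits[labels])

def sparsity_penalty(gate_logits, lam=1e-3):
    """L1-style penalty pushing gates toward zero, so each filter ends up
    serving only one or a few classes (class-specific filters)."""
    return lam * sigmoid(gate_logits).sum()

feats = np.random.randn(2, 16)        # activations of 16 channels
logits = np.random.randn(10, 16)      # learnable per-class gate logits
print(lsg_forward(feats, np.array([3, 7]), logits).shape)  # (2, 16)
```

During training, the gated path would be used alongside the standard path (as reviewer comments on the dual-path design suggest), with the penalty added to the loss.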
iclr_2020_Bylthp4Yvr | Dropout: Explicit Forms and Capacity Control | We investigate the capacity control provided by dropout in various machine learning problems. First, we study dropout for matrix sensing, where it induces a data-dependent regularizer that, in expectation, equals the weighted trace-norm of the product of the factors. In deep learning, we show that the data-dependent regularizer due to dropout directly controls the Rademacher complexity of the underlying class of deep neural networks. These developments enable us to give concrete generalization error bounds for the dropout algorithm in both matrix completion as well as training deep neural networks. We evaluate our theoretical findings on real-world datasets, including MovieLens, Fashion MNIST, and CIFAR-10. | reject | The authors study dropout for matrix sensing and deep learning, and show that dropout induces a data-dependent regularizer in both cases. In both cases, dropout controls quantities that yield generalization bounds.
Reviewers raised several concerns, and several of these were vehemently rebutted. The rhetoric of the back and forth slid into unfortunate territory, in my opinion, and I'd prefer not to see this sort of thing happen. On the one hand, I can sympathize with the reviewers trying to argue that (un)related work is not related work. On the other hand, it's best to be generous, or you run into this sort of mess.
In the end, even the expert reviewers were unswayed. I suspect the next version of this paper may land more smoothly.
While many of the technical issues are rebutted, one that caught my attention pertained to the empirical work. Reviewer #4 noticed that the empirical evaluations do not meet the sample complexity requirements for the bounds to be valid (nevermind loose). The response suggests this is simply a fact of making the bounds looser, but I suspect it may also change their form in this regime, potentially erasing the empirical findings. I suggest the authors carefully consider whether all assumptions are met, and relay this more carefully to readers. | train | [
"rJgHz8Xy9B",
"ryeOFjNadB",
"SygPaUS2oH",
"ByetGck3oS",
"SkgNNDsisB",
"ryeEjpcosH",
"Hygx0n5sjB",
"BJeqs6mSiH",
"H1lxhcqd9r"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Post Discussion Update: \n\nThe authors vehemently disagree with my critiques about discussion of / attribution to prior work. They seem to think that the differences from Cavazza et al. [AIStats 2018] would be obvious to \"even someone who has taken a basic course in machine learning.\" However, both Reviewer #... | [
1,
1,
-1,
-1,
-1,
-1,
-1,
3,
1
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2020_Bylthp4Yvr",
"iclr_2020_Bylthp4Yvr",
"ByetGck3oS",
"Hygx0n5sjB",
"ryeOFjNadB",
"H1lxhcqd9r",
"iclr_2020_Bylthp4Yvr",
"iclr_2020_Bylthp4Yvr",
"iclr_2020_Bylthp4Yvr"
] |
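For context on what an "explicit form" of dropout looks like in the factorized setting, the simpler identity from the prior work the reviewers compare against (Cavazza et al., AISTATS 2018, for plain matrix factorization; the paper under review derives a weighted-trace-norm variant for matrix sensing) follows from a bias–variance decomposition of the dropout objective. The statement below is that earlier, unweighted form, reproduced from memory as background rather than from this paper:

```latex
% Dropout on a rank-r factorization UV^T with keep probability \theta:
% column k of U and row k of V^T survive together w.p. \theta, rescaled by 1/\theta.
\mathbb{E}_{\delta}\Big\| A - \tfrac{1}{\theta}\, U \operatorname{diag}(\delta)\, V^{\top} \Big\|_F^2
  \;=\; \big\| A - U V^{\top} \big\|_F^2
  \;+\; \frac{1-\theta}{\theta} \sum_{k=1}^{r} \|u_k\|_2^2\, \|v_k\|_2^2,
\qquad \delta_k \overset{\text{iid}}{\sim} \mathrm{Bernoulli}(\theta).
```

The second term is the induced data-independent regularizer; the contested contribution of the submission is the data-dependent, weighted generalization of this term and the resulting Rademacher-complexity bounds.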
iclr_2020_BJxt2aVFPr | Optimizing Data Usage via Differentiable Rewards | To acquire a new skill, humans learn better and faster if a tutor, based on their current knowledge level, informs them of how much attention they should pay to particular content or practice problems. Similarly, a machine learning model could potentially be trained better with a scorer that “adapts” to its current learning state and estimates the importance of each training data instance. Training such an adaptive scorer efficiently is a challenging problem; in order to precisely quantify the effect of a data instance at a given time during the training, it is typically necessary to first complete the entire training process. To efficiently optimize data usage, we propose a reinforcement learning approach called Differentiable Data Selection (DDS). In DDS, we formulate a scorer network as a learnable function of the training data, which can be efficiently updated along with the main model being trained. Specifically, DDS updates the scorer with an intuitive reward signal: it should up-weigh the data that has a similar gradient with a dev set upon which we would finally like to perform well. Without significant computing overhead, DDS delivers strong and consistent improvements over several strong baselines on two very different tasks of machine translation and image classification. | reject | The paper proposes an iterative learning method that jointly trains both a model and a scorer network that places a non-uniform weights on data points, which estimates the importance of each data point for training. This leads to significant improvement on several benchmarks. The reviewers mostly agreed that the approach is novel and that the benchmark results were impressive, especially on Imagenet. There were both clarity issues about methodology and experiments, as well as concerns about several technical issues. 
The reviewers felt that the rebuttal resolved the majority of minor technical issues, but did not sufficiently clarify the more significant methodological concerns. Thus, I recommend rejection at this time. | val | [
"rkgz2PwHYB",
"HklQA0wciB",
"Byed72zfiS",
"BygqK5GfiB",
"BJerJhMGiB",
"Byl59sfzjB",
"r1lqmp15YS",
"Skgt9dG-9H"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes an iterative method that jointly trains the model and a scorer network that places a non-uniform distribution over data sets. The paper proposes a gradient method to learn the scorer network based on reinforcement learning, which is novel as to what the reviewer knows.\n\nThere are several conc... | [
3,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
1,
3
] | [
"iclr_2020_BJxt2aVFPr",
"Byed72zfiS",
"r1lqmp15YS",
"Skgt9dG-9H",
"Byl59sfzjB",
"rkgz2PwHYB",
"iclr_2020_BJxt2aVFPr",
"iclr_2020_BJxt2aVFPr"
] |
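The reward signal described in the DDS abstract — up-weigh training data whose gradient is similar to the dev-set gradient — reduces to a cosine-similarity computation plus a scorer update. The sketch below is an illustrative simplification (flattened gradient vectors, a multiplicative-weights scorer update in place of the paper's learned scorer network; all names are assumptions):

```python
import numpy as np

def grad_similarity_reward(train_grad, dev_grad):
    """DDS-style reward: cosine similarity between one training example's
    gradient and the gradient of the loss on the held-out dev set."""
    num = float(train_grad @ dev_grad)
    den = np.linalg.norm(train_grad) * np.linalg.norm(dev_grad) + 1e-12
    return num / den

def reweight(rewards, weights, lr=0.1):
    """One illustrative scorer update: an exponentiated-gradient step on
    per-example weights, renormalized to a distribution."""
    w = weights * np.exp(lr * rewards)
    return w / w.sum()

# Three toy examples: gradients aligned with, opposed to, and orthogonal
# to the dev gradient.
dev = np.ones(3)
rewards = np.array([grad_similarity_reward(g, dev)
                    for g in (np.ones(3), -np.ones(3), np.zeros(3))])
weights = reweight(rewards, np.ones(3) / 3)
print(weights.round(3))  # the aligned example gets the largest weight
```

In the actual method, the scorer is a parametric network updated with a REINFORCE-style gradient rather than explicit per-example weights, which is what lets it scale to large datasets.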
iclr_2020_B1xq264YvH | Encoder-Agnostic Adaptation for Conditional Language Generation | Large pretrained language models have changed the way researchers approach discriminative natural language understanding tasks, leading to the dominance of approaches that adapt a pretrained model for arbitrary downstream tasks. However, it is an open question how to use similar techniques for language generation. Early results in the encoder-agnostic setting have been mostly negative. In this work, we explore methods for adapting a pretrained language model to arbitrary conditional input. We observe that pretrained transformer models are sensitive to large parameter changes during tuning. Therefore, we propose an adaptation that directly injects arbitrary conditioning into self attention, an approach we call pseudo self attention. Through experiments on four diverse conditional text generation tasks, we show that this encoder-agnostic technique outperforms strong baselines, produces coherent generations, and is data-efficient. | reject | This paper proposes a method to use a pretrained language model for language generation with arbitrary conditional input (images, text). The main idea, which is called pseudo self-attention, is to incorporate the conditioning input as a pseudo history to a pretrained transformer. Experiments on class-conditional generation, summarization, story generation, and image captioning show the benefit of the proposed approach.
While I think that the proposed approach makes sense, especially for generation from multiple modalities, it would be useful to see the following comparison in the case of conditional generation from one modality (i.e., text-to-text as in summarization and story generation). How does the proposed approach compare to a method that simply concatenates the input and the output? In Figure 1(c), this would mean having the encoder part be pretrained as well, as opposed to randomly initialized, which is possible if the input is also text. I believe this is what R2 is suggesting as well when they mentioned a GPT-2 style model, and I agree this is an important baseline.
This is a borderline paper. However, due to space constraint and the above issues, I recommend to reject the paper. | train | [
"B1lV7ivYoS",
"rJlKDSPdir",
"rkec4BPOsr",
"S1g17SDdiS",
"BygEgCVoYB",
"HyeNNy_Ttr",
"SJg1WdHqqr"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for clarifying this. I agree that the IMDb classification result is an evidence that the approach preserves adequacy, and the improvement comes not just from fluency.",
"We thank the reviewer for their positive comments and their questions.\n\nIn response to the reviewer’s cons: \n\n1. We agree that it... | [
-1,
-1,
-1,
-1,
8,
8,
3
] | [
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"rJlKDSPdir",
"BygEgCVoYB",
"HyeNNy_Ttr",
"SJg1WdHqqr",
"iclr_2020_B1xq264YvH",
"iclr_2020_B1xq264YvH",
"iclr_2020_B1xq264YvH"
] |
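The pseudo self-attention idea summarized in the abstract — inject arbitrary conditioning directly into the pretrained decoder's self-attention — amounts to projecting the conditioning into the decoder's key/value space and prepending it to the history. The sketch below is an illustrative single-head reconstruction; the projection names, shapes, and masking details are assumptions, not the paper's code.

```python
import numpy as np

def pseudo_self_attention(x, cond, Wq, Wk, Wv, Uk, Uv):
    """Pseudo self-attention sketch: the pretrained LM's tokens `x` attend
    over [projected conditioning; themselves], so the conditioning acts
    like extra self-attention history."""
    q = x @ Wq                                        # (t, d)
    k = np.concatenate([cond @ Uk, x @ Wk], axis=0)   # (s + t, d)
    v = np.concatenate([cond @ Uv, x @ Wv], axis=0)
    scores = q @ k.T / np.sqrt(q.shape[-1])           # (t, s + t)
    t, s = x.shape[0], cond.shape[0]
    # causal mask over the text part only; conditioning is always visible
    future = np.triu(np.ones((t, t), dtype=bool), 1)
    scores[:, s:][future] = -1e9
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)
    return w @ v                                      # (t, d)

rng = np.random.default_rng(0)
x, cond = rng.normal(size=(5, 8)), rng.normal(size=(3, 16))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) * 0.1 for _ in range(3))
Uk, Uv = (rng.normal(size=(16, 8)) * 0.1 for _ in range(2))
print(pseudo_self_attention(x, cond, Wq, Wk, Wv, Uk, Uv).shape)  # (5, 8)
```

Only the new projections `Uk`, `Uv` start from scratch; the point of the design is that the pretrained `Wq`, `Wk`, `Wv` are perturbed as little as possible.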
iclr_2020_BJg9hTNKPH | Behavior Regularized Offline Reinforcement Learning | In reinforcement learning (RL) research, it is common to assume access to direct online interactions with the environment. However in many real-world applications, access to the environment is limited to a fixed offline dataset of logged experience. In such settings, standard RL algorithms have been shown to diverge or otherwise yield poor performance. Accordingly, much recent work has suggested a number of remedies to these issues. In this work, we introduce a general framework, behavior regularized actor critic (BRAC), to empirically evaluate recently proposed methods as well as a number of simple baselines across a variety of offline continuous control tasks. Surprisingly, we find that many of the technical complexities introduced in recent methods are unnecessary to achieve strong performance. Additional ablations provide insights into which design choices matter most in the offline RL setting. | reject | This paper is an empirical study of methods to stabilize offline (i.e., batch) RL, where the dataset is available up front and not collected during learning. This can be an important setting in e.g. safety critical or production systems, where learned policies should not be applied on the real system until their performance and safety is verified. Since policies leave the area where training data is present, in such settings poor performance or divergence might result, unless divergence from the reference policy is regularized. This paper studies various methods to perform such regularization.
The reviewers are all very happy about the thoroughness of the empirical work. The work only studies existing methods (and combinations thereof), so the novelty is limited by design. The paper was also considered well written and easy to follow. The results were very similar between the considered regularizers, which somewhat limits the usefulness of the paper as a practical guideline (although at least now we know that perhaps we do not need to spend a lot of time choosing the best among these). Bigger differences were observed between "value penalties" versus "policy regularization". This seems to correspond to theoretical observations by Neu et al (https://arxiv.org/abs/1705.07798, 2017), which is not cited in the manuscript. Although unpublished, I think that work is highly relevant for the current manuscript, and I'd strongly recommend that the authors consider its content. Some minor comments about the paper are given below.
On the balance, the strong point of the paper is the empirical thoroughness and clarity, whereas novelty, significance, and theoretical analysis are weaker points. Due to the high selectivity of ICLR, I unfortunately have to recommend rejection for this manuscript.
I have some minor comments about the contents of the paper:
- The manuscript contains the line: "Under this definition, such a behavior policy πb is always well-defined even
if the dataset was collected by multiple, distinct behavior policies". Wouldn't simply defining the behavior as a mixture of the underlying behavior policies (when known) work equally well?
- The paper mentions several earlier works that regularize policies update using the KL from a reference policy (or to a reference policy). The paper of Peters is cited in this context, although there the constraint is actually on the KL divergence between state-action distributions, resulting in a different type of regularization. | test | [
"Hyed4LRcFB",
"r1e_5WpioS",
"BkgzrZ6jsS",
"rkx1y-aisB",
"rkxX9dh3YB",
"r1gZvxq1qB"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a unifying framework, BRAC, which summarizes the idea and evaluates the effectiveness of recently proposed offline reinforcement learning algorithms, specifically BEAR, BCQ, and KL control. The authors generalize existing offline RL approaches to an actor-critic algorithm that regularizes the l... | [
6,
-1,
-1,
-1,
6,
6
] | [
4,
-1,
-1,
-1,
4,
3
] | [
"iclr_2020_BJg9hTNKPH",
"Hyed4LRcFB",
"rkxX9dh3YB",
"r1gZvxq1qB",
"iclr_2020_BJg9hTNKPH",
"iclr_2020_BJg9hTNKPH"
] |
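The "value penalty" versus "policy regularization" distinction the meta-review highlights is easy to pin down in code: the former subtracts a behavior-divergence term inside the Bellman target itself, the latter only adds it to the policy loss. The sketch below is a toy illustration of that distinction (scalar quantities, a simple biased MMD estimator as one of the divergence choices the paper compares; all names are assumptions):

```python
import numpy as np

def brac_target(r, q_next, divergence, alpha=0.1, gamma=0.99,
                value_penalty=True):
    """BRAC-style Bellman target. With `value_penalty`, the divergence
    from the behavior policy is subtracted inside the target; otherwise
    it would only appear in the separate policy-improvement loss."""
    penalty = alpha * divergence if value_penalty else 0.0
    return r + gamma * (q_next - penalty)

def mmd(x, y, sigma=1.0):
    """Toy (biased) Gaussian-kernel MMD^2 between sampled actions — one
    divergence choice, alongside KL and Wasserstein, in the comparison."""
    def k(a, b):
        d = a[:, None, :] - b[None, :, :]
        return np.exp(-(d ** 2).sum(-1) / (2 * sigma ** 2)).mean()
    return k(x, x) + k(y, y) - 2 * k(x, y)

acts_pi = np.random.randn(16, 2)        # actions from the learned policy
acts_b = np.random.randn(16, 2) + 2.0   # logged behavior actions, far away
print(brac_target(1.0, 5.0, mmd(acts_pi, acts_b)))
```

The empirical finding was that which divergence is used matters less than whether it enters the value target or only the policy loss.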
iclr_2020_Hkgs3aNYDS | Quantum Expectation-Maximization for Gaussian Mixture Models | The Expectation-Maximization (EM) algorithm is a fundamental tool in unsupervised machine learning. It is often used as an efficient way to solve Maximum Likelihood (ML) and Maximum A Posteriori estimation problems, especially for models with latent variables. It is also the algorithm of choice to fit mixture models: generative models that represent unlabelled points originating from k different processes, as samples from k multivariate distributions. In this work we define and use a quantum version of EM to fit a Gaussian Mixture Model. Given quantum access to a dataset of n vectors of dimension d, our algorithm has convergence and precision guarantees similar to the classical algorithm, but the runtime is only polylogarithmic in the number of elements in the training set, and is polynomial in other parameters - as the dimension of the feature space, and the number of components in the mixture. We generalize further the algorithm by fitting any mixture model of base distributions in the exponential family. We discuss the performance of the algorithm on datasets that are expected to be classified successfully by those algorithms, arguing that on those cases we can give strong guarantees on the runtime. | reject | The reviewers were unanimous that this submission is not ready for publication at ICLR in its current form.
Concerns raised include a significant lack of clarity, and the paper not being self-contained. | train | [
"BJxGPh-ssB",
"SJxuKx2Kjr",
"SyealAiFjB",
"rygJ8kqOjB",
"H1guyyqdjH",
"rylIIlEMjH",
"SyxeCwobsB",
"SJlz8BM9KS",
"r1xbMt9AFH",
"HJgrkphAcS"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"In the analysis of the algorithm, we take into the account the computational cost for what we call post-selection, which is the process of taking an algorithm that outputs the correct outcome with some probability and making this probability go to 1 through amplitude amplification. This is NOT the notion of instan... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
1,
1
] | [
"SJxuKx2Kjr",
"H1guyyqdjH",
"rygJ8kqOjB",
"SJlz8BM9KS",
"SJlz8BM9KS",
"SyxeCwobsB",
"HJgrkphAcS",
"iclr_2020_Hkgs3aNYDS",
"iclr_2020_Hkgs3aNYDS",
"iclr_2020_Hkgs3aNYDS"
] |
iclr_2020_Bkln2a4tPB | Customizing Sequence Generation with Multi-Task Dynamical Systems | Dynamical system models (including RNNs) often lack the ability to adapt the sequence generation or prediction to a given context, limiting their real-world application. In this paper we show that hierarchical multi-task dynamical systems (MTDSs) provide direct user control over sequence generation, via use of a latent code z that specifies the customization to the
individual data sequence. This enables style transfer, interpolation and morphing within generated sequences. We show the MTDS can improve predictions via latent code interpolation, and avoid the long-term performance degradation of standard RNN approaches. | reject | This work proposes a dynamical systems model to allow the user to better control sequence generation via the latent z. Reviewers all agreed the that the proposed method is quite interesting. However, reviewers also felt that current evaluations were weak and were ultimately unconvinced by the author rebuttal. I recommend the authors resubmit with a stronger set of experiments as suggested by Reviewers 2 and 3. | train | [
"rkxX8W0k5r",
"HJl92RbisH",
"S1ge6oWsor",
"Bke44FZsoS",
"S1l4pwZijS",
"r1xY4V9cFH",
"S1laleLf9r"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a multi-task dynamical system for sequence generation. The model learns a number of parameters that represents the latent code z. The learned model can generate the customized individual data sequence and provide the smooth interpolation in the sequence space. The experiments on the synthetic d... | [
6,
-1,
-1,
-1,
-1,
6,
6
] | [
4,
-1,
-1,
-1,
-1,
4,
1
] | [
"iclr_2020_Bkln2a4tPB",
"r1xY4V9cFH",
"rkxX8W0k5r",
"S1laleLf9r",
"iclr_2020_Bkln2a4tPB",
"iclr_2020_Bkln2a4tPB",
"iclr_2020_Bkln2a4tPB"
] |
iclr_2020_S1x63TEYvr | Latent Question Reformulation and Information Accumulation for Multi-Hop Machine Reading | Multi-hop text-based question-answering is a current challenge in machine comprehension.
This task requires sequentially integrating facts from multiple passages to answer complex natural language questions.
In this paper, we propose a novel architecture, called the Latent Question Reformulation Network (LQR-net), a multi-hop and parallel attentive network designed for question-answering tasks that require reasoning capabilities.
LQR-net is composed of an association of \textbf{reading modules} and \textbf{reformulation modules}.
The purpose of the reading module is to produce a question-aware representation of the document.
From this document representation, the reformulation module extracts essential elements to calculate an updated representation of the question.
This updated question is then passed to the following hop.
We evaluate our architecture on the \hotpotqa question-answering dataset designed to assess multi-hop reasoning capabilities.
Our model achieves competitive results on the public leaderboard and outperforms the best current \textit{published} models in terms of Exact Match (EM) and F1 score.
Finally, we show that an analysis of the sequential reformulations can provide interpretable reasoning paths. | reject | This paper proposes a novel approach, the Latent Question Reformulation Network (LQR-net), a multi-hop and parallel attentive network designed for question-answering tasks that require multi-hop reasoning capabilities. Experiments on the HotPotQA dataset achieve competitive results and outperform the top published system in terms of exact match and F1 scores. However, reviewers noted that the experiments are limited to the unrealistic, closed-domain setting of this dataset and suggested experimenting with other data (such as ComplexWebQuestions). Reviewers were also concerned about the scalability of the system due to the significant amount of computation. They also noted that several previous studies were not included in the paper. The authors acknowledged and made changes according to these suggestions. They also included experiments on the open-domain subset of HotPotQA in their rebuttal; unfortunately, the results are not as good as before. Hence, I suggest rejecting this paper. | train | [
"Hkgqc5-2ir",
"SyxwD5bhor",
"BkeJz9WniS",
"ryeFFYW3jH",
"Bkg7wynJYB",
"HJlGvnVTKS",
"B1gKAOzGcS"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer 3 for its positive feedback.\n\nWe cannot explicitly reconstruct an approximate form of the question in the intermediate hop and it can be the objective of future work.\nHowever, in Section 4.5 and Appendix A we observe the attention of the previous layer that highlights the main parts of th... | [
-1,
-1,
-1,
-1,
8,
3,
3
] | [
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"Bkg7wynJYB",
"HJlGvnVTKS",
"B1gKAOzGcS",
"iclr_2020_S1x63TEYvr",
"iclr_2020_S1x63TEYvr",
"iclr_2020_S1x63TEYvr",
"iclr_2020_S1x63TEYvr"
] |
iclr_2020_SJeC2TNYwB | Unsupervised Out-of-Distribution Detection with Batch Normalization | Likelihood from a generative model is a natural statistic for detecting out-of-distribution (OoD) samples. However, generative models have been shown to assign higher likelihood to OoD samples compared to ones from the training distribution, preventing simple threshold-based detection rules. We demonstrate that OoD detection fails even when using more sophisticated statistics based on the likelihoods of individual samples. To address these issues, we propose a new method that leverages batch normalization. We argue that batch normalization for generative models challenges the traditional \emph{i.i.d.} data assumption and changes the corresponding maximum likelihood objective. Based on this insight, we propose to exploit in-batch dependencies for OoD detection. Empirical results suggest that this leads to more robust detection for high-dimensional images. | reject | The authors observe that batch normalization using the statistics computed from a *test* batch significantly improves out-of-distribution detection with generative models. Essentially, normalizing an OOD test batch using the test batch statistics decreases the likelihood of that batch and thus improves detection of OOD examples. The reviewers seemed concerned with this setting and they felt that it gives a significant advantage over existing methods since they typically deal with single test example. The reviewers thus wanted empirical comparisons to methods designed for this setting, i.e. traditional statistical tests for comparing distributions. Despite some positive discussion, this paper unfortunately falls below the bar for acceptance. The authors added significant experiments and hopefully adding these and additional analysis providing some insight into how the batchnorm is helping would make for a stronger submission to a future conference. | train | [
"rylGjLzaKB",
"HylnIU9AYr",
"SyxETFqe9H"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper attempts to address the problem of out-of-distribution detection with generative models. To do this they assume they are given batches of OOD examples or batches of in-distribution examples, and they detect whether the batch is in- or out-of-distribution. Normally we try to detect if an example is in- o... | [
1,
6,
1
] | [
5,
3,
4
] | [
"iclr_2020_SJeC2TNYwB",
"iclr_2020_SJeC2TNYwB",
"iclr_2020_SJeC2TNYwB"
] |
iclr_2020_B1lCn64tvS | Improving SAT Solver Heuristics with Graph Networks and Reinforcement Learning | We present GQSAT, a branching heuristic in a Boolean SAT solver trained with value-based reinforcement learning (RL) using Graph Neural Networks for function approximation. Solvers using GQSAT are complete SAT solvers that either provide a satisfying assignment or a proof of unsatisfiability, which is required for many SAT applications. The branching heuristic commonly used in SAT solvers today suffers from bad decisions during its warm-up period, whereas GQSAT has been trained to examine the structure of the particular problem instance to make better decisions at the beginning of the search. Training GQSAT is data efficient and does not require elaborate dataset preparation or feature engineering. We train GQSAT on small SAT problems using RL interfacing with an existing SAT solver. We show that GQSAT is able to reduce the number of iterations required to solve SAT problems by 2-3X, and it generalizes to unsatisfiable SAT instances, as well as to problems with 5X more variables than it was trained on. We also show that, to a lesser extent, it generalizes to SAT problems from different domains by evaluating it on graph coloring. Our experiments show that augmenting SAT solvers with agents trained with RL and graph neural networks can improve performance on the SAT search problem. | reject | SAT is NP-complete (Karp, 1972) due to its intractable exhaustive search. As such, heuristics are commonly used to reduce the search space. While usually these heuristics rely on some in-domain expert knowledge, the authors propose a generic method that uses RL to learn a branching heuristic. The policy is parametrized by a GNN, and at each step selects a variable to expand; the process repeats until either a satisfying assignment has been found or the problem has been proved unsatisfiable.
The main result of this is that the proposed heuristic results in fewer steps than VSIDS, a commonly used heuristic.
All reviewers agreed that this is an interesting and well-presented submission. However, both R1 and R2 (rightly, according to my judgment) point out that at the moment the paper seems to be conducting an evaluation that is not entirely fair. Specifically, VSIDS has been implemented within a framework optimized for running time rather than number of iterations, whereas the proposed heuristic is doing the opposite. Moreover, the proposed heuristic is not stress-tested against larger datasets. So, the authors take a heuristic/framework that has been optimized to operate specifically well on large datasets (where running time is what ultimately makes the difference), scale it down to a smaller dataset, and evaluate it on a metric that the proposed algorithm is optimized for. At the same time, they do not consider evaluation on larger datasets and defer all concerns about scalability to the question of industrial use vs. answering ML questions related to whether or not it is possible to “stretch existing RL techniques to learn a branching heuristic”. This is a valid point, and not all techniques need to be super scalable from day 0, but this being ML, we need to make sure that our evaluation criteria are fair and that we are comparing apples to apples when testing hypotheses. As such, I do not feel comfortable suggesting acceptance of this submission, but I do sincerely hope the authors will take the reviewers' feedback and improve the evaluation protocols of their manuscript, resulting in a stronger future submission. | test | [
"Bkl9G2J3iS",
"rJgvOImjiB",
"ryeQjEGwiB",
"rkg944GDoB",
"Hyg1VmGDjr",
"HJgSqc7qFS",
"BJgFQsXaFr",
"BkeMdpNkcr",
"B1x6fKnmKS",
"BkxC0_RlYB"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"We believe that our experimental results are useful for the ICLR community and do not agree that the experimental results \"aren't that significant\". GQSAT data efficiency and zero-shot generalisation to the problems 5x larger in state-action space size and even more (if we consider the horizon length as the meas... | [
-1,
-1,
-1,
-1,
-1,
8,
3,
3,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
-1,
-1
] | [
"rJgvOImjiB",
"Hyg1VmGDjr",
"HJgSqc7qFS",
"BkeMdpNkcr",
"BJgFQsXaFr",
"iclr_2020_B1lCn64tvS",
"iclr_2020_B1lCn64tvS",
"iclr_2020_B1lCn64tvS",
"BkxC0_RlYB",
"iclr_2020_B1lCn64tvS"
] |
iclr_2020_S1ekaT4tDB | Why Convolutional Networks Learn Oriented Bandpass Filters: A Hypothesis | It has been repeatedly observed that convolutional architectures when applied to image understanding tasks learn oriented bandpass filters. A standard explanation of this result is that these filters reflect the structure of the images that they have been exposed to during training: Natural images typically are locally composed of oriented contours at various scales and oriented bandpass filters are matched to such structure. The present paper offers an alternative explanation based not on the structure of images, but rather on the structure of convolutional architectures. In particular, complex exponentials are the eigenfunctions of convolution. These eigenfunctions are defined globally; however, convolutional architectures operate locally. To enforce locality, one can apply a windowing function to the eigenfunctions, which leads to oriented bandpass filters as the natural operators to be learned with convolutional architectures. From a representational point of view, these filters allow for a local systematic way to characterize and operate on an image or other signal. | reject | This paper proposes an alternative explanation of the emergence of oriented bandpass filters in convolutional networks: rather than reflecting observed structure in images, these filters would be a consequence of the convolutional architecture itself and its eigenfunctions.
Reviewers agree that the mathematical angle taken by the paper is interesting; however, they also point out that crucial prior work making the same points exists, and that more thorough insights and analyses would be needed to make a more solid paper.
Given the closeness to prior work, we cannot recommend acceptance in this form. | train | [
"BkeI98tCYB",
"HygnHAMx5B",
"SJlVT-tO9H",
"BkxHYpOFcB"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This short, interesting paper provides a theoretical analysis to explain why we may expect to see bandpass oriented filters arise as a result of the convolutional structure of deep networks. The explanation boils down to the fact that the eigenfunctions of convolutions correspond to bandpass filters. This can expl... | [
3,
3,
1,
3
] | [
5,
1,
5,
3
] | [
"iclr_2020_S1ekaT4tDB",
"iclr_2020_S1ekaT4tDB",
"iclr_2020_S1ekaT4tDB",
"iclr_2020_S1ekaT4tDB"
] |
iclr_2020_Bygka64KPH | Semi-Supervised Few-Shot Learning with Prototypical Random Walks | Learning from a few examples is a key characteristic of human intelligence that inspired machine learning researchers to build data-efficient AI models. Recent progress has shown that few-shot learning can be improved with access to unlabelled data, known as semi-supervised few-shot learning (SS-FSL). We introduce an SS-FSL approach, dubbed Prototypical Random Walk Networks (PRWN), built on top of Prototypical Networks (PN). We develop a random walk semi-supervised loss that enables the network to learn representations that are compact and well-separated. Our work is related to the very recent development of graph-based approaches for few-shot learning. However, we show that compact and well-separated class embeddings can be achieved by our prototypical random walk notion without needing additional graph-NN parameters or requiring a transductive setting where a collective test set is provided. Our model outperforms prior art in most benchmarks, with significant improvements in some cases. For example, in a mini-ImageNet 5-shot classification task, we obtain 69.65% accuracy compared to the 64.59% state-of-the-art. Our model, trained with 40% of the data as labelled, compares competitively against fully supervised prototypical networks trained on 100% of the labels, even outperforming them in the 1-shot mini-ImageNet case with 50.89% vs. 49.4% accuracy. We also show that our model is resistant to distractors, unlabeled data that does not belong to any of the training classes, hence reflecting robustness to labelled/unlabelled class distribution mismatch. We also performed a challenging discriminative power test, showing a relative improvement over the baseline of 14% on 20 classes on mini-ImageNet and 60% on 800 classes on Omniglot. | reject | This paper proposed a semi-supervised few-shot learning method, built on top of Prototypical Networks, wherein a regularization term involving a random walk from a prototype to unlabeled samples and back to the same prototype is added. SotA results were obtained in several experiments by using this method. All reviewers agreed that the novelty of the paper is limited compared with Haeusser et al. (2017) and that the analysis and the experiments could be improved. | train | [
"Hyx7le6pYH",
"HkeeGhgaYB",
"HkgzMNtnjH",
"HyeagUK2jB",
"S1euTEthiH",
"Hyx_EMtniB",
"Syg4K6RjFS"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper suggests a method for semi-supervised few-shot learning. It is based on prototypical network, but in addition to the supervised loss a regularisation term that encourages unlabelled sample to be closer to the prototypes. This regularisation term is adapted from Haeusser et al. (2017) and is encouraging ... | [
3,
3,
-1,
-1,
-1,
-1,
3
] | [
4,
4,
-1,
-1,
-1,
-1,
5
] | [
"iclr_2020_Bygka64KPH",
"iclr_2020_Bygka64KPH",
"Hyx7le6pYH",
"Syg4K6RjFS",
"HkeeGhgaYB",
"iclr_2020_Bygka64KPH",
"iclr_2020_Bygka64KPH"
] |
iclr_2020_Bkle6T4YvB | From English to Foreign Languages: Transferring Pre-trained Language Models | Pre-trained models have demonstrated their effectiveness in many downstream natural language processing (NLP) tasks. The availability of multilingual pre-trained models enables zero-shot transfer of NLP tasks from high-resource languages to low-resource ones. However, recent research in improving pre-trained models focuses heavily on English. While it is possible to train the latest neural architectures for other languages from scratch, it is undesirable due to the required amount of compute. In this work, we tackle the problem of transferring an existing pre-trained model from English to other languages under a limited computational budget. With a single GPU, our approach can obtain a foreign BERT-base model within a day and a foreign BERT-large within two days. Furthermore, evaluating our models on six languages, we demonstrate that our models are better than multilingual BERT on two zero-shot tasks: natural language inference and dependency parsing. | reject | This paper proposes a method to transfer a pretrained language model in one language (English) to a new language. The method first learns word embeddings for the new language while keeping the body of the English model fixed, and further refines it in a fine-tuning procedure as a bilingual model. Experiments on XNLI and dependency parsing demonstrate the benefit of the proposed approach.
R3 pointed out that the paper is missing an important baseline, which is a bilingual BERT model. The authors acknowledged this in their rebuttal and ran a preliminary experiment to obtain a first set of results. However, since the main claim of the paper depends on this new experiment, which was not finished by the end of the rebuttal period, it is difficult to accept the paper in its current state. In an internal discussion, R1 also agreed that this baseline is critical to support the paper.
As a result, I recommend to reject this paper for ICLR. I encourage the authors to update their paper with the new experiment for submission to future conferences (given consistent results). | train | [
"r1lhNKE0Yr",
"H1xVY-9Dsr",
"H1e8totDsB",
"SyxkmrNAFH",
"rylWllu1qB"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents a method to efficiently transfer pre-trained english language model to bilingual language model. The obtained representations are evaluated on downstream NLP task (natural language inference and dependency parsing) with state-of-the-art performances.\n\n\nPros:\n\n- Experiments clearly show tha... | [
3,
-1,
-1,
6,
3
] | [
4,
-1,
-1,
4,
4
] | [
"iclr_2020_Bkle6T4YvB",
"rylWllu1qB",
"SyxkmrNAFH",
"iclr_2020_Bkle6T4YvB",
"iclr_2020_Bkle6T4YvB"
] |
iclr_2020_HJgepaNtDS | Learnable Group Transform For Time-Series | We propose to undertake the problem of representation learning for time-series by considering a Group Transform approach. This framework allows us to, first, generalize classical time-frequency transformations such as the Wavelet Transform, and second, to enable the learnability of the representation. While the creation of the Wavelet Transform filter-bank relies on the sampling of the affine group in order to transform the mother filter, our approach allows for non-linear transformations of the mother filter by introducing the group of strictly increasing and continuous functions. The transformations induced by such a group enable us to span a larger class of signal representations. The sampling of this group can be optimized with respect to a specific loss function and thus cast into a Deep Learning architecture. The experiments on diverse time-series datasets demonstrate the expressivity of this framework, which competes with state-of-the-art performances. | reject | This paper received two weak rejects (3) and one accept (8). In the discussion phase, the paper received significant discussion between the authors and reviewers and internally between the reviewers (which is tremendously appreciated). In particular, there was a discussion about the novelty of the contribution and ideas (AnonReviewer3 felt that the ideas presented provided an interesting new thought-provoking perspective) and the strength of the empirical results. None of the reviewers felt really strongly about rejecting, and none would argue strongly against acceptance. However, AnonReviewer3 was not prepared to really champion the paper for acceptance due to a lack of confidence. Unfortunately, the paper falls just below the bar for acceptance. Taking the reviewer feedback into account and adding careful new experiments with strong results would make this a much stronger paper for a future submission. | train | [
"H1e8tSQ2jr",
"H1x2Fqg3sr",
"BygXSjZijH",
"ryxs6AZ9jS",
"H1gfQCe9sH",
"r1lr0HHtjr",
"SylIRA5uiB",
"rJxUSAqOsS",
"S1xAJ69uiB",
"r1gr5RqnYH",
"BJgjzdW3KH",
"r1xHWpnatr"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank you for your consideration and great help to improve the paper.",
"7/ I see, those bias are quite critical for applications. Thanks for the clarification\n\n8/ I see. Thanks for the clarification\n\n9/ Thanks. I've read carefully the paper and I honestly think the paper is more clear and precise.\n\nI w... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
5
] | [
"H1x2Fqg3sr",
"BygXSjZijH",
"ryxs6AZ9jS",
"H1gfQCe9sH",
"r1lr0HHtjr",
"S1xAJ69uiB",
"BJgjzdW3KH",
"r1gr5RqnYH",
"r1xHWpnatr",
"iclr_2020_HJgepaNtDS",
"iclr_2020_HJgepaNtDS",
"iclr_2020_HJgepaNtDS"
] |
iclr_2020_Bke-6pVKvB | Poisoning Attacks with Generative Adversarial Nets | Machine learning algorithms are vulnerable to poisoning attacks: An adversary can inject malicious points into the training dataset to influence the learning process and degrade the algorithm's performance. Optimal poisoning attacks have already been proposed to evaluate worst-case scenarios, modelling attacks as a bi-level optimization problem. Solving these problems is computationally demanding and has limited applicability for some models such as deep networks. In this paper we introduce a novel generative model to craft systematic poisoning attacks against machine learning classifiers by generating adversarial training examples, i.e. samples that look like genuine data points but that degrade the classifier's accuracy when used for training. We propose a Generative Adversarial Net with three components: generator, discriminator, and the target classifier. This approach allows us to model naturally the detectability constraints that can be expected in realistic attacks and to identify the regions of the underlying data distribution that can be more vulnerable to data poisoning. Our experimental evaluation shows the effectiveness of our attack at compromising machine learning classifiers, including deep networks. | reject | This paper proposes a GAN-based approach to producing poisons for neural networks. While the approach is interesting and appreciated by the reviewers, it is a legitimate and recurring criticism that the method is only demonstrated on very toy problems (MNIST and Fashion MNIST).
During the rebuttal stage, the authors added results on CIFAR, although the results on CIFAR were not convincing enough to change the reviewer scores; the SOTA in GANs is sufficient to generate realistic images of cars and trucks (even at the ImageNet scale), while the demonstrated images are sufficiently far from the natural image distribution on CIFAR-10 that it is not clear whether the method benefits from using a GAN. It should be noted that a range of poisoning methods exist that can effectively target CIFAR, and SOTA methods (e.g., poison polytope attacks and backdoor attacks) can even target datasets like ImageNet and CelebA. | train | [
"r1eDh8QAFB",
"B1gwOohtjr",
"SJxAr92tjS",
"S1g_5Y3KoS",
"Skgk1VhAYH",
"ByeZZKSxqB"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"\nThis paper proposed a method pGAN based on Generative Adversarial Networks to generate poisoning examples in order to degrade the performance of classifiers when trained on the poisoned training data. The authors evaluated pGAN on both synthetic datasets and commonly used MNIST and Fashion MNIST datasets in mach... | [
6,
-1,
-1,
-1,
6,
3
] | [
3,
-1,
-1,
-1,
1,
5
] | [
"iclr_2020_Bke-6pVKvB",
"r1eDh8QAFB",
"Skgk1VhAYH",
"ByeZZKSxqB",
"iclr_2020_Bke-6pVKvB",
"iclr_2020_Bke-6pVKvB"
] |
iclr_2020_SylGpT4FPS | Last-iterate convergence rates for min-max optimization | While classic work in convex-concave min-max optimization relies on average-iterate convergence results, the emergence of nonconvex applications such as training Generative Adversarial Networks has led to renewed interest in last-iterate convergence guarantees. Proving last-iterate convergence is challenging because many natural algorithms, such as Simultaneous Gradient Descent/Ascent, provably diverge or cycle even in simple convex-concave min-max settings, and previous work on global last-iterate convergence rates has been limited to the bilinear and convex-strongly concave settings. In this work, we show that the Hamiltonian Gradient Descent (HGD) algorithm achieves linear convergence in a variety of more general settings, including convex-concave problems that satisfy a “sufficiently bilinear” condition. We also prove similar convergence rates for some parameter settings of the Consensus Optimization (CO) algorithm of Mescheder et al. 2017. | reject | This provides a simple analysis of an existing algorithm for min-max optimization under some favorable assumptions. The paper is clean and nice, though unfortunately lands just below borderline.
I urge the authors to continue their interesting work, and amongst other things address the reviewer comments, for example those on stochastic gradient descent. | train | [
"BkgIGYrAKS",
"H1evEoR6YH",
"HkljIvR3tr",
"SJx-LQD2ir",
"rkeDa5noiB",
"r1eb20l9sB",
"rJlZpTg5sr",
"r1ls4aeqoH"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author"
] | [
"*Summary* \n\nThis paper study the convergence of Hamiltonian gradient descent (HGD) on minmax games. The paper show that under some assumption on the cost function of the min max that are (in some sense) weaker than strong convex-concavity. More precisely, they use the ‘bilinearity’ of the objective (due to the i... | [
6,
6,
6,
-1,
-1,
-1,
-1,
-1
] | [
5,
3,
3,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2020_SylGpT4FPS",
"iclr_2020_SylGpT4FPS",
"iclr_2020_SylGpT4FPS",
"rkeDa5noiB",
"rJlZpTg5sr",
"HkljIvR3tr",
"H1evEoR6YH",
"BkgIGYrAKS"
] |